The following is the text of a recent piece I wrote for a research project in which I am involved, but which may well have wider relevance. There is much more that could be written on this, but I hope it provides helpful guidance on some of the more important issues that need to be addressed by researchers with respect to safeguarding and the use of digital technologies, especially following the rapid expansion of their use since the start of the COVID-19 pandemic.
The COVID-19 pandemic has had significant impacts on the lives of most people, with social interactions becoming mediated much more through digital technologies than was heretofore the case. Digital technologies are usually promoted as always having positive impacts, or at worst being benign. This, though, is an unrealistic picture, and their ability to accelerate and accentuate negative as well as positive behaviours and experiences gives rise to significant safeguarding concerns. While the term safeguarding has traditionally been used to refer only to children and vulnerable adults, it is now used much more widely, and in a sense everyone without good knowledge of how to use digital technologies safely is subject to potential harm from their use. The relevant safeguarding issues can broadly be divided into those relating to the self, those relating to others, and those impacting the environment.
Taking care of oneself
Many people have experienced a considerable increase in the time they spend using digital technologies as a result of COVID-19, not all of which is healthy or indeed safe. Those responsible for safeguarding should always check that their staff are able to use digital technologies safely, securely and wisely.
Time out. Try to spend at least one day a week without use of any digital devices or connectivity; learn again the joys of the physical world, and the beauties of nature.
Digital duration. How long do you spend using digital technologies each day? Excessive screen time can be very harmful both to the body and to the mind. Think about installing a screen-time checker that will let you know! Ensure your working environment (desk, screens, chairs) is appropriate for your body. Give your eyes some time to relax and explore distant horizons. Get up and take a short walk at least every hour.
Office hours. Digital tech is frequently used to extend work time, and this has been exacerbated during COVID-19 lockdowns. Set yourself appropriate office hours, and don’t respond to e-mails out of these. Submit a formal complaint if your boss insists you respond to e-mails all hours of the day and night. When working across time zones always ask for arrangements to be made that can include you at appropriate times. Don’t self-exploit (unless you really want to).
Privacy. The more time you spend on digital technologies, the more information you give to others about yourself. You might be happy to be merely a data point, but if not don’t just automatically click on accept when asked about permissions or cookies. Think about using software that limits how much others can find out about you. Use search engines that offer privacy such as DuckDuckGo.
Security. Always install relevant protection (antivirus, web, ransomware, privacy, malicious traffic) and use it (techradar Guide). Create complex passwords. Be aware of scams, spam and phishing attacks, and never simply click on links without first checking that they are safe – especially if they come from people not known to you.
Online conferences and meetings. Be very careful in online calls, especially if you cannot see everyone. In face-to-face meetings it is possible to pick up signs of how people react to what you are saying and thus adjust in real time, but this is impossible with most video calls. It is thus easy to cause offence without meaning to. Don’t feel obliged to attend the thousands of online conferences or meetings that you are invited to – most are a complete waste of time. Only join ones that are critical to what you are doing, or are of real interest. Think about limiting yourself to c. 8 hours of online meetings a week, and spend the rest of your time doing things that are productive and worthwhile!
Use social media carefully. Social media can be great for connecting with people that you want to, but it can also be deeply hurtful and the cause of much violence. Be careful over what you write, and avoid using it if you are angry or tired. If you are trolled, never reply because it only exacerbates the attacks. Don’t just accept anyone as an online friend unless you know who they are. Take time away from social media. Read guides on wise use of social media (such as that produced by Greater Good at Berkeley). Report any abuse or harassment to the appropriate authorities (see how to respond to digital violence).
Behave wisely. Remember that someone (or something), somewhere, is almost certainly tracking and recording in some way everything you do online. Do not be the one who causes harm to others online.
Do to others as you would have them do to you – but remember they are different from you
In many ways, safeguarding advice relating to others is the application of the above principles to everyone else, but especially to members of the team of which you are a part, and all those with whom you are researching. It is crucially important to remember, though, that what is deemed to be acceptable use to some may not be acceptable to others. There is as yet little global agreement on what is acceptable behaviour in using digital technologies.
Within a team
General advice that is often seen as being helpful for avoiding digital harm includes:
Always listen more than you speak in digital meetings – and do listen, rather than doing all the other digital things you need to catch up on;
Never impose one particular technology on everyone in the team – try to reach consensus but if someone will not use one particular app or device, find an alternative solution;
Never expect an immediate response to an e-mail, or on social media – if you wish to send e-mails at four in the morning your time do not expect others in your time-zone to respond;
Dramatically reduce the number of online meetings that you think should be held, especially when working across time-zones – most are an excuse to pretend people are working, most are poorly managed, and it is much more efficient to seek input on policy documents by sharing drafts (if relevant using multi-authoring tools) than it is to do so in an online meeting;
Be accepting of varying cultural digital practices, but make it clear if any of these offend you and explain why;
Be strict in clamping down on any use of digital technologies for sexual harassment or other forms of abuse – these should be reported immediately through standard existing safeguarding procedures;
Find ways to mitigate the personal costs to team members of using digital technologies – remember that costs of internet connectivity, hardware and apps can be high for individuals, especially in economically poorer contexts;
Always ensure that team members are fully trained in how to use the digital technologies chosen by the team, and are fully aware of protocols concerning security, safety and privacy;
Ensure that all material and data relating to the team’s research activity is kept as digitally secure as possible, encrypted on trustworthy servers, and with strong password protection;
Always explain if you are recording a meeting, and do not do so if any team member objects – also, don’t be critical of those who object for whatever reason.
With research participants
Always explain to research participants how you will use and protect any digital data that you generate together;
If you need participants to use any digital technologies, ensure that they are fully trained in their use;
Never force participants to use a specific piece of digital technology (or app) – always try to use the technologies with which they are familiar;
Never use digital surveillance or tracking mechanisms without the explicit and fully informed permission of participants – and even then try to find an alternative (you never know who else might be accessing the information);
Do all you can to protect participants from harm or abuse from their use of digital technologies.
Think about the environmental impact of the digital technologies you are using, and mitigate their harms
Digital technologies are often seen as a good way to reduce environmental harm, but this is by no means always so, and many practices in the digital technology sector are anti-sustainable (see further here). Those who consider that environmental harm should be included within safeguarding should be aware of the following:
The ICT sector contributes more carbon to the atmosphere than does the airline industry (see here from 2017) – virtual conferences are not carbon-neutral;
Video uses much more bandwidth and electricity than does audio (see here) – encourage participants in online meetings to keep their video off when they are not speaking;
Never contribute to e-waste by purchasing new digital tech just for the sake of the research grant – do all you can to repair and reuse your digital tech, and only purchase new when you absolutely have to (see The Restart Project for examples and evidence);
Use digital tech (both hardware and software) that is as environmentally sustainable as possible;
Minimise the use of electricity (including in data servers, device production, and device usage), and where possible use renewable energy sources (such as solar-powered mobile devices);
Purchase and use digital tech (including apps) from companies that are committed to minimal environmental impact (not just satisfying carbon emissions criteria);
Always switch off digital tech when not in use, and don’t just put them on standby;
Consider conducting an environmental audit of all digital tech used in your research.
For too long, research on the inter-relationships between digital technologies and the physical environment has been partitioned into neat areas and specialisms that have prevented important things being said holistically about its impact. This has meant that the digital technology sector has invariably been able to make unfounded claims about its positive benefits for the environment and its contributions to the so-called Sustainable Development Goals. The UNESCO Chair in ICT4D is now bringing together researchers from across the world to build a coalition of excellence to unravel the complexities of these relationships and make rigorous policy recommendations to ensure that such technologies are used wisely in the interests of the human community and planet.
Professor Tim Unwin CMG, Chairholder, UNESCO Chair in ICT4D
This is an invitation to researchers and practitioners from all relevant disciplines and all personal backgrounds to work together in a coalition that will enhance our holistic understanding of the inter-relationships between digital technologies and the physical environment. All interested parties are invited to submit short expressions of interest through the UNESCO Chair in ICT4D’s contact page.
There has been extensive research on many aspects of the environmental impact of digital technologies, but much of this has been discipline specific, and as yet there remains no overarching holistic model or understanding of these impacts (see below for some examples of current initiatives). Worse still, studies and initiatives that have claimed to do so, through for example focusing on reducing CO2 emissions, have been misleading because they fail sufficiently to take into account the wider environmental impact of alternative provision of the essential energy to power digital tech. Some of these issues have been highlighted in a series of posts by Tim Unwin on Digital Technologies and Climate Change in January 2020 and include:
A failure to incorporate the environmental impact of satellites in space, which is often treated as a global commons that can be filled with waste, much as oceans once were;
Insufficient attention being paid to the anti-sustainable business models of many digital tech initiatives and corporate practices;
A failure sufficiently to account for the environmental consumption of many new digital initiatives;
An excessive focus on carbon footprint alone; and
Using inappropriate and outdated models of environmental impact assessment.
Doing things differently
This new initiative from the UNESCO Chair in ICT4D is bringing together individuals and organisations from many different backgrounds (see below) who are committed to doing things differently. We are adopting a neutral and open stance as far as we can, but believe in making as much high-quality information as possible freely available to all, emphasising both the positive and negative impacts of different digital technologies on the environment.
Phase 1 of the work in 2021-2022 is to:
Create a clear partnership framework for engagement;
Develop a resource base featuring as many existing relevant initiatives as possible together with details of their key publications that we will make available through the UNESCO Chair in ICT4D website;
Convene a series of workshops in different parts of the world and with relevant stakeholders to identify the parameters that should be included in a holistic model (for an example of work in convening such workshops on a different topic in 2020, see Education for the most marginalised post-COVID-19); and
Identify areas where novel and further research is needed to quantify the model parameters.
Phase 2 in 2022 will:
Facilitate working groups of researchers and practitioners to develop research proposals and funding applications to relevant bodies so as to undertake the research identified as being essential for quantifying the model’s parameters; and
Initiate a series of publications and policy reports.
Phase 3 from 2023-25 will:
Co-ordinate and support research activities initiated during Phase 2; and
Culminate in a major report in 2025 providing full details of the model together with policy recommendations based upon it.
We need you to be involved
This initiative is fundamentally multi-disciplinary, cross-sectoral, international and policy oriented. It must engage academics, companies, civil society organisations, international organisations and governments if it is to have the scope to be able to address the big issues necessary for crafting an appropriate holistic system model. Amongst academics it wishes to engage those committed to working across boundaries from within many different disciplines, including biologists, chemists, climatologists, computer scientists, economists, engineers, geographers, lawyers, physicists and many more. It requires the engagement of companies of all sizes, and in all business sectors, especially those with a track record of being concerned about environmental agendas – and there are many that are. Among civil society organisations, it welcomes all who have experience in environmental agendas, particularly those working on digital tech, and also those with experience of shaping effective policy campaigns at the interface between technology and the environment. It also needs to engage with international organisations and governments committed to ensuring that digital technologies really are used appropriately in the long-term interests of their citizens and planet earth.
If you are at all interested in being involved in this coalition – or even if you think there is no need for such a new initiative – we would love to hear from you. Please get in touch through our contact page (or directly by e-mail). We very much look forward to hearing from you.
Examples of existing initiatives
There are indeed many initiatives that have already sought to tackle some of these issues, but as noted above and for whatever reasons, we have not been able to identify any that are as comprehensive and holistic as we have in mind. Examples of work upon which this initiative will draw include (but are by no means restricted to):
These represent just a tiny fraction of the existing and ongoing work in the field (and apologies to all those who also feel they should be listed here – please get in touch, and we will add you!). Now is the time to bring this body of work together, to find out the true environmental impact and implications of the use of digital technologies, and what we need to do to mitigate this impact.
[This post was first published on the UNESCO Chair in ICT4D site on 21st January 2021, and is reposted here for wider interest]
The UK has the ninth worst death rate (per head of population) from COVID-19 in the world at 120 per 100,000, and this is the third worst of the 20 most affected countries (Johns Hopkins, 9 January 2021; just behind Italy and Czechia); the total number of deaths (within 28 days of a positive test) is now more than 80,000 (BBC, 9 January 2021). More worryingly, the number of new cases remains around 60,000 despite the recent partial lockdown, and deaths per day are currently over 1,000 (UK Government, 9 January 2021). Furthermore, the number of deaths is likely to rise rapidly, perhaps to around 2,000 a day in a fortnight, as the effects of the recent surge in infections work their way through over-stretched hospitals.
Much of this could have been avoided if more people had responded to the crisis responsibly and wisely, caring for others as much as they did for themselves, and not trying to push the boundaries of what limited restrictions the government had put in place.
What little we know, but what we should have acted on
It is remarkable how much we still don’t know about COVID-19, despite all of the valuable research that has been done such as the creation of new vaccines and the discovery of treatments that can reduce death rates of the most seriously ill. However, we do clearly know enough for the UK government to have acted very differently over the last year. Among the most important things we do know are that:
Countries that rapidly put in place comprehensive lockdown measures and keep them in place until the number of remaining cases is very low, have not only had lower overall mortality rates, but their economies are also recovering more quickly. The UK government has consistently gone into lockdown (or restrictions) too late, eased lockdown too early, and has never therefore got on top of the coronavirus. Particularly stupidly, the lockdown in November-December 2020 was nowhere near strict enough, and was foolishly eased in the anticipation that people could see their families over Christmas.
Many countries with a history of using masks (such as China, including Hong Kong and Macau) or that have made them mandatory (such as Malaysia and Vietnam, but also many African countries) have been able effectively to limit or reduce infection rates. Much of the debate around mask use has arisen from unwarranted confusion about whether masks reduce the chance of the wearer catching COVID-19, or whether they actually protect others (see my post in March on Face Masks and COVID-19). Selfish, individualist societies, where people care much more about themselves than about others and therefore don’t wear masks, have generally suffered badly from COVID-19.
The fetishisation of the R-number has caused unfortunate misunderstandings and led to many more deaths than would have otherwise been the case. The UK government has seemed to place inordinate emphasis on the reproduction number (R = the average number of secondary infections produced by a single infected person), rather than on the actual numbers of people dying. R is obviously important, but there is a huge difference in impact between a higher R-number when total infections are low, and a lower R-number when infections are high. Many more people in the short term are going to catch the infection (and die) when thousands are already infected, even with an R-number well below 1, than will catch it if only a few people are infected and the R-number is 2 or 3. This is crucial, because the government should have done much more to reduce new infections in the summer to virtually zero, and should have acted much more quickly in October when numbers started to rise again (lessons should have been learnt from the experiences of Australia and New Zealand).
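The prevalence-versus-R point can be sketched with some simple arithmetic. The following is a purely illustrative calculation with hypothetical figures (50,000 versus 100 current infections, and the number of generations, are chosen for illustration, not taken from any dataset), and is in no way an epidemiological model:

```python
# Illustrative sketch: short-term new infections under two scenarios --
# many current infections with R below 1, versus very few current
# infections with R well above 1. All figures are hypothetical.

def new_infections(initial_cases, r_number, generations):
    """Total new infections produced over a number of infection
    generations, assuming each case infects r_number others on average."""
    total = 0.0
    cases = float(initial_cases)
    for _ in range(generations):
        cases = cases * r_number   # the next generation of infections
        total += cases
    return total

# Scenario A: 50,000 people currently infected, R = 0.9 (below 1)
print(new_infections(50_000, 0.9, 5))   # ~184,000 new infections

# Scenario B: 100 people currently infected, R = 2.5 (well above 1)
print(new_infections(100, 2.5, 5))      # ~16,000 new infections
```

Even though R is below 1 in the first scenario, the short-term toll is an order of magnitude higher, which is the argument made above for driving prevalence down to near zero before relaxing restrictions.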
Too much reliance was placed on digital technologies. It is remarkable how the much-lauded NHS app (in its various incarnations) is now never mentioned by the government. Moreover, it was very expensive: in September 2020, it was estimated to have cost more than £35 million. The entire UK test and trace service has been a catalogue of disasters, but the expenditure on an app that was meant to be a silver bullet was truly misplaced, and the only people to have benefitted were the companies involved in developing it! As many people warned, digital technologies are invariably a solution in search of a problem, and the failure of previous digital initiatives should have been a clear warning to the government.
Islands have a clear potential advantage in protecting their inhabitants from COVID-19. The UK has very clear borders that are relatively easy to “protect”, unlike so many other countries in Europe, and yet it has been very tardy in introducing restrictions for those crossing its borders (either way). Island states with wise governments, especially New Zealand (only 25 deaths) and Iceland (only 29 deaths), have been able to ensure that infections and deaths have been kept to a minimum by imposing very strict controls. Thus New Zealand specifies unequivocally that “All people entering New Zealand must go immediately into managed isolation or quarantine facilities. They will remain there for at least 14 days and must test negative for COVID-19 before they can go into the community”.
People respond to clear and simple messages, when they are delivered by trusted leaders. Unfortunately, the UK’s blustering leadership has prevaricated and vastly over-complicated the messages to those living in the UK during the pandemic. Things were made far worse, and trust evaporated, when Dominic Cummings did not resign following his breach of COVID restrictions in May 2020, which made many people in the country think that there was one rule for those in power, and another for everyone else. With confused (and weak) messages, alongside a growing belief that it was alright to tweak the rules a bit, it was scarcely surprising that so many people failed to act responsibly in the latter part of 2020 when COVID-19 ran out of control.
It is not the new variants that have caused the recent dramatic rise in infections; it is people’s behaviour. Put simply, if everyone focused on protecting others from catching COVID-19, then regardless of the variant the number of infections would be minimised. Yet the government and news media persist in “blaming” the new variant for the recent dramatic increase in infections, which gives completely the wrong message to people. It is high time that we were open and honest about the fact that these recent very high infection rates have been caused primarily by people’s behaviour in December; if people were not giving the infection to others, then there would be no way that these others would catch COVID-19 – regardless of how infectious the variant is. We need to realise that perhaps one-third of infections are asymptomatic, and therefore that many people who feel perfectly well are probably giving COVID-19 to others.
What we should have done; but it’s never too late to take action
Based on the above, it seems fairly clear what the government should have done, but didn’t. This is not that dissimilar to what wise voices were saying back at the start of the pandemic (see my list in April of questions the government still needs to answer over its failures). Neil Ferguson and his team’s modelling back in March, although decried by some not only at the time but also subsequently, does indeed seem to have been quite an accurate prediction of what was going to happen, particularly as far as a second wave was concerned, and especially given the lack of knowledge at the time about the precise dynamics of COVID-19. Anyone who read that March paper should have been left in no doubt that we were going to see at least 80,000 deaths from COVID-19. Those who argued vociferously and publicly otherwise should acknowledge their mistake and share some of the responsibility for the subsequent national vacillation about the direction in which the pandemic was heading. We are already past this level, and many, many more are sadly going to die. Each one is a tragedy for their families and those close to them. There are absolutely no excuses for anyone saying that they were not aware of how serious the scale of the pandemic was going to be in November 2020-March 2021.
The creation of vaccines to counter the effects of COVID-19, as well as better treatment protocols identified over the past year, provide some hope for the future. However, drawing on the above evidence, the government still needs to take further steps immediately if the UK population and economy are going to be able to reduce the scale of suffering and damage that it has already caused. The following would seem to be wise actions (in approximate order of priority):
Lead rather than react; be ahead of the pandemic. The Government must take control of the situation, and show real and decisive leadership in tackling it. All too often the Prime Minister and his cabinet have dithered, and as a result failed to protect the British people. If tighter restrictions had been in place in December, there would have been many fewer than the 417,570 people who tested positive in the last seven days. They should have known and planned for the scale of what has happened. They are culpable for their failure.
Much tighter restrictions should be placed on personal mobility immediately, and they should be kept in place until the number of new infections is in the hundreds rather than the tens of thousands. This is likely to take a minimum of six weeks and possibly much longer, regardless of the hopefully positive effects of the vaccinations. The long term economic impact of COVID-19 would be far less severe with a shorter, sharper lockdown than it will be if the government continues to pursue its on-off policy while maintaining relatively high levels of infection.
Face masks should be made compulsory for all people both outdoors and indoors at all times (other than in a person’s own home). This should apply to those jogging, running or cycling, as well as to those just walking. Sanitation points should be made freely available in all workplaces, shops, bars/restaurants and entertainment areas.
All people arriving in the UK should be required to show evidence of an appropriate negative COVID-19 test within 72 hours of arrival. As an island, the UK has the advantage of being able to manage its borders, and it needs to do so effectively so that additional infections are not brought into the country, especially of the inevitable new variants of COVID-19 that will emerge. It would also be a great gesture of our national care for others if we insisted on everyone leaving the UK also being tested.
The vaccination programme must be delivered effectively and efficiently. In general, the priority system seems broadly appropriate, but insufficient priority has been given to those aged over 90, staff working for companies that provide care at home for the elderly, as well as GPs and other medical staff (all of these should be in the highest priority category), and indeed teachers. With 46,000 healthcare staff off work, an already over-stretched NHS has become even less able to manage the impending crisis. This is unacceptable carelessness on the part of the government. Moreover, the vaccination policy and practice needs to be very much more transparent than it currently is.
A really efficient and effective test, trace and control system must be put in place once the number of new infections has fallen below 1,000 a day. It is impossible for testing and tracing to work effectively with the level of infections that we now have. However, for longer term viability and success, once numbers have reduced to a manageable level (as they were for much of the summer of 2020) it is critically important that we have in place an appropriate and high quality epidemic monitoring system that can prevent COVID-19 and its successor pandemics from catching hold.
We should put in place now mechanisms to ensure that effective control against COVID-19 is in place for the latter part of 2021. This must ensure that sufficient vaccines are available (preferably of the Oxford-AstraZeneca vaccine) for GP surgeries to deliver them effectively over the next year – and indeed in future years – as they have done for many years with the annual influenza vaccine.
Each of these seven action points could easily have been put in place by the government during the summer and early autumn of 2020. It failed to do so and is therefore culpable for the excessive numbers of deaths that we are now seeing. Johnson, his advisers and senior ministers all seem to have prioritised getting an easy deal done over post-Brexit trade and relations with countries in the European Union, and therefore took their collective eye off the COVID-19 ball.
It is, though, not just the government’s fault. Everyone who has given COVID-19 to someone else is also partly responsible. We should not have needed the government to tell us what to do. Surely, knowing what we do about COVID-19, we should all have acted responsibly and wisely by limiting our personal contacts as much as possible. It is self-evident that we have failed to do this. We can, though, all make a difference now. Wherever we can over the next two months, as many as possible of us should choose to stay at home. It only needs one contact to start a new chain of infection.

Sadly, trying to circumvent the regulations that have been put in place seems to have become a national pastime; perhaps this is Dominic Cummings’ lasting legacy. Any excuse for not adhering to them seems to be acceptable to the person making it. In part this is again the government’s fault. Why on earth, for example, was “local area” not defined when the government permitted outdoor exercise within it? For one person it is somewhere within a 30 minute drive; for another it might just be within walking distance of home.

However painful it is, we all need to act even more responsibly than we did in March-April. I hope Chris Whitty (the UK’s Chief Medical Officer) is right when he said on BBC Radio 4 this morning that we are at the peak of the outbreak, but I fear he is not. Given the very large number of new infections that we are still having, death rates are bound to increase further for at least two more weeks. At least Matt Hancock said yesterday that “every time you try to flex the rules that could be fatal”; such a shame that this message has not been clearer from the government before. We, the people, need to act where the government has failed. We can make a difference, but we need to care for each other more than we do for ourselves – as the brilliant staff in our NHS strive to do every moment of every day.
Several friends in recent weeks have contacted me about whether or not they should consider doing a PhD – and the first question I always ask is “why?”. How they answer that has a huge impact on how I answer their own question. However, it has made me realise that although I have written many bits and pieces about the changing character of a PhD, I have never pulled them all together into a single place. This reflection is therefore in part a summary of how I see PhDs as having changed since I completed all 642 pages of my own thesis in 1979 (having started in 1976). I hope that the insights I have gained in the 41 years since then may be of value not only to those considering doing a PhD, but also more widely to others engaged in the supervision and management of doctoral research in universities.
In summary, whilst there continue to be some brilliant students who complete outstanding theses within three years, the sad truth is that over the last 25 years the PhD has become significantly devalued and corrupted. It is time for fundamental change in PhD “production”.
I say this with enormous regret, since I see the PhD process as being of huge value and importance. It is, though, the only conclusion I can reach after having supervised 28 MPhil and PhD students since the mid-1980s (across different disciplines, and most as the only or first PhD supervisor), having examined PhDs in some 25 universities in 11 countries, having served for a decade on the Commonwealth Scholarship Commission (2004-14), and having also held various other roles relating to postgraduate research and training.
The following inter-related issues seem to be of most importance:
Not all PhDs are equal
There are huge differences in the requirements for and the quality of PhDs, not only between countries but also within them, and even between departments in the same university. This is despite the use of external examiners who are meant to be arbiters of equivalence, and despite the observation that most universities have fairly similar broad criteria for a PhD that focus on the advancement of knowledge through theoretical and empirical work. Imagine, for example, my shock when I was asked to agree to a PhD being awarded, thinking as I do that some six months of empirical field research is usually required for a good PhD in my field(s), only to be told that two weeks in the field was deemed sufficient by the university in question. The expected levels of intellectual curiosity, analytical acuity, conceptual ability, quantity of work, linguistic capability, and many other factors all vary hugely. The best PhDs remain outstanding pieces of research, but that cannot be said of all. Sadly, almost anyone with some ability can now be awarded a PhD at some university, even without resorting to some of the corrupt practices outlined further below.
Money talks and grade inflation
Grade inflation is well known at the undergraduate level (see for example Richmond, 2018; Lambert, 2019), but it has also happened at the Master’s level and even with PhDs. Unfortunately many (although again I stress not all) Master’s courses are poorly taught, and often seem to be mainly a means for universities to make as much money as possible from students willing to pay to differentiate themselves from their peers with an additional Master’s qualification. This is a global phenomenon, but it happens even in some UK universities with good reputations, which enable them to attract numerous higher fee-paying students from overseas. As undergraduate degrees become of lower value, it makes increasing sense for those students who can afford it to get a step ahead by doing a Master’s degree – regardless of its quality. I have heard far too many stories of students paying to do a Master’s degree at a prestigious university, fully aware of the poor reputation of the teaching on the course, but still choosing to do so because of its perceived future benefit for their careers. Sadly there is a conspiracy of silence over this: few students are willing to say publicly how poor the courses are, because that would immediately devalue them and thus their own status. Likewise, no academic is likely to say that they teach a poor course, even if they rarely teach much of it themselves because they are too busy doing research and instead leave most of the teaching load to teaching assistants.

The same is increasingly happening at the doctoral level. Universities are desperate for the much larger funds that PhD students bring – especially from overseas – and having accepted students they will do almost anything to ensure that they pass in one way or another. This can only lead to a lowering of quality.
The duration of a PhD
In the distant past, PhDs could unfortunately sometimes become a lifetime’s work, although they were never really intended to be this, and it has always been possible to complete an excellent PhD within three years. The expected duration of a PhD also varies somewhat between countries with different academic traditions. Nevertheless, from the 1980s onwards in the UK, Research Councils with their concerns to show value-for-money put increasing pressure on universities to limit the term of a PhD to a maximum of 4 years. Today, many universities insist that students must submit within four years, and failure to do so means that a degree is not awarded. In part this is driven by competition in league tables that include completion rates in their calculations, but it has also unfortunately often had the effect of reducing the quality of work submitted. In my experience, students who come from different academic traditions and more disadvantaged backgrounds often find it very difficult to adjust to starting a PhD in the UK, and I know that several of my own students in the past who completed very good PhDs would simply not have been able to do this within the 4-year limit now imposed. That would have been a shame, because they produced excellent PhDs and have gone on to do great things.
The pre-requisites for doing a PhD
It may seem strange for some to think that in the 1970s I went straight from doing an undergraduate degree to completing a PhD successfully. Now in the UK, most students must have at least one Master’s degree before starting, and even then they still have to do large amounts of postgraduate training especially in their first years of a PhD. In part this reflects the grade inflation that has so beset the sector over the last quarter of a century, with many people saying that Master’s degrees now are about the same standard that undergraduate degrees were from the “best” universities only a few years ago. However, it also reflects the increasing complexity of PhDs, and the requirement for postgraduates who wish to teach to gain relevant skills and training for their future academic career whilst doing their PhDs. Nevertheless, I still believe that a well-supervised, well-educated, outstanding undergraduate should be able to embark on a PhD without the necessity of spending time completing a Master’s qualification just for the sake of the certificate, especially when it is poorly taught and not necessarily of direct relevance to the topic of their proposed PhD.
Many other prospective students also seem to think that just because they have gained a Master’s degree somewhere (indeed anywhere in the world), they are undoubtedly capable of getting a PhD. This is very far from the truth. Only a few Master’s students in my experience have the intellectual curiosity and acuity successfully to complete a high-quality PhD.
The challenges of part-time PhDs
I was recently asked if I thought that someone could successfully complete a PhD whilst also holding down a full-time job elsewhere. I responded quite simply “no”! It is extremely difficult, if not impossible, to do this and to submit a good thesis within a reasonable time period. Part-time degrees are meant to imply just that: the student is also doing part-time (not full-time) paid work. If a full-time PhD is meant to take 3-4 years, then a part-time one, working more than 20 hours a week on it, would require dedicated commitment for some seven years – a very tough order. I stand by this statement, and find it almost unbelievable that some people think they can work 40-50 hours a week in paid employment and also do a PhD – especially when I feel that good PhD students should be committed to working at least 50 hours a week on their research for three entire years (with a few short holiday breaks). Yet many people still seem to think that they can complete a PhD with only a minimal amount of effort. This sadly just goes to show how the status of a PhD has fallen over the last half-century!
There was definitely a time, though, in the mid-2000s when I very much championed the cause of part-time distance-based PhDs, and encouraged several people living in various parts of the world to join our ICT4D (Information and Communication Technology for Development) research community whilst working part-time in paid employment. This placed heavy burdens on them, and also on me as a supervisor, but it taught me a huge amount. None of them found it at all easy – and some found it very, very tough. However, they succeeded. Back in 2007 I therefore drafted a paper based on these experiences, although somehow never bothered to make the small number of revisions requested by a journal editor for it to be published. Having re-read it recently, I still think it has something of interest to say to those who are thinking of embarking on such a mode of PhD research and am now making it available here for anyone who might be interested – although it is undoubtedly somewhat dated.
Whose PhD actually is it?
I, perhaps too simplistically, still believe that in most cases a PhD should be the work of a single person, who actually does all, or certainly the vast majority, of it, from the research, fieldwork and analysis to the writing up and presentation. To be sure, things are sometimes more complex in laboratory sciences, or on expeditions where team work is essential, but even then the actual PhD should remain largely the work of one person – supported and guided by a supervisor (or a supervisory team) – with the precise amount contributed by others clearly stated.

Not so long ago, supervisors worked carefully with their students, regularly going through manuscripts and helping them improve the quality of their academic writing. This is especially important when working with students from different cultures and academic traditions, whose first language may not be the language in which the PhD has to be written. In the past, I often found myself spending a whole day going through a 10,000-word chapter for a student, suggesting revisions to the text that could improve it. Increasingly, though, academics are discouraged from helping students develop these academic linguistic skills: because they don’t have the time to do it, because they are told that this is specialist work for support services, or because students who are accepted to do a PhD are assumed already to have these skills. Sometimes students even object to supervisors commenting in detail on such things as sentence structure and written style, even though such comments are designed to help them develop precisely these skills!
A very specific, but increasingly common, issue arises when students send their draft work to an external “proof reader” before submitting it (there are many companies offering this service, such as Scribendi, ProofReading, or Oxbridge Proofreading). It is relatively easy for a supervisor to see when this happens, because there appears to be a dramatic, overnight improvement in the quality of a student’s written work. It is, though, exceedingly difficult to know how much of a manuscript is actually written by the student, and how much by the “proof reader”. Given that having a PhD in a given language is meant to be indicative of the academic abilities of a person in that language, it seems to me that any substantial revision by someone other than a supervisor suggesting changes to a draft is unacceptable.
At a further extreme, there are very clear examples of students getting a “friend” to do some of the work for them, such as doing the statistical calculations, drafting figures, preparing the templates, or even rewriting parts of it. If a thesis is meant to be a student’s own work, then these practices are likewise not acceptable. I remember drawing more than 50 figures with stencils and a Rotring pen for my own thesis, each of which took at least a day to complete – and that was without all of the computer generated graphs as well (which took some time to do back in the 1970s)!
Corruption within the system
There are indeed many good supervisors, PhD students and management systems to support them across the world, but it also needs to be recognised that there are many poor systems and instances of outright corruption that must be rooted out, not least in my own country, the UK. Some dubious practices have already been suggested above, but these pale into insignificance when compared with the following examples.
Poor supervision and problematic examination boards
Sadly, there remain too many examples of poor doctoral supervision, although in my experience almost every academic I know well is hugely committed to this role, and sees it as a central and enjoyable part of their work. It is after all the main means through which new blood is brought into the system! Nevertheless, I am personally aware that the following practices still occur, and I am sure there are many others as well:
One of the main complaints is that some supervisors only rarely see their students. This has always been the case, but I know of instances where students have had to complete their theses with only a handful of supervisory meetings over three years, and have been discouraged from making formal complaints because their supervisor is a “good academic researcher” and colleague in a department. Most students in such situations are also under severe pressure, not least because supervisors are often required as referees in their subsequent job applications; and in disciplines where supervisors are expected to be named authors on papers, making a complaint would severely handicap the submission of future publications from their theses.
Other supervisors have been known to use their students’ work primarily to build their own career and without giving them the credit for their original research [Partly for this reason, I have never asked to be an author on my students’ papers, and only ever write joint papers with them when I do a substantial amount of the actual research].
Some supervisors have tried to prevent their students from submitting their theses – occasionally right at the last minute – even when they themselves haven’t made the time to read and comment on final drafts. [It should always be up to the student to decide when a thesis is submitted].
Others are willing to take on large numbers of doctoral students for the prestige and income they generate, but know they don’t have time to supervise them all properly; the weakest often sink and eventually drop out.
When it comes to the examination, it is sadly often the case that supervisors tend to try to find “softer” examiners for “weaker” candidates.
As an external examiner, I have also encountered very strong (and indeed quite upsetting) pressure from internal committees to change my mind; at least I won’t be asked to be an external again for such universities! [Increasingly, I have found myself warning universities that I will make judgements according to the standards that I consider appropriate, and when I suspect that a candidate may be weak I do not accept the invitation to be an external examiner. I have also been known to give my honest opinion of a piece of work, whilst adding the caveat that I don’t know the normal standard acceptable in an institution/country, and I would of course be willing to discuss the matter further].
I have recently been made aware of the term “Sexually Transmitted Degrees”, which is apparently quite common in certain parts of the world, particularly for undergraduates, but also occasionally for postgraduate degrees as well. I have to admit to being shocked that I hadn’t known of this term until the last few years – perhaps this shows just how naïve I am! It is, though, an issue that must be addressed – and the complexities involved mean that this is not necessarily always as easy as might at first sight be thought.
Fortunately, systems are being put in place by many universities to reduce such practices, but they do still exist, and tighter mechanisms need to be implemented to reduce poor supervisory and examination practices.
Much has been said and written before about problems with the supervisory process, but a few doctoral students are also themselves engaged in clearly corrupt practices. The extent of such corruption globally is unknown (although see Osipian, 2012; Denisova-Schmidt, 2018), but some inappropriate practices with which I am familiar include:
Paying someone to write part or all of a thesis. There is a fine line between this and the increasingly common use of “copy editors” noted above, but the widespread and sophisticated use by universities of plagiarism detecting software (such as Turnitin) has meant that those students who don’t have the time (or ability) to write their theses are now turning to professional dissertation and thesis writing services (see for example, Study Aid Essays, British Hub, UK Top Consultant, WritePaperForMe). One of these brazenly advertises its services as follows:
For 9 Years … has supported over 3,000 undergraduate, postgraduate & doctorate students with original custom essays, proposals, reports, literature reviews, full dissertations and statistical analysis in a wide range of subject areas
Arranging for a friend who will be supportive to serve as the external examiner. This should be precluded by the systems a university has in place for the appointment of examiners, but I even know of a case where it appears there was collusion between the student and the supervisor to ensure that a favourable friendly examiner was appointed.
Unfounded malicious accusations by students against their supervisors with the intent of ensuring that they are awarded their doctorates. Although these cases are rare, it is easy for a student to blame a supervisor for their own failings. Despite the apparent power relationships in favour of supervisors, some universities are so concerned about the “bad press” that can follow in such circumstances that they tend to find ways through which the student can succeed, even when the consequent standard is low.
The giving of lavish gifts by a student to their supervisor. This can be hugely complex, especially because gift giving has varying meanings and significance in different cultures. Nevertheless, it can be very problematic for a supervisor to accept expensive gifts from a postgraduate student before the award of the degree, even when there is no devious intent behind it [Gifts of appreciation after the award of a degree do, though, still seem appropriate should a student wish to give them].
I know of several examples where doctoral students have not done the empirical field research themselves, but have instead paid assistants to do it on their behalf, without acknowledging or admitting such “help” in the text of the thesis. Given that I expect a thesis to be “all the student’s work” (see above), I cannot condone this practice, although I am aware that it seems to be accepted by some universities in certain circumstances. Translation also represents a challenge, and I confess that in the past I have usually insisted that students learn a language of the country in which they were doing their research.
I have not myself encountered cases where thesis data have been fraudulently “created”, but notorious examples exist, and the scale of this problem is undoubtedly greater than many people care to admit, not only in postgraduate research but also more widely in academia (see Hopf, Mehta and Matlin, 2019).
Many of these dimensions of corruption are extremely difficult to prove, but universities should recognise that they exist, and should do more to prevent them. In a nutshell, as less-able people seek to gain doctorates, the likelihood of fraud and corruption undoubtedly increases.
This is not only morally wrong, but it is also unfair on those many students who work extremely hard to achieve a PhD, it devalues the worth of PhDs in general, and it contributes yet further to a lowering of the overall quality of academic research.
A positive conclusion
Despite the above comments, I like to believe that most supervisors and doctoral students work collaboratively and well together, and that many truly original and excellent theses continue to be crafted across the world. Working with able postgraduates has certainly been one of the real joys of my academic career although there is no doubt that supervisory relationships are among the most fraught and challenging of any in academia. It is truly a blessing to see how the careers of most of those students who I have had the privilege to supervise have flourished and blossomed, and it is a joy to keep in touch with so many of them.
In the 2000s, recognising the need for me to give greater clarity about what was involved in doing a PhD in our ICT4D Collective, and to help students understand my own expectations of the supervisory process, I produced two documents. Having looked at them again, it is evident that they need some updating (they were last updated in 2007 and 2008), but I still stand by almost everything contained within them, and so am posting them here as a guide for potential students (and interested others) to what I try to practise as far as supervision is concerned:
The first of these emphasises that a good PhD should not be a life-work – that will come later! Instead, I have found that it is often easier to see a PhD primarily as something that provides evidence of the achievement of a key set of seven academic skills (slightly adapted below):
Being thoroughly conversant with the key intellectual debates in a particular subject area, and using this to provide a conceptual framework for the thesis
Being able to identify important novel issues from these that will form the focus of their research, and developing these into a clear aim
Being able to design a relevant methodology to undertake rigorous empirical work that will add to our collective knowledge in that research field
Then using this to undertake research and gain empirical evidence in a particular place or places
Analysing the results of that field research in the context of the theoretical or conceptual framework
Writing this up clearly and effectively in an interesting way
Drawing relevant conclusions that move knowledge forward, and (for the field of ICT4D) make new practical recommendations in the interests of the poorest and most marginalised.
Over the years, I have come to realise that students have varying strengths and weaknesses in achieving each of these. Many have difficulties in engaging theoretically and developing an appropriate conceptual framework, whilst the majority find the empirical field research most enjoyable. Nevertheless, a good prior degree should enable the first four of these elements to be done relatively easily. Unfortunately, some students can only get this far, and find it impossible satisfactorily to analyse the data, which results in an overly descriptive and thus problematic thesis.
I do hope that these reflections may be of help and interest to those embarking on a research degree – although I have very deliberately not answered the question that I posed at the beginning! That’s up to you, but I hope that what I have written will help you answer it!
Having recently written a post reflecting on aspects of power relationships and control through digital systems, I just thought that it might be helpful to share the list below of those digital systems and social media that I generally use so that others can know how to interact with me should they wish:
Digital environments that I use for public interaction
e-mails – multiple accounts, some integrated through Outlook/Office 365 (mainly formal work-related – contact via Royal Holloway, University of London) and others (mainly private) through Apple Mail. This remains my main mode of digital communication, largely because my e-mail account is one central place where people can send me material they want me to read!
WordPress (since 2008, although I began blogging earlier in 2007) – mainly for work-related issues, but also for personal views and opinions – for links to my various blogs see http://unwins.info. Interestingly, I am now blogging much less than I used to, with views in recent years being about half of the 30,349 I had in 2011.
Zoom – as have so many people during COVID-19, I have come to use Zoom quite extensively for large group meetings as well as personal ones, and it has largely replaced my former use of Skype. I particularly like its ease of use, and the way that it enables me to deliver live presentations through using its background feature.
Elgg (open source social networking software) – but sadly this is little used by others, and so I only tend to use it for small work-related groups (and in part as a replacement for Moodle).
Slack (sadly bought by Salesforce in December 2020) as a group communication platform – no longer sure how long I will continue to use it.
Moodle – the environment we’ve used for various courses at Royal Holloway, University of London and beyond; also where we used to make our ICT4D course materials freely available.
Twitter – joined in January 2009 – mainly for sharing information about our research and practice (several different accounts, including @unescoict4d and @TEQtogether for research, and @timunwin for personal use) – I am using it much less than I used to, because the character limit tends to lead to short soundbites that oversimplify issues, and it has become a place dominated by “politically-correct” resonating rhetoric.
Facebook – joined in November 2006 – mainly used to share things that interest me, but also use for work-related groups (such as the ICT4D Group created in April 2007 and now with around 5.5K members, and the TEQtogether Group) – I’m again using it much less than I used to, especially since the dreadful new desktop/laptop version was introduced.
Microsoft Teams and assorted other Microsoft apps such as Skype for Business and Yammer (social networking; bought by Microsoft in 2012) – the basic environment adopted by Royal Holloway, University of London some time ago (and so I really have to use it to communicate with colleagues there), but also widely used in other enterprises. I find it rather clunky, in much the same way that I also never liked Microsoft’s Sharepoint (launched back in 2001); I still try to use these as little as possible.
I’m beginning to use Miro (an online whiteboard) for team work and collaboration.
Environments that I will not use
So, definitely don’t ask me to collaborate with you using the following:
Any Google environment (including Cloud, Drive, Chrome) – and especially if I have to sign in with a Google account (I refuse to have one). Exceptionally, I will occasionally use a Google environment providing I don’t have to sign in – but I would prefer to be sent documents in a way that is easier for me, such as by e-mail.
WeChat – I may well have to use this one day to be able to communicate in China and with Chinese friends since it is becoming so ubiquitous and essential there, but for the moment I don’t want too many governments having easy access to too much information about me.
Sina Weibo – similar reservations to the above, but I am also getting too old to learn to microblog in Chinese.
Environments that I use for private communication
The following are some of the environments that I use for private communication, and will not use for professional purposes (so please don’t ask me, for example, to join a WhatsApp group):
WhatsApp – although I almost stopped using it when it was purchased by Facebook. I only use it with family and friends, and most definitely will not use it for public or professional groups (so there is no point in asking me to join a group).
Signal – I use this privately because of its security, usability, and open source build – and it’s also free.
Instagram – for posting images of things that interest me (I may well soon start to use it to share information about our research and practice, since Instagram is increasingly being used effectively by businesses for this).
I have written many times before about the changing balances of power enforced by most digital technologies, but three recent incidents have focused my mind yet again on the shifting relationships of control brought about by the use of such technologies.
Tales from a worker…
I was invited to be a speaker at an online event using a particular technology with which I was not very familiar (Streamyard). I tried both of the browsers that I usually use (Firefox and Safari), and although the former enabled me to use some of Streamyard’s functionality, I could not do everything that I had wanted (and usually do) when giving an online presentation. Streamyard recommends Chrome, but I limit my use of Google products as much as possible, and refused to download it just so I could give one short presentation. I fear that the organisers did not appreciate my obduracy, and were surprised that I kept receiving error messages when trying to use some of Streamyard’s functionality.
I also belong to a civil society organisation that has recently gone over to using a particular app for managing the activities of volunteers. Previously, the administrator used to circulate details of rotas directly to the e-mail boxes of volunteers, letting us know when we were required and also providing reminders nearer the time. We have just received a message saying that the new automated system has been set up, and I have to check “my rotas” periodically to see what I am scheduled to do, and if necessary arrange swaps with others. Now, that obviously makes life easier for the administrator, but adds greatly to my time load because I have to log on to the system, negotiate its far from perfect functionality, see what I am down to do, and then note this in my diary. This is many more clicks than just opening an e-mail sent to me! The centre benefits; the volunteers have more work to do!
I was likewise doing some work for an organisation that uses Microsoft Teams, and when I requested a document, rather than it being sent to me I had to go into Teams, find where it was located (often in a crazily obscure sub-folder), download it onto my device (which often took some considerable time), and only then was I able to open the file and read it. If only someone could simply have sent it to me, or even just sent me an accurate link so I could open it online.
All of these examples illustrate ways in which digital technologies are being used to shift the balance of work away from administrators/managers at the “centre” and towards the employees/volunteers at the periphery, whilst concentrating actual power ever more at the centre. My hunch is that the net wastage of time within such systems has gone up, that inefficiency has increased, and that the extraction of labour power from human employees has likewise increased. Digital technologies, rather than improving the efficiency of systems, have become a means through which work/labour has not only increased but has also become very much more dehumanised and exploited by those at the “centre”.
Changing the balance of power
There are many ways through which such dehumanisation and exploitation take place, but the following are some of the most prevalent:
“Papers” for meetings: a historical legacy
I am old enough to remember the days when staff were sent papers (even in manila envelopes) sufficiently far in advance of a meeting to be able to read and annotate them by hand. As an employee I received them, but it was the management/administration team who actually printed and distributed them. From the early to mid-1990s, with the introduction of MIME, attachments became possible, and very swiftly papers for meetings (and everything else as well) started to be sent by e-mail. In the early days, employees were often even required to print them off themselves and bring them to the physical meeting (a ridiculous multiplication of effort and expense). The balance of effort had shifted. No longer could the employee just open the package; now they had to save, open and print the files themselves – and that was in the days before you could bring your laptop to a meeting. Today, as digital systems have become ever more complicated and sophisticated, all the administrators have to do is upload documents once onto a centralised digital administration or management system, and then all relevant employees or users each have to log on, find the file, download it (be it on Basecamp, Trello, Asana, Teams, Slack, SAP, Google Drive, DropBox or wherever), and then read it. All of these stages take additional time for employees, and many of the systems are problematic and frustrating to use. While such systems clearly benefit the central generators of content, the total amount of time spent by all of the users who need to access it has increased.
Multiple overlapping systems: who decides which system to use?
For people only working in a single organisation and trained to use a single main digital system or environment, the time wasted in accessing digital content is bad enough. For those working across organisations, each with different systems, it becomes a whole lot worse. Not only are users encouraged to leave all of their systems on all the time so that they know what is happening or required immediately, but they are frequently also expected to reply instantaneously. This is neither possible nor sensible. Moreover, leaving your systems on means that others can see if you are there and contactable, which is not always helpful!
Extending the working day
This is perhaps the most obvious and yet insidious “benefit” of digital technologies. I’m old enough to remember the notion of a working day being “9 to 5” – although I confess that I have always tended to spend longer “in the office” than that! However, even before COVID-19 helped to create a 24 hour working day, digital technologies have been used by employers dramatically to extend the working day, whilst at the same time claiming it is in the employees’ interests. This is particularly seen, for example, in the expectation by many managers that employees are contactable all hours of the day and night by e-mail, or even worse now through invasive social media messages. Long gone are the days when London commuters locked their safes, finished the day at 5 pm and got on over-crowded smoke-filled trains for the long commute to the suburbs. The commute has often now become the time to respond to digital messages, and once home people are then also frequently expected to do online training in the comfort of their homes. Travel to work, and the sanctuary of the home – all times previously free from employment-related labour – have now been incorporated into normal work expectations.
The all-seeing eye
More concerning than the extension of the working day, though, are the many ways through which employers now monitor every aspect of an employee’s work – reflecting both a collapse in trust, and an intent yet further to maximise extraction of the labour power of employees. This goes far beyond the use of digital fingerprints or retinal scans that check when an employee enters an employer’s premises, to the spatial monitoring of their personal digital devices and their every use of the employer’s digital management system; some are already microchipping their employees, in the name of making life easier for them (see for example, Metz, 2018; Schwartz, 2019).
Wasting time in digital meetings – just because we can meet, doesn’t mean we should waste so much time online in them!
Most face-to-face management meetings are a waste of time for the majority of people attending them. Invariably they are held for the sake of holding them, for the performance, and as a way of “management” controlling “staff”. The proliferation of online meetings during COVID-19 has dramatically exacerbated this problem, and the difficulty of picking up the sensuous physical indicators between people has actually also often caused damaging misunderstandings that would have been less likely during a physical meeting. Just because it is possible for many people to participate in online meetings at all hours of the day and night does not actually mean that this is a valuable use of time. Participating in online meetings is rarely productive work!
Digitally enabled co-production of content is not always a good use of time
The potential for many people to work together in creating a single document can be greatly facilitated by the use of digital authoring tools. However, this crafting process can actually take people much longer, and the net outcome is not necessarily any better than traditional editorial commentary systems. Working with different colleagues in various ways to craft texts through COVID-19 has been fascinating, and has reignited concerns I have previously had that most such usage of digital technologies actually increases the total time spent on “writing” without necessarily producing a better outcome. Furthermore, so called more “democratic” digital systems actually usually still contain subtle power structures. The first person to comment on a shared document, for example, exerts great influence on the remaining respondents. In contrast, where colleagues each respond to a central editor without seeing the comments of other team members, this “first respondent” bias is not present.
Why on earth would you want to attend a Zoom webinar where you aren’t even allowed to speak?
One of the greatest recent forms of control – and time-wasting – has been the proliferation of Zoom webinars, where an audience is invited to a view-only platform without being able to see each other or participate interactively beyond a limited chat facility. What a power relationship! Almost every company, international organisation (especially UN agencies) and civil society organisation I know has got on the bandwagon of inviting people to join Zoom webinars. If I were to accept all of the invitations I have received, my diary would be full multiple times over every hour of the day and night! But most of these are dreadfully presented, and a complete waste of time, quite simply because it is much quicker to read something than it is to listen to someone talking to the background of a shared overcrowded and poorly designed slide deck! This is not to suggest that we should not try to use digital technologies to interact at a distance, but we should try to do so in as open and democratic a way as possible (this is at least what we tried to do successfully with the ICT4D2020 Non-Conference, as well as with the launch of the Education for the Most Marginalised report #emmpostcovid19, for which more than 350 people were registered).
These are but a few of the countless ways through which digital technologies are being used to impose new systems of control, and to shift that balance of work and time away from the “centre” (or employer/manager) to the “periphery” (worker, employee, volunteer). In the academic part of my life, I encounter this increasing everyday exploitation in so many ways:
through the increased amount of time that online marking takes;
through the time-consuming online grant application forms that need to be completed;
in having to submit ghastly unintelligible spreadsheets online to report on grant expenditure;
through being required to use the frequently dreadful journal online processes when asked to review papers for them;
in being required to process and provide comments on job applications online;
in reviewing online fellowship and grant applications…
The list could go on, but my essential points are that many of us who experienced pre-online life find the new systems much more time-consuming than they were previously, and most of them represent increasingly centralised control of professional working life. In the name of efficiency and democracy, many digital “solutions” actually create systems that are much less efficient and much more centralised and controlling than they were previously.
This is also a call for change; a call for the wise to say enough is enough. It is a call for those designing these systems to make them serve the interests of the workers rather than the masters, a call for the overthrow of the tyrannical powers of the digital barons, and a challenge to those who seek digitally to enslave the masses. We, the people, have the power in our hands to reject such control – all we need to do is to determine our own digital boundaries (for a summary of mine, read here), and make those who wish to control us serve us through them instead. Above all, we need to reclaim our own physical and sensuous experience of reality, unmediated by the powers of digital control.
This has been a crazy week of over-dosing on Zoom for those attending the online IGF 2020 (made worse by too many slide-decks). How I wish I was physically back with real friends in real Poland, having real conversations and drinking real Polish beer and cherry vodka!
However, it was really great to participate in the GIZ-convened session WS #255 on Digital (in)accessability and universal design this morning (my time!). Huge thanks are due to Paul Horsters (from GIZ) who brought us all together, and to Edith Kimani (Deutsche Welle) who was an excellent moderator, as well as those providing sign language and captioning. It was also excellent to have such a diverse range of other speakers (none of whom used the dreaded slide-decks!): Bernd Schramm (GIZ), Irene Mbari-Kirika (inABLE), Bernard Chiira (Innovate Now), Claire Sibthorpe (GSMA) and Wairagala Wakabi (CIPESA).
As part of the workshop we wanted to produce an output that others could use in their own work, and so have crafted a mind-map in various formats that we hope will be of use to everyone committed to working with persons with disabilities to ensure universal digital inclusion. A WordArt summary of everything in the mind-maps is also shown below:
The mind map that includes summaries of all the individual presentations as well as responses to the questions asked during the workshop is available below in various formats:
Back in July, and indeed long before then, many of us were warning that we had to spend the summer working incredibly hard to ensure that the UK would be able to be resilient in the face of the likely rise in COVID-19 infections. It seems instead that the government took its eye off the ball, hoped that COVID-19 would somehow go away, and instead concentrated on trying to impose its will on the European Union over the Brexit trade negotiations.
In September (the 18th), when I was feeling particularly disgruntled with the incompetence and stupidity of our government, I therefore posted on Facebook a list of some of the things that I feared might happen over the next year under the heading “Now is the winter of our discontent… (Shakespeare, Richard III). I wonder how many of these will coincide in the UK over the next few months”. Having been for a long autumnal walk today (the picture above), the day after our Prime Minister announced a new 4-week lockdown from 5th November, I just thought that I would also post them here as a record of what happens over the next few months. I so hope that I am wrong, but I will update the content periodically to see what happens: green means that fortunately my fears were ill-founded; red indicates that sadly I was correct; and pink indicates that there is some evidence that we are heading this way! I should stress that these are not predictions, but instead imaginations of what a “perfect-storm” would look like. Already, our government has indebted future generations for years to come. There is no doubt that things will get very much worse before there is even a glimmer of hope that they will improve.
Dramatic increase in serious COVID-19 cases leading to overwhelming pressure on hospitals;
Crisis over Brexit negotiations resulting in serious trade disruptions and collapse in value of the pound (not least on 20th December when, in large part because of the coincidence of a rapid surge in the new COVID-19 strain with stalled Brexit negotiations, the Port of Dover announced its ferry terminal was closed to traffic leaving the UK, with massive lorry queues at Dover as borders closed; agreement on a Brexit deal on 24th December slowly improved matters, but border queues and bureaucratic changes subsequently led to further problems in January 2021, as evidenced by a BBC report on M&S, coverage of shirt exports in The Times, and the Cabinet Office Minister Michael Gove stating on 8 January 2021 that there would be significant disruption at borders)
Influenza pandemic (partly because of insufficient vaccines available) coinciding with COVID-19 pandemic causing additional crisis for NHS;
Food shortages (resulting from trade disruptions) leading to rising thefts from supermarkets and shops; (BBC News: trade disruptions at Felixstowe, 14th November 2020; BBC News: Brexit increasing food supply chain costs)
Serious flooding in much of lowland England as a result of heavy rains in October and November (BBC reports heavy rainfall and risk of flooding, 3rd October; BBC also reports homes evacuated in South West after downpours and flooding on 19th December; and serious flooding in Bedfordshire and elsewhere reported on Christmas Day – BBC)
Increasing power outages resulting from gas shortages, lack of sunshine for solar power, and storm damage;
Standstill caused by heavy early snowfalls in late December (Glad that this did not happen)
Mass graves dug in major cities because crematoria and mortuaries are overcome by demand (not yet, but overflow mortuaries were being created in early 2021 – BBC News: emergency mortuary in a Surrey woodland, 11th January 2021)
Very significant riots as more and more people realise that Brexit was a huge mistake;
Her Majesty Queen Elizabeth II dies of COVID-19 complications, and mass demonstrations against Prince Charles lead to his abdication and the declaration of a Republic;
Northern Ireland joins a united Eire;
Scotland and Wales declare unilateral independence from the UK, and form a wide-ranging mutual interest pact…
Who will be the sun of York to turn this winter into glorious summer?
Having written quite extensively about the dire responses of the British government to the crises surrounding COVID-19 earlier in the year, I have held back from further criticism and writing about this for almost two months. It seems extraordinary, though, how few lessons seem to have been learnt in Europe from our experiences with COVID-19 so far, and how so many people seem to be surprised at its recent resurgence. As many of us have said for a long time, this was only to be expected, and is a direct result of the behaviour both of individuals and also of governments. Above all, it seems to reflect the selfish individualism, rather than communal responsibility, that has come to dominate many societies in Europe and North America in the 21st century.
The lack of research as to exactly why different countries have such varying mortality rates is also shocking (see my The influence of environmental factors on COVID-19 written in May). As a global community, very much more attention should have been given to this, so that we could by now have a better understanding of what has worked, and what has failed. Answers to these questions would enable governments now to be implementing better policies across the world to mitigate the COVID-19 related deaths that are becoming ever more numerous.
The chart below indicates the very differing numbers of deaths from COVID-19 per 100,000 population in the countries of the world that have had more than 5,000 deaths as of 21st September 2020 (data from https://coronavirus.thebaselab.com). While all such data are notoriously problematic, reported deaths from COVID-19 are more reliable than are data for case numbers (see my Data and the scandal of the UK’s COVID-19 survival rate written in April). Deaths above the usual average (excess mortality) are probably an even better measure, but are unfortunately much more difficult to obtain at a global scale. Furthermore, it must be emphasised that this sample does not include all those countries that have had far fewer deaths, and that much more research is needed in explaining why it is indeed these 25 countries that have had the most deaths in the first place.
This chart raises many unanswered questions, but does at least show two key things:
Some countries have “performed” very much “better” and others much “worse” than average. India, Indonesia, Germany and Pakistan appear to have performed significantly better than Peru and Belgium. Why is it, for example, that Peru has 30 times more deaths per 100,000 than does Pakistan? Yet it is extremely difficult to see what either of these groups of countries might have internally in common.
There nevertheless seems to be a broad group of very different countries including Sweden, Spain, the UK, Brazil, Chile, Ecuador and the USA that have so far had between 50 and 70 deaths per 100,000. Again, these countries are very diverse, be it in terms of size, demographic structure, political views, or government policies towards COVID-19, although most seem to be fairly right wing and individualistic. Interestingly Sweden with its much more relaxed policy towards social restrictions during COVID-19 appears to have done neither better nor worse than other countries in this group.
The challenge, of course, is to try to understand or explain these patterns but sadly too little research has been done on this in a systematic way to be able to draw any sound conclusions. Put simply, we do not yet really know why countries have had such diverse fortunes. Nevertheless, it is possible to begin to draw some tentative conclusions:
Much has been made of the environmental factors possibly influencing the spread of COVID-19, but very little actual process-based research has satisfactorily shown how viable SARS-CoV-2 actually is under a wide range of environmental conditions (see my The influence of environmental factors on Covid-19: towards a research agenda from May). The data above serve as a cautionary warning: countries with similar broad environments tend to have very differing COVID-19 trajectories. Why, for example, are Latin American countries suffering much worse than those of Africa and Asia, although they share many environmental characteristics in common?
A second challenging conclusion is that the actual policies followed by governments may not be that significant in influencing the spread of COVID-19. It is thus striking that Sweden, which has followed very different policies from its neighbours, has not done significantly better or worse than them or indeed other countries such as the UK and the USA, which are widely seen to have failed in dealing with COVID-19.
In searching for explanations, it is also pertinent to see whether these rates could in any way be related to varying levels of inequality. However, using the Gini coefficient as a measure of inequality there seems to be no significant relationship with mortality rates (R2 = 0.027).
Religious beliefs and practices, likewise, do not seem to be particularly good at explaining these differences, although nominally Christian (or atheist) countries do fill the top 15 places in terms of mortality rates, before Iran in 16th place. Other countries with large percentages of Muslims, including Turkey, Egypt, Indonesia and Pakistan, all have fewer than 10 deaths per 100,000. The difference between India and Pakistan (neighbours in South Asia) is particularly interesting, in that India (predominantly Hindu) has a mortality rate more than double that of Pakistan. No satisfactory explanation for this has yet been identified.
There has also been some speculation that individualistic societies, where people care more about themselves than they do about being responsible for their neighbours, are having higher mortality rates than do more communal societies, and in this respect the contrasts between the USA and China are indeed very marked. It is extremely difficult to measure individualism, but the correlation between the Geert Hofstede Individualism (IDV) Index and mortality rates is weak (R2 = 0.048).
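The weak relationships reported above (R2 = 0.027 for the Gini coefficient, R2 = 0.048 for the IDV Index) can be reproduced with a very simple calculation: R2 for a straight-line fit is just the squared Pearson correlation between the two per-country series. A minimal sketch in Python follows; the figures in it are illustrative stand-ins, not the actual Gini, IDV or mortality data.

```python
import numpy as np

# Illustrative (made-up) values standing in for per-country data:
# an inequality or individualism index, and COVID-19 deaths per 100,000.
index_score = np.array([25.0, 32.0, 38.0, 41.0, 47.0, 53.0, 60.0])
deaths_per_100k = np.array([58.0, 12.0, 65.0, 8.0, 60.0, 30.0, 55.0])

# For a simple linear fit, R^2 is the squared Pearson correlation coefficient.
r = np.corrcoef(index_score, deaths_per_100k)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.3f}")
```

Values as low as 0.027 or 0.048 on this scale (which runs from 0 to 1) mean the index explains only a few per cent of the variation in mortality rates, which is why no strong claim can be drawn from either measure.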
No single explanation can simply account for all of these differences. An important conclusion must therefore be that there is indeed not a single solution (apart from a vaccine or other medical interventions) that is likely to prevent dramatic increases in the prevalence of COVID-19 in these countries, and that many more deaths are therefore certain over the next six months. As individuals, we all know what can make a difference: avoid large groups, wear masks, stay outside as much as possible, wash our hands regularly, and above all act responsibly with respect to others. At all times we must act as if we have COVID-19, and imagine how we would feel if we were the other people with whom we were interacting, and they knew that we had COVID-19. If there is any solution to COVID-19, it must be that we act responsibly rather than selfishly (see my A differentiated, responsibilities-based approach to living with the Covid-19 pandemic written in June).
The full list of countries with >5000 deaths by 21st September and therefore included in this analysis is (in descending order of deaths per 100,000) : Peru, Belgium, Spain, Brazil, Chile, Ecuador, USA, UK, Italy, Sweden, Mexico, France, Colombia, Netherlands, Argentina, Iran, South Africa, Canada, Russia, Germany, Turkey, India, Egypt, Indonesia, Pakistan
I have been exploring the ways through which a sample of countries (mainly the largest ones, European countries, and a smattering of others in Africa, Asia and Latin America) have fared through the COVID-19 pandemic, regularly plotting various correlations between different variables. The challenge, of course, is that the data are hugely unreliable, and reflect different definitions, different cultural practices, different abilities to test, and different political interests (amongst many other factors). I have long argued that data on deaths (including those over and above the norm) are more reliable than those on reported cases, and also that we should not use absolute figures, but rather ratios or percentages (such as deaths per 1 million people).
However, exploring ideas about risk today, I have discovered some fascinating insights. The Table below indicates the number of new cases reported per 100,000 total population on 24th July in the sample of countries I have been examining (based on data from thebaselab). In essence, let’s assume that if you are prepared to go out and about (perhaps even without a mask) in a country that had 1 new reported case per 100,000 yesterday, then you would feel happy with doing so in any country scoring below 1 in the Table. If you were happy to double the risk, this would include all countries below 2, and so on. Put another way, the risk in Brazil is about 41 times that in Germany; that in the USA is 21 times as high as in the UK. This emphasises once again the critical importance of not using absolute numbers, but rather focusing on ratios. Although I have written extensively about the appalling way in which the UK government has handled COVID-19, and I remain certain that Johnson and Cummings, as well as others close to them, are responsible for many more deaths than might reasonably have been expected, this figure for the UK is actually quite reassuring.
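The arithmetic behind these comparisons is straightforward: normalise each country’s daily new cases by its population, then take the ratio of the two rates. A minimal sketch, using made-up figures for two hypothetical countries rather than the actual thebaselab data:

```python
# Illustrative figures (not the actual thebaselab data): new reported cases
# on a single day, and total population, for two hypothetical countries.
cases = {"Country A": 24_000, "Country B": 580}
population = {"Country A": 210_000_000, "Country B": 83_000_000}

# New cases per 100,000 population - the ratio the text argues for,
# rather than a comparison of absolute case counts.
rate = {c: cases[c] / population[c] * 100_000 for c in cases}

# Relative risk of Country A versus Country B on that day.
relative_risk = rate["Country A"] / rate["Country B"]
print(rate)
print(f"Relative risk: {relative_risk:.1f}x")
```

Note how the country with far more absolute cases is not automatically the riskier one: everything depends on the denominator, which is exactly why ratios rather than raw counts should be compared.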
The challenge, of course, is that it is very difficult to interpret these figures because of the uncertainties associated with reported cases – and the data are only for a single day. Many more people will have COVID-19 without it being reported, and it seems clear that asymptomatic carriers can also infect people. Nevertheless, for those going on holiday in Europe this summer, it would appear that the risk of going to Italy is about one-twelfth that of going to Spain at the moment.
What risk level are you going to be happy with? And, wherever you go it is surely wise to wear a mask to protect others in case you are an asymptomatic carrier. Stay well!