I recently spent three hours completing an online financial expenses claim form for our university’s finance department relating to an overseas research trip. There were only 20 items of expenditure to be entered. However, each of the receipts had to be copied, reduced in size to suit the requirements of the software and uploaded into the system, along with separate details of the credit card payments for them. These had to be matched with numbered explanatory entries on another page of the online form, none of which could be automatically generated, and each of which required separate keyboard entry. On average, it therefore took me nine minutes per entry. I’m sure that anyone who has been forced to use Unit4’s Agresso software will know just how cumbersome and time-consuming it is. Of course, it purports to reduce the time spent by staff in the accounts department, thus reducing the university’s expenditure on staffing, but this comes at a significant cost in terms of the amount of time that I, as a user, have to spend. In the past, using hard copy receipts and forms, this task would have taken me much less than an hour to complete. My time is precious, and this represents a significant waste of time and money for me and the university, over and above the costs that the university has incurred in purchasing the software and training staff in its use.
This is but one example of the ways in which digital tech is being designed and used to shift the expenditure of labour from the top downwards, and from the centre to the periphery (see my 2020 post on this for more examples). End users now have to do the work that those at the centre of networks (such as organisations, institutions, or governments) previously had to do; end users produce and upload the data that the centre formerly collected and processed. This is one of the main reasons why workers and citizens are now forced to spend considerably more time and effort completing mundane tasks, for the benefit of more powerful centres (and people) who give them no choice, and force them to conform to the digital systems that they control.
Examples of everyday digital oppression
There are many examples of this tendency, but the following currently seem to be the most problematic (over and above the ever-present challenges of spam, hacking and online fraud; I do not, though, address issues such as digital violence and sexual harassment here, because I have written about them elsewhere, and want in this piece to focus instead on the normal, everyday processes through which structural imbalances are designed and enforced in the use of digital tech):
The (ab)use of e-mails, especially when disseminated by the centre to groups of people. It is easy to send e-mails from the centre to many people at the periphery or down the hierarchy, but the total burden of time and effort for all the recipients can be enormous. This is particularly true of copy correspondence, which adds considerably to the burden (see my e-mail reflections written in 2010 but still valid!). It is increasingly difficult for many people to do any constructive work, because they are inundated with e-mails.
Being forced to download attachments and print them off for meetings. Some people “at the centre” still require those attending meetings to print off hard copies of documents before attending. This is quite ridiculous, since it vastly increases the total amount of time and effort involved. If hard copy materials are required, these should always be produced and distributed by the centre and not the end user.
Extending the working day through access to and use of digital tech. The above two observations are examples of the general principle that digital tech has been used very widely to extend the working day, without paying staff for this increase. The expectations that e-mails can be answered at home after “work”, or that personal training can be done in “spare time”, are but two ways through which this additional expropriation of surplus value is achieved.
Companies requiring users to complete online forms and upload information. This widespread practice is one of the most common ways through which companies reduce their own labour costs and increase the burden on those for whom they are intended to be providing services. Creating online accounts, logging on with passwords, and then filling in online forms has become increasingly onerous for users, especially when the forms and systems are problematic or don’t have options for what the consumer wants to enquire about. Such systems also take little account of the needs of people with disabilities, or those ageing with dementia, who often have very great difficulty in interacting with the technology.
Users having to download information, rather than receiving it automatically at their convenience. Centres, be they companies or organisations, now almost universally require users to log on to their systems and go through complex, time-consuming protocols to gain access to the information that centres wish to disseminate (banks, financial organisations, and utility companies are notorious for this). In the past, such material was delivered to users’ letter boxes and could simply be accessed by opening an envelope. Again, this is to the benefit of the centres rather than the users.
Useless chatbots, FAQs, online help options and voice menus on phone calls. Numerous organisations require consumers/users to go through digital systems that are quite simply not fit for purpose, and that often take a very considerable amount of users’ time (and indeed of connectivity costs). While some systems do provide basic information reasonably well, the majority do not, and require users to spend ages trying to find out relevant information. Many organisations also now make it very difficult for users to find alternative ways of communicating with them, such as by telephone. Even when one can get through to a telephone number and negotiate the lengthy and confusing numerical or voice recognition options, it frequently takes an extremely long time (often well over 30 minutes, or a 16th of an eight-hour working day) before it is possible to speak with someone. Sadly, human responders, once contacted, are also often poorly trained and frequently cannot give accurate answers.
Having to use yet another digital system chosen by centres and leaders to exploit you in their own interests. There are now so many different online cloud systems for communicating with each other at work (or play), such as Microsoft’s Teams, Google’s Workspace, Slack, Trello, Asana, and Basecamp (to name but a few). None of us can expect to be adept at using all of them. However, leaders of organisations and teams generally impose their own preferred software solution (or those ordained by their organisation) on members. Rarely are they willing to change their own preferences to suit those of other team members. This reinforces power relationships, and those lower down the food chain are forced to comply with solutions that may well not suit them.
Filling in forms online that are badly designed, crash on you, and often don’t have a save function for partially completed material. I am finding this to be an increasingly common and very frustrating form of hidden abuse. The number of times I have had to fill in forms online that take far longer than just writing a document or sending an e-mail is becoming ever greater. This is particularly galling when the software freezes or the save function does not work, and everything gets lost, forcing me to start all over again. The hours I have lost in this way (particularly in completing documents for UN agencies) are innumerable.
Time wasted scrolling through quantities of inane social media to find a message that someone has sent you, and about which they are now complaining that you have not yet responded. The answer to this is simply not to use social media, and especially groups (see my practices), or to “unfriend” people who do this, but increasingly this is yet another means through which centres seek to control and exploit those at the peripheries or lower down the work hierarchy.
Centres simply failing to respond to digital correspondence, especially with complaints, and forcing users to keep chasing them online. I have lost count of the number of times I have had to fill in an online form, usually about something I have been asked to do by a company or agency, or concerning an appointment or complaint, only for them never to reply. This forces me to waste yet further time trying to contact them about why they haven’t responded!
This list of examples could be added to at great length, and mainly reflects my own current angst (for earlier examples see On managerial control and the tyranny of digital technologies). To be sure, not all digital systems are as appalling as the above would suggest, and credit should be given where due. The UK’s digital service, https://gov.uk is generally a notable positive exception to this generalisation, and I was, for example, very impressed when I recently had to use it to renew a passport. However, to change this situation it is necessary to understand its causes, the most important of which are discussed below.
The rise of digital capitalism and the causes of digital oppression
Five main causes lie at the heart of the above challenges. Underlying them all, though, is the notion that it is right and proper for companies to seek to expand their markets and lower their costs of production in the pursuit of growth. Capital accumulation is one of the defining (and problematic) characteristics of all forms of the capitalist mode of production, and new digital technologies have two key attributes in supporting this process: first, the use of digital tech very rapidly accelerates all forms of human interaction; and second, their use can replace much human labour (thus increasing the labour productivity of those remaining in employment). On the assumption that the cost of introducing digital tech is lower than the cost of human labour, digital tech can be used dramatically to increase the rates of capital accumulation and surplus profit acquisition by the owners of the means of production. However, if there is insufficient demand in the market, not least because of falling purchasing power as a result of reduced levels of human labour, then the twin crises of realisation and accumulation will inevitably, ultimately, cause fundamental problems for the system as a whole. It must also be realised that (as yet) digital tech does not actually have any power of its own. The power lies with those who conceive, design, construct and market these technologies in their own interests. As the current ethical crisis around AI clearly indicates, the scientists who support this process are as much to blame for its faults as are the owners of capital who pay them. Five aspects of this underlying principle can be seen at work in producing the current situation, whereby those at the system peripheries or the bottom of hierarchies are being increasingly oppressed through the uses of digital tech (as described in the examples above):
First, labour costs have generally long been perceived as being the critical cost factor in many industrial and commercial sectors. The digital tech sector has therefore been very adept at persuading other companies and organisations to do away with human labour and replace it with technology in the productive process. The labour that is left must be forced to work longer hours while also increasing its productivity. However, companies and organisations have also been persuaded that they can make further significant cost savings by ensuring that consumers and staff lower down the hierarchy do much of the work for themselves by, for example, filling in online forms and using chatbots as discussed above. Digital tech is used to shift the balance of time spent on tasks to the consumers or users. This insidious shift of emphasis is a classic expression of the digital oppression that is now increasingly being felt by people across the world.
A second significant feature of capitalist enterprises is their need to create as uniform a market as possible so that they are then able to sell as many of the same products or services as they can. This emphasis on uniformity requires users to adjust their previously diverse human behaviours to conform to the uniform digital systems that are imposed on them. It lowers overall costs, and enables markets rapidly to be expanded. We experience this every time we have to choose from a number of options we are given on a phone call, or fill in an online form, where what we are concerned about does not easily fit into any of the options we are given. Similarly, we encounter it every time someone wanting us to do something requires us to use their software package or app rather than our preferred one. Again, we encounter a different form of digital oppression.
Third, the increasing emphasis and reliance on digital systems means that the human labour remaining in organisations and companies becomes increasingly overstretched. Staff who must use digital systems through which they are constantly bombarded with requests and actions become ever more oppressed, even without any formal increase in the amount of time that they work. Furthermore, the difficulty of finding qualified and knowledgeable staff competent enough to give a good service to clients and customers means that organisations are increasingly not capable of responding satisfactorily to those who don’t fit into the uniform-demanding digital systems that they now operate. This is why some companies make it as difficult as possible for clients and customers actually to speak with a human being among their staff, and why the quality of service they provide can be so bad. Some turn to call centres overseas, which often provide a dire service on poor quality phone lines, staffed by people who cannot competently speak or understand the language of the customers.
Fourth, much of the software, and many of the systems, that governments, organisations and companies are persuaded to buy by the tech sector are poorly designed, poorly constructed and poorly implemented. As but one example, in 2015 the abandoned NHS patient record system in the UK had “so far cost the taxpayer nearly £10bn, with the final bill for what would have been the world’s largest civilian computer system likely to be several hundreds of millions of pounds higher, according to a highly critical report from parliament’s public spending watchdog” (The Guardian, 2015). The quality of design and programming in many apps, especially when outsourced to countries with very different cultures of coding, is often very low, and it is unsurprising that the functionality of many digital systems is so dire. Despite much rhetoric about human-computer interaction and user-centred design, the reality is that much tech is still built by people with little real knowledge of, and expertise in, what users really want and how best to make it happen. All too often, they are themselves brought up within the culture of uniformity that limits real quality innovation.
Finally, the scientism (science’s belief in itself) that has come to dominate the tech sector and its role in human societies has largely served the interests of the rich and powerful, not least through the hope that aspirant digital scientists have to join that elite themselves. Ultimately, this serves the interests of the few rather than the many. Those on the peripheries or at the lower end of hierarchies have instead become increasingly oppressed and enslaved as a result of the propagation of digital tech across all aspects of human life (see my Freedom, enslavement and the digital barons: a thought experiment). It is becoming ever more crucial to challenge scientism, and counter the belief that science in general, and digital tech in particular, has the ability to solve all of the world’s problems.
What’s to be done
None of these challenges and none of the reasons underlying them need to be as they are. There is nothing sacrosanct or inevitable about the design, creation and use of digital tech. We do not need it to be as it is. It is only so because of the interests of the scientists who make it and the owners of the companies who pay them to do so.
There are numerous ways through which we can challenge the increasingly dominant hegemony of the digital tech sector in human society at both an individual and an institutional level. I concentrate here on suggestions for individual actions that can help us regain our humanity, leaving the discussion of the important regulatory transformations that are essential at a structural level for a future post. After all, it is only as individuals in our daily actions that we can ever regain any real power over the structures that oppress our “selves”. Any actions that can help change the underlying structures and practices giving rise to the oppressions exemplified at the start of this post are of value, and they will vary according to our individual space-time conjunctures. I offer the following as an initial step to what might be termed a revolutionary practice of digital freedom:
Create multiple identities for ourselves. As individuals we are much more complex than the uniformity that digital systems wish to impose on us. We are so much more than a single digital identity. Hence, we must do all we can to create multiple identities for ourselves as individuals, and resist in every way possible attempts to control and surveil us through the imposition of such things as single digital identities.
We must resist being forced to use specific digital technologies. We should always refuse to use digital tech when we can do something perfectly well without it. We must likewise very strongly resist attempts by companies, governments and organisations to force us to use a single piece of tech (hardware or software) to do something, and always demand that they provide a solution through our individually preferred technologies. At a banal level, for example, if you are happy with using Zoom and Apple’s Keynote, Mail, Numbers and Pages, you should never be forced by anyone to use Microsoft Teams or Google Workspace. If people or organisations are not willing to adapt to your individual needs they are probably not worth working with (or for) anyway. Many societies now require restaurants to provide details of all possible food preferences and allergies, so why should we accept being oppressed by digital tech companies who only wish us to conform to one uniform system?
We should never accept poor quality digital systems. If you cannot do something you want to through an organisation’s digital systems, then it is always worth complaining about it. Writing a letter of complaint, copied widely to relevant ombudsmen, is not only quicker than trying to use poor quality tech systems, but numerous complaints can cumulatively help to change organisations.
We must always challenge scientism, and emphasise the importance of the humanities in answering the questions that scientists cannot answer. Our particular structure of science primarily serves the interests of scientists, who work in very particular ways. This model of science is overwhelmingly dominant in the way in which digital tech is created. Although scientists can produce impressive results, they are not the guardians of all knowledge, and they are by no means always right. Almost every theory that has ever been constructed, for example, has at some later time been disproved. We must therefore resist all efforts to make science (or STEM subjects) dominant in our education systems. We must cherish the arts and humanities as being just as valuable for the future health of the societies of which we are a part.
We should identify and challenge the interests underlying a particular digital development. All too often innovations in digital tech are seen as being inevitable and natural. This is quite simply not the case. All developments of new technology serve particular interests, almost always of the rich and powerful. To create a fairer and more equal society this must change. The scientists who have developed generative AI, for example, are completely responsible for its implications, and it is ridiculous that they should now be saying that it has gone too far and should somehow be controlled. They did not have to create it as it is in the first place.
We need to implement our own digital systems to manage emails and social media. It is perfectly possible to reduce the amount of digital bombardment that we receive, but we need to manage this consciously and practically (see my Reflections on e-mails). Simple ways to start doing this are: file all copy correspondence separately; always remove yourself from mailing lists unless you really want to receive messages (you can always rejoin later); limit your participation in social media (especially WhatsApp) groups; and keep a record of the time you spend each day doing digital tasks (it will amaze you) and think of how you could use this time more productively!
Take time offline/offgrid to regain our humanity. It is perfectly possible still to live life offline and offgrid. Many of the world’s poorest people have always done so. The more we are offline, the more we realise that we do not need always to be connected digitally. Some time ago I created the hashtag #1in7offline, to encourage us to spend a day a week offline, or, if we cannot do that, an hour every seven hours offline. Not only does this reduce our electricity consumption (and is thus better for the physical environment), but it also gives us time to regain our experience of nature, thereby regaining our humanity. The physical world is still much better than the virtual world, despite the huge amount of pressure from digital tech companies for us to believe otherwise. Remember that if we don’t use physical objects such as banknotes and coins, or physical letters and postcards, we will lose them. Think, for example, of the implications of this, not least in terms of the loss of the physical beauty of the graphics and design on banknotes or stamps, key expressions of our varying national identities (note again that digital leads to bland uniformity). Remember too that every digital transaction that we make provides companies and governments with information about us that they then use to generate further profit or to surveil us ever more precisely. Being offline and offgrid is being truly revolutionary.
This post argues that a coalition of interests around economic and demographic growth has not only created significant inequalities across the world, but has also been the main factor driving global environmental degradation. It is demographic growth in combination with a particular form of tech-led capitalist economic growth that has been the main driver of global environmental change, of which climate change is but a small part.
Economic growth has for many decades been seen by economists and international organisations alike as the key means through which poverty can be eliminated, especially in the economically poorer countries of the world. This powerful mantra lay at the heart of the Millennium Development Goals (MDGs, 2000-2015) and has more recently been central to aspirations for the achievement of the Sustainable Development Goals (SDGs, 2015-2030). Yet, as I have frequently argued elsewhere,[i] these aspirations have never been achieved, they focus on absolute poverty rather than relative poverty, and the resultant unfettered economic growth has almost always been associated with an increase in inequalities. For those concerned with equity and who define “development” primarily as the reduction of inequalities, policies designed to increase growth alone are doomed to failure and need to be replaced.
National policies and international frameworks focused on growth primarily support the interests of those private sector companies and global corporations that have worked so assiduously to shape the UN rhetoric around economic growth and innovation. Digital tech companies have long been at the forefront of this, not only driving growth, but also reaping the benefits of so doing.[ii] Economic growth is deemed to be essential both to expand markets and also to increase labour productivity, whereby owners of the means of production can extract surplus value.
In trying to consider alternative models of socio-economic activity, I have often used the notion of a “no-growth” economy as a heuristic device, encouraging audiences to consider how economic activity might be organised if growth were somehow prohibited. Although there are many potential outcomes, one of the most interesting is the thought that the pressures to achieve a reduction in inequalities might increase under such conditions, thus leading to a fairer and more equitable society. I have also found the work of the Post-Autistic Economics Network to be a helpful source of inspiration, challenging as it does many of the usually taken-for-granted assumptions of neo-classical (and indeed neo-liberal) economics.[iii]
Recent debates about the balance between the positive and negative impacts of demographic growth on the economy have highlighted their inextricable intertwining with the rhetorics of economic growth.[iv] On the one hand, there are those who argue that ageing populations with few young and economically productive people are deeply problematic for economic growth, and that policies to encourage higher birth rates or immigration are essential to enable economic viability. I thus well remember, years ago, the French advertising campaign to encourage families to have more children, beautifully encapsulated in this postcard:
On the other are those who point to a demographic dividend in Africa, through which increasing numbers of young people are going to drive the economy forward, fuelled especially by the potential of digital tech. See, for example, this image below from Invest Africa in an article entitled How can Africa harness its demographic dividend (and note its emphasis on digital tech).
Both arguments are deeply problematic. In the African case, this naïve dream is only going to be possible if young people are well educated and jobs are available for them; it seems more likely that this will actually be a demographic millstone rather than a dividend. The “problem” of an ageing population likewise only becomes serious if systems are put in place to extend human life at high cost for long periods of time, or if labour productivity stagnates or declines.[v]
Much of the international debate concerning demographic change has been articulated around its interconnectedness with economic growth. Put simply, the interests underlying the continued drive for economic growth are frequently the same as those that advocate for population increase as being positive, and that claim technology can continue to ensure a healthy lifestyle for a very much larger human population. Surprisingly, rather less interest has been devoted to what human experiences of such changes might be. This is especially so when the twin mantras of economic growth and demographic growth are confronted by their combined impact on the environment. This is particularly evident in the reactions over the last 50 years to The Club of Rome’s 1972 report on Limits to Growth,[vi] and to the much more recent and controversial film Planet of the Humans, produced by Michael Moore in 2019.
Limits to Growth, Planet of the Humans and the legacy of Thomas Malthus
In 1972, the Club of Rome published its prescient report entitled Limits to Growth, which argued that if the then growth trends in population, industrialisation, resource use and pollution continued unchecked, then the carrying capacity of the earth would be reached some time within the following century.[vii] I remember distinctly the wake-up call that this provided for me as an undergraduate, and, thinking back to those days, I have been fascinated by how its message seemed increasingly to be ignored in the ensuing decades. Few countries apart from China (see below) really responded to this message, although some such as India made tentative efforts to address it. I distinctly remember, for example, being in Sonua market in what was then South Bihar (now Jharkhand) in 1976 and seeing this painted slogan of two parents and two children that formed part of the government’s 20 point programme during the 21 month state of emergency declared by Indira Gandhi.
India’s population was then 637.45 million; in 2023 it is 1,428.63 million. The policy was not a success.
Interestingly, 30 years after the Club of Rome report, the authors published an update, in which they concluded that “it is a sad fact that humanity has largely squandered the past 30 years in futile debates and well intentioned, but halfhearted, responses to the global ecological challenge”. This is an overly generous observation, largely because of the very specific interests that have underlain economic and demographic change in subsequent years. In essence, as noted above, the owners of the world’s major companies, supported by many economists, have argued convincingly that both economic and demographic growth are essential for the future success of humanity, that the new SDGs are indeed sustainable,[viii] and that technology can continue to provide innovative solutions to the increasing problems caused by the pressure of people on the planet. I find it extraordinary to think that in my lifetime the world’s population has risen by 189%, from 2.77 billion people to 8 billion people. What I find more frightening, though, is that there is nothing in the UN’s development goals really about population growth,[ix] and there was almost universal condemnation in the world’s capitalist countries when China introduced its one-child-per-family policy in 1980.[x] Widespread criticism of the Club of Rome’s report and others who held their views was based primarily on the grounds that they were neo-Malthusian,[xi] and that the world was coping perfectly well, in large part through technological advances that were overcoming the challenges of an increasing population. Indeed, the observation that very much higher levels of population have been able to live on the planet over the last 50 years would seem to support such a view. However, this fails to recognise that very many of those people live in abject poverty and misery, and that the environmental impact of such growth has been very significant indeed.
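For readers who want to check the population growth arithmetic, here is a minimal sketch using only the two figures quoted in the text (2.77 billion then, 8 billion now):

```python
# Check the population growth arithmetic using the figures quoted in the text.
start = 2.77   # world population in billions, at the start of the period
end = 8.0      # world population in billions, in 2022

ratio = end / start                 # how many times larger the population now is
increase_pct = (ratio - 1) * 100    # the percentage increase over the period

print(f"Population is now {ratio:.2f} times its earlier level")
print(f"That is an increase of {increase_pct:.0f}%")
```

The ratio comes out at roughly 2.89, i.e. an increase of about 189%: the population has almost, but not quite, tripled over the period.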
Unfortunately, much of the focus of the international community has been captured by the rhetoric around climate change, which has served to reduce emphasis on the wider environmental impact caused by the double mantra of economic and demographic growth. Climate change causes nothing; it is the factors giving rise to changes in the climate that are the ultimate cause and the real problem that needs addressing.
These issues were brought to the fore by the film Planet of the Humans, produced by Michael Moore and directed by Jeff Gibbs in 2019. This has been very widely criticised by those within the so-called environmental and green lobbies on the grounds that it was outdated and misleading, especially concerning the scientific evidence and more recent developments in renewable energy. However, many of these criticisms miss the fundamental point of the film, which was that our economic system, based on the present model of capitalist growth, is fundamentally unsustainable, particularly in the context of continued demographic growth.[xii]
Many of these arguments might appear to smack of neo-Malthusianism, which has been almost universally condemned from a wide range of angles, as were the criticisms of Malthus’ original works.[xiii] Engels, writing in 1844,[xiv] put it this way: technological and scientific “progress is as unlimited and at least as rapid as that of population”. Many continue to agree with Engels’ proposition, or at least hope that he was right. However, the scale of human impact on the environment today is vastly different from when Malthus first wrote his Essay on the Principle of Population at the end of the 18th century, and the world’s population is now more than twice what it was when Limits to Growth was first published. People are seriously talking about and investing in the colonisation of outer space to provide continued sustenance for the world; technology once again to the fore. My emphasis in this piece, though, is not so much to take issue with the many diverse arguments of those who challenge neo-Malthusianism, but rather, and much more simply, to suggest that the dominant global focus on climate alone is hugely damaging because it fails to address the wider environmental impacts of our thirst for growth.
“Climate change” has become a popular focus of concern and political protest, but as I have argued extensively elsewhere[xv] it is a deeply problematic notion conceptually, especially when abbreviated to just these two words “climate” and “change”, ignoring the words “human” and “induced”. All too often, it is used in a way that externalises it as being somehow separate from the human actions that cause weather patterns to change, while at the same time also implying that humans can somehow solve it without addressing the deeper structural problems facing the world. Likewise, all too frequently, the answer to the problem of “climate change” is naïvely deemed to be an over-simplified reduction in carbon emissions. Leaders of the digital tech sector, with their voracious appetite for growth and innovation, are eager to comply with this agenda, while failing almost completely to recognise the enormous harms that they are causing to other aspects of the environment. By focusing largely on “climate change” they can feel good whilst also maintaining their lifeblood of economic and demographic growth that drives their creation of profit.
This is most definitely not to suggest that changes in temperature, rainfall, and wind patterns are unimportant; very far from it. But it is to argue that these are caused fundamentally by the twin mantras of economic and demographic growth that have increasingly dominated the world over the last century, rather than by some exogenous notion of climate change. More worryingly, these mantras have been fuelled still further by the unachievable and unsustainable Sustainable Development Goals that have become part of the problem rather than a solution. Contrary to much popular rhetoric, the very dramatic increases in global carbon emissions do not appear to have begun until the beginning of the 20th century, and coincide very closely with increases in world population.[xvi] Put another way, had global population not increased as dramatically as it has done over the last century, then those living here would not have been faced with the impending crisis that we now urgently need to address.
Moreover, and I would suggest more importantly, the emphasis on “climate change” has largely distracted attention from the crucial effort that must be placed on the wider environmental impacts of economic-demographic growth. Climate is but a small part of the physical environment, which includes the lithosphere, biosphere and hydrosphere, alongside the atmosphere. By focusing so heavily on climate, and ways that digital tech can be used to reduce carbon emissions, activists, academics, politicians, business leaders, civil society organisations and citizens alike are missing the bigger picture. The design and use of digital tech is causing significant environmental harms that tend to be ignored in the search for a solution to climate change.[xvii]
In conclusion: a new beginning
This post has contributed to my previous body of work by articulating five main inter-related propositions:
There has been a coalition of interests between those advocating economic and demographic growth, largely reflecting the determinant structures of contemporary global capitalism.[xviii]
This is archetypically reflected in the power of the digital tech sector, which has permeated the UN system.[xix]
The dramatic impact of the digital tech sector on the wider physical environment has been largely hidden by an overwhelming global emphasis on climate change, and ways through which digital tech can reduce carbon emissions.
It is important to understand climate change as a result and not a cause, and therefore focus on doing something about the real causes of climate change (the economic-demographic growth mantra) rather than primarily addressing carbon emissions.
It is essential to understand changes to the climate as but a part of the much wider negative environmental impacts of the coalition of interests underlying the economic-demographic growth mantra.
Are we facing a new era of increasing mass-migration, famine, disease and warfare? Is the economic growth model that has dominated the last century going to consume itself in a falò delle vanità? Might there be less inequality and poverty in the world if there were fewer people and the wealth that was created was shared more equally? Can we imagine a beautiful physical environment that could be created out of the desolate and scourged world we are currently creating? How might digital tech be used to serve the interests of the poorest and most marginalised more than those of the rich and powerful? These questions are all inter-related, and we need to find answers to them before it is too late.
[iii] For a brief history, see http://www.paecon.net/HistoryPAE.html; see also Stiglitz, J.E. (2019) People, Power and Profits: Progressive Capitalism for an Age of Discontent, Allen Lane, and Stiglitz, J.E. (2002) Globalization and its discontents, New York: W.W. Norton & Company
[v] Efforts by the Digital Barons (leaders of major US digital corporations) to extend human life far beyond its present span, such as those by Zuckerberg (see CNET, 2013), Larry Page (founding Calico, an Alphabet subsidiary, in 2013), Jeff Bezos (with his investment in Altos Labs, MIT Technology Review in 2021) and Larry Ellison (founder of Oracle, investing in ageing research, see Time, 2017), to name a few, are deeply worrying, both because only the rich will be able to afford such treatments, and because they will inevitably mean an even greater population load on the planet; Elon Musk’s reported criticism of such practices (The Independent) is about the only occasion I have ever agreed with him about something!
[xi] See further below on Thomas Malthus; in essence, critics of neo-Malthusianism have suggested that these arguments were overstated and premature, and that technology would enable very much higher population levels to be sustained.
[xvii] See http://desc.global which is attempting to understand the relative balance between environmental harms and benefits of digital tech.
[xviii] In essence, demographic growth has been co-opted to serve the interests of the private sector (capitalism) in seeking to overcome the tendency towards a falling rate of profit. Put simply, population must grow to provide both an expanded market and more labour to ensure economic growth.
It was a great honour to be asked by a group of young Chinese interns at the United Nations University Institute in Macau to give a short keynote address at the hybrid event that they were organising there on 30th April in partnership with The Institute for AI International Governance of Tsinghua University (I-AIIG), forming part of the World Data Forum satellite event being convened by the Institute in the city of Macau. As their introduction to the event summarised:
The younger generation are often seen as digital natives who have more exposure and access to data technology than older generations. They are also more likely to use data technology for learning, innovation, participation and empowerment. However, this also means that they face unique opportunities and challenges related to data that need to be explored and addressed. As the satellite event of this year’s World Data Forum, this youth forum will take “Digital and Youth” as the main theme, adhere to youth leadership and youth participation, aiming to provide a platform for dialogue and exchange among different stakeholders who are interested in or affected by data and its impact on youth.
In the brief 15 minutes available, I chose to focus on three proposals:
We need new, more inclusive modes of inter-generational dialogue about digital
Just because it is possible to do something, does not mean that it is right or good to do so.
Digital tech is all too often assumed to be inherently good – but we need to mitigate the harms to ensure any good can prevail
We must all consider the environmental impact of data, and digital tech more widely.
Digital tech is often the cause of environmental harms rather than a solution
The event was great fun, and the organisers had brought together many leading young academics from across China working on digital tech in general, and data in particular, divided into four main sessions:
Youth Work on Digital Humanities in Empowering the Cultural Legacy
Digital Technology and Wellness
Artificial Intelligence Cutting Edge
Personal Information Protection and Data Security Governance
Many thanks to everyone involved for making this such an interesting and enjoyable experience.
I have frequently been asked in recent weeks about my thoughts on the UN Secretary General’s Global Digital Compact (GDC). It is far from easy to summarise these, not least because the actual compact is not due to be agreed until the “Summit of the Future” in September 2024. Any such comments can therefore only be about its overall objectives and the process so far. However, I am deeply sceptical of both, and consider the compact to be fundamentally flawed in concept, design and practice. In essence, it largely reflects an elitist view, dominated heavily by the corporate tech sector, focused on a technologically deterministic ideology, that will do little or nothing to serve the interests of the poorest and most marginalised.[i]
For those who don’t have time to read this entire post, it argues in essence that:
The Global Digital Compact is a result of the ways in which the ideologies and practices of digital tech companies have come to dominate UN rhetoric around digital tech;
The issues it addresses, the questions it asks, and the ways in which the consultation is constructed, largely serve the interests of those companies, rather than those of the world’s poorest and most marginalised individuals and communities; and
It fails to address the most significant issues pertaining to the role of digital tech and the science underlying it, notably the future relationships between machines and humans, the environmental harms caused by the design and use of digital tech, and the increasing enslavement (loss of freedoms) of the majority of the world’s people through and by the activities of digital tech companies of all sizes.
For the long read, read on… (also available as a .pdf here).
Context of the Global Digital Compact
As the Digital Watch Observatory has so accurately commented, “The GDC is the latest step in a lengthy policy journey to have, at least, a shared understanding of key digital principles globally and, at most, common rules that will guide the development of our digital future”. Like all such initiatives, however, it reflects a very specific set of interests, and it is helpful to begin by briefly trying to unravel these.
There has been concern for a long time about the increasingly large number of overlapping international multi-stakeholder gatherings that have been created by different interest groups to discuss the interlinkages between digital tech and human life (for a detailed discussion of the origins of these, see my Reclaiming ICT4D, OUP, 2017). Three are particularly interesting: ICANN, WSIS, and the IGF. The Internet Corporation for Assigned Names and Numbers (ICANN), created in 1998, was initially designed as a mechanism to transfer the policy and technical management of the DNS to a non-profit organisation based in the USA, and largely reflects private sector interests in the Internet. The World Summit on the Information Society (WSIS) process began with Summits in Geneva and Tunis in 2003 and 2005, which brought together UN agencies, governments and the private sector, and has since evolved to discuss and report on 14 action lines relating to the “information society”. In large part it serves the interests of UN agencies responsible for delivering on these in the context of the SDGs. The claim that WSIS initially placed insufficient emphasis on the needs and interests of civil society led to the foundation of the Internet Governance Forum (IGF), first convened in 2006 essentially as a discussion forum without any direct decision-making authority.
All of these processes and institutions make claims to multi-stakeholderism (but define these in rather different ways), and all frequently discuss very similar themes and topics, again largely reflecting the varied interests of those participating. Many of the same people (or those who can afford it) are to be found at all three gatherings, discussing similar issues in similar cavernous conference centres. In addition to these three main international gatherings, countless other more focused series of gatherings and events are held, such as those convened by ISOC and IEEE, alongside the regular series of digital events convened by different UN agencies such as the ITU, UNCTAD and UNESCO, as well as specific conferences such as the ICT4D series or the GCCS London Process (Global Conference on Cyber Space) meetings between 2011 and 2017 that initially focused on cybersecurity. Again each of these represents and serves the interests and agendas of different interest groups.
A fundamental problem with the sheer quantity and frequency of these gatherings is that only large, powerful and rich entities are really able to participate in them all. Despite the efforts of many convenors to make some of these events more open and accessible, online and hybrid events have not yet really made a significant positive impact into opening up international discourse on digital tech and the Internet, so that small states and economically poorer entities can participate fully and effectively. Frustration with the proliferation of such meetings, and the urgency of the issues relating to digital tech for the planet and its human inhabitants, has therefore precipitated calls for there to be a single, overarching framework for coordination. At first sight, this may seem to be a reasonable proposition, but it is essential to dig beneath the surface to understand the interests underlying the formulation of the Global Digital Compact, and its likely impact and conclusions. It is these interests that have shaped the new discourse, and especially the questions being asked in the ongoing global consultation due to close at the end of April 2023. These reflect a particular agenda that will not serve the interests of the mass of the world’s population, and especially the poorest and most marginalised.
I remember about a decade ago talking with a young and enthusiastic member of the UN’s Office of Information and Communications Technology (OICT) who surprised me by saying that they intended to take over all co-ordination of digital tech within the UN system. He came from a technical background, and appeared to know little about the vast amount of work that had been done in recent years by those of us working at the interface between technology and “international development”. In origin, the OICT was essentially the entity providing UN personnel with appropriate digital tools and processes to collaborate effectively, and in my understanding at that time it was nothing to do with the UN’s support for global policy making or programme/project implementation relating to digital tech on the ground.[ii] Other UN bodies such as the ITU, UNESCO, UNDP, and UNDESA had years of experience in supporting global digital policy and practice. This conversation nevertheless reflected four crucial features: competition within the UN system; the power and ambition of people within the UN Secretariat based in New York (USA); the dominance of a technical and scientistic perspective; and the energy and arrogance of youth. I thought little more of this conversation, unwisely dismissing it as mere aspiration that could not possibly succeed, especially given the good work being done on digital tech for development (or ICT4D) by my many good friends in other UN agencies. Little did I know then about some of the ways in which the UN system operates, and the interests that it serves.[iii]
At about the same time, there was widespread ongoing discussion within the UN system and beyond about the post-2015 development goals. I had personally argued vehemently that the world needed some very clear statements, and perhaps targets, relating to digital tech in the proposed new goals, but there seemed little appetite for this among most of those involved in shaping them.[iv] In my role as Secretary General of the Commonwealth Telecommunications Organisation (CTO), I nevertheless co-ordinated a statement on the role of ICTs in the post-2015 Development Goals by all of our members (mainly governments but also companies), which was published on 7 October 2014 laying out 8 principles, and proposing one goal and three targets. The document concluded that “For ICTs to be used effectively for development interventions, there must be affordable and universal access”. Ironically, it took the UN system (The Office of the United Nations Secretary-General’s Envoy on Technology and the International Telecommunication Union) until April 2022 to create a set of 15 aspirational targets for 2030 that were intended to achieve “universal and meaningful digital connectivity in the decade of action” (see further below). I cannot help but think that I should have pushed even harder for the proposal that we crafted eight years earlier within the CTO. If we had been able to achieve what we then proposed, much of the subsequent turmoil and wasteful infighting represented by the recent actions of the UN Secretariat could have been avoided.
In July 2018, the UN Secretary General’s office then announced the convening of a High-Level Panel on Digital Cooperation (HLPDC) “to advance proposals to strengthen cooperation in the digital space among Governments, the private sector, civil society, international organisations, academia, the technical community and other relevant stakeholders”.[v] It is not easy to identify exactly how and why this process was initiated, especially when reasonably good co-ordinating mechanisms already exist within the UN system, notably the Chief Executive’s Board (CEB) and the High-Level Committee on Programmes (HLCP).[vi] However, the composition of the Panel would seem to support the persistent rumours that a former President and CEO of ICANN might have persuaded the government of an Arab Gulf state, both with strong private sector connections, to lobby the UN Secretary General’s Office to create such a panel. The panel itself had 20 members, who according to its terms of reference were meant to be “eminent leaders from Governments, private sector, academia, the technical community, and civil society led by two co-chairs”.[vii] The two co-chairs (Melinda Gates and Jack Ma) were both heavily involved in successful private sector entities and had little prior engagement in implementing programmes that might beneficially impact the world’s poorest and most marginalised through digital tech. Although half of the panel were women, and there was indeed also some “youth” representation, the overall panel was almost exclusively made up of individuals from the private sector, rich countries, and academics with interests in innovation and the latest advanced technologies. Only three people had any substantial involvement with civil society, and the voices of the poor and marginalised, especially from small island developing states (SIDS) were largely absent. 
I would even venture to suggest that almost none of the panel had any real practical engagement on the ground with, or substantial understanding of, the use of digital technologies in international development, other than from a top-down, corporate or scientistic perspective (see more below). However, the small secretariat was led by two people, one of whom did indeed have substantial expertise and understanding of many of the crucial issues around the use of digital tech in development.
Once created, the panel did consult quite widely. As the Geneva Internet Platform (digwatch) summarised, “Between June 2018 and June 2019 the Panel organised several in person meetings, discussions, workshops, international visits to the Silicon Valley, China, India, Kenya, Belgium and Israel as well as online meetings”. This led to the publication in June 2019 of the panel’s short report The Age of Digital Interdependence.[viii] Many of the people participating in these meetings did indeed have good experience of the interface between digital tech and international development, and a considerable number of civil society organisations also participated in the discussions. However, I was struck by three things: first, the questions being asked mainly reflected the interests of the UN Secretariat and those on the panel; second, there was very little new being said; and third, the choice of countries visited excluded many of the poorest and most marginalised.[ix] Many, if not most, of the participants in the consultations were regular attendees at global gatherings such as the IGF, WSIS annual forums and ICANN meetings, and their collective knowledge already existed in the global community. It was fun to meet up with them again in a new virtual space, although many of us reflected during the process that we were just repeating what we had long been saying many times previously. There was absolutely no need to go to the expense and complexity of creating a panel of “experts” who actually had little real knowledge themselves of the key issues.
The outcome of these deliberations was nevertheless presented in June 2020 as the Secretary-General’s Roadmap for Digital Cooperation. In large part this reflects some fine work by the HLPDC secretariat in trying to mesh these discussions with existing and well-established principles of good practice in the field. The roadmap highlighted eight key areas for action:
Achieving universal connectivity by 2030—everyone should have safe and affordable access to the internet.
Promoting digital public goods to unlock a more equitable world—the internet’s open source, public origins should be embraced and supported.
Ensuring digital inclusion for all, including the most vulnerable—under-served groups need equal access to digital tools to accelerate development.
Strengthening digital capacity building—skills development and training are needed around the world.
Ensuring the protection of human rights in the digital era—human rights apply both online and offline.
Supporting global cooperation on artificial intelligence that is trustworthy, human-rights based, safe and sustainable and promotes peace.
Promoting digital trust and security—calling for a global dialogue to advance the Sustainable Development Goals.
Building a more effective architecture for digital cooperation—make digital governance a priority and focus the United Nation’s approach.
It is scarcely surprising that all of these had featured prominently in the WSIS Action Lines that were developed during and following the summits in 2003 and 2005. There was very little at all new in them, although of course they were presented as being novel and important.[x] Moreover, the roadmap also included the rather bizarre statement that “the United Nations is ready to serve as a platform for multi-stakeholder policy dialogue on…emerging technologies”.[xi] Somehow, the entire effort of UN agencies over the last decade, when the UN was already providing platforms for such dialogue, seemed to have been quietly ignored. I have long puzzled over this, but on reflection it is only really intelligible in the context of my earlier discussion with staff at OICT. What it really seems to have meant was that the UN Secretariat under the Office of the Secretary General was now going to take centre stage in providing that platform. This was reiterated in the UN General Assembly’s assertion in 2020 (GA resolution 75/1) that “the United Nations can provide a platform for all stakeholders to participate in such deliberations.” This only makes sense if it refers to the central Secretariat of the UN providing the platform.
The UN Secretary General then proceeded to establish the office of his Envoy on Technology, and in January 2021 appointed the former Chilean diplomat and long-term UN official Fabrizio Hochschild[xii] to the role, despite being aware that complaints had previously been raised about his behaviour. If that was not worrying enough, immediately on his appointment Hochschild acknowledged on Twitter that he did not know much about the interface between digital tech and international development:
Five days after his appointment, Hochschild was placed on leave, pending an investigation into his behaviour, and a year later it was reported that he was no longer employed by the UN. It is very hard to understand how the UN Secretary General could have appointed someone with so little knowledge of the field, and with such a dubious track record of behaviour in the UN to such an important role.[xiv] Either it reflects incompetence, ignorance, or once again the effect of specific interests working behind the scenes within the UN system to achieve both individual and organisational goals.
The Office of the Tech Envoy nevertheless continued its work under the interim leadership of the Assistant Secretary-General for Policy Coordination and Inter-Agency Affairs. In September 2021 the UN Secretary General then produced his next report, Our Common Agenda, which followed on from GA resolution 75/1 a year earlier. This rambling (wide-ranging) and aspirational document was in part an attempt to salvage something from the impending wreckage of Agenda 2030 and the SDGs. As its summary states, “Our Common Agenda is, above all, an agenda of action designed to accelerate the implementation of existing agreements, including the Sustainable Development Goals”.[xv] The seventh of its twelve commitments was on improving digital cooperation, and slimmed down the earlier list of issues in the Roadmap… to seven key proposals forming an agenda for the new Global Digital Compact:
Connect all people to the internet, including all schools
Avoid internet fragmentation
Protect data
Apply human rights online
Introduce accountability criteria for discrimination and misleading content
Promote regulation of artificial intelligence
Digital commons as a global public good
However, Our Common Agenda says little as to how these are to be achieved. It has been fascinating to watch the activity of senior UN officials and their staff in different agencies scurrying to position themselves in response to these proposals, seeking to protect their existing portfolios of activities and gain advantage over others in delivering these agendas. The initiative has, though, in some instances also led to increased dialogue and positive collaboration between like-minded individuals and agencies.
Our Common Agenda thus provided the foundations for the Global Digital Compact which will be agreed at the ambitiously titled Summit of the Future in September 2024. The important thing to remember about this is the interests that underlie its creation as outlined above. These are primarily global capital, the advocates of neo-liberalism, and the rich and powerful states and para-statal entities, as well as the UN and its agencies. This is all too evident in the language used in Our Common Agenda. Some examples of this include statements such as:
“The Fourth Industrial Revolution has changed the world” (p.62). This is a damaging myth. The so-called 4IR is just a construct developed by those promoting a heroic vision of technological scientism, and it ignores the argument that the current rapid expansion of digital tech is merely a product of the existing logic of capitalism.[xvi]
“The Internet has provided access to information for billions, thereby fostering collaboration, connection and sustainable development” (p.62), largely ignoring the fact that it is also a means through which people are increasingly exploited and harmed (although see below).
The Internet “is a global public good that should benefit everyone, everywhere” (p.62), without recognising that the notion of global public goods is frequently used by those companies that can afford it to extract surplus profit and exploit users for their own corporate gain.
“Reaffirming the fundamental commitment to connecting the unconnected”, without acknowledging the rights of people to remain unconnected.
There are, though, importantly also some positive signs of a more nuanced and balanced approach to these issues in Our Common Agenda, including recognition that
“Currently the potential harms of the digital domain risk overshadowing its benefits” (p.62), although these harms are all too often ignored by those advocating a belief that digital tech is a solution to all the world’s problems, especially those relating to the SDGs.
“Serious and urgent ethical, social and regulatory questions confront us, including… the emergence of large technology companies as geopolitical actors and arbiters of difficult social questions without the responsibilities commensurate with their outsized profits” (pp.62-63). I would agree with this observation, although it is 20 years too late, and the horse has already bolted.
As well as driving the GDC forward, the Office of the Secretary General’s Envoy on Technology has over the last year also developed its nine areas of ongoing work, based largely on the Roadmap, and working with the ITU produced in April 2022 the new set of targets for universal and meaningful connectivity by 2030 referred to above. In June 2022, the UN Secretary General eventually appointed a new Tech Envoy who was none other than the Executive Director and Co-Lead of his High-Level Panel on Digital Cooperation, an Indian diplomat with a recent tech background in AI and lethal autonomous weapons systems.[xvii] Several months later, in October 2022, Sweden and Rwanda were appointed as co-facilitators to lead the intergovernmental process on the Global Digital Compact,[xviii] and in January 2023 the process of consultation on the Compact began in earnest.[xix] Informal discussions were held with member states, observers and stakeholders in January and February 2023, and stakeholders have been invited to contribute to the online consultation to be concluded at the end of April 2023.[xx] In parallel, a series of eight thematic “deep dives” are being held between March and June 2023 based on the seven GDC proposal areas and a concluding “dive” on accelerating progress on the SDGs. Great emphasis is being placed on an open and inclusive process.
However, the fundamental problem with the Global Digital Compact is in the way that its consultation process is structured. Although respondents can submit supplementary information, the main survey invites comment specifically on the seven proposal areas or themes, focusing on two aspects: core principles that should be adhered to, and commitment to bring about these principles. The focus on these seven themes is deeply problematic because they do not necessarily represent the most important issues that need to be discussed around the future of digital tech and humanity, and largely reflect the interests of those who shaped the lengthy process giving rise to the compact as described in the section above. The entire structure of the GDC thus mainly serves the interests of ambitious (and/or rich) individuals, organisations and countries, that often have little real understanding of, or care for, the lives of the world’s poorest and most marginalised people. Responses within this framing will thus serve to reinforce the power of those interests rather than changing them fundamentally. Every one of the seven areas listed for comment is presented as a positive assertion, and all could be contested. For example,
Why should internet fragmentation be avoided? Whose interests does this mainly serve?
Why should the focus be on the application of human rights online? Surely this should also be matched by a focus on responsibilities?[xxi]
Whose interests does the notion of digital commons as a global common good really serve? Is it not a mechanism through which the rich can access and exploit something that is claimed as a common good, as with the exploitation of space by satellite companies?
Why is there no thematic question about the environmental impact of digital tech? Digital tech causes immense harm to the environment, alongside the positive benefits that its advocates claim it provides.
Why does the theme around connecting people to the Internet only emphasise education? Surely the eight “basic needs” of air, water, food, shelter, sanitation, touch, sleep and personal space are at least as important, as too, more simply, are health and security?
Why is there no question focusing on the implications of increasing integration between humans and machines that threatens the very nature of human life?
The way in which the interface between digital tech and education is presented in the GDC agenda mirrors the account in Our Common Agenda, which provides a classic example of the ways in which very specific interests coalesce:
“Summit preparations will involve governments, students, teachers and leading United Nations entities, including the United Nations Educational, Scientific and Cultural Organization (UNESCO), the United Nations Children’s Fund (UNICEF) and the International Telecommunication Union (ITU). They will also draw on the private sector and major technology companies, which can contribute to the digital transformation of education systems”.[xxii]
This quotation, for example, clearly indicates the interest of three UN agencies. It is also aspirational in thinking that it is actually feasible to bring together not only the views of governments but also those of students and teachers in any comprehensive, representative and rigorous way. Above all, though, it makes very explicit the positive role of the private sector and especially technology companies. No mention is made of civil society organisations or other important stakeholders. It represents a vision in which the involvement of the private sector is seen as being overwhelmingly positive. It fails to acknowledge that connecting every school will enable private sector companies to expand their markets, to extract huge amounts of data from schoolchildren and teachers to improve their systems, and to increase their profits dramatically.
The growth agenda, innovation and science
Underlying these issues with the GDC is a fundamental problem with UN agendas around international development and the SDGs more widely. This is the belief that economic growth will eliminate poverty. In recent years, this in turn has been supplemented by what I call the “innovation fetish”, whereby governments and UN agencies alike have become beguiled by the idea of innovation, and particularly innovation in the digital tech sector, to deliver on their economic growth ideology.
In essence, most mainstream development agendas over at least the last 25 years have been driven by the obsession that economic growth is the solution to poverty reduction. This is based largely on a conceptualisation of poverty as absolute, and on the assumption that economic growth will necessarily reduce or, as is often claimed, eliminate it. However, economic growth raises the potential for relative poverty actually to increase; the rich get richer and the poorest stay where they are, or are even further immiserated.[xxiii] Aligned with the dominant agenda of neo-liberalism, this has encouraged governments across the world to find ways of fostering economic growth driven primarily by the private sector. In the telecommunication sector, for example, this is expressed clearly in the way in which most regulators focus more on the interests of the telecom companies as drivers of growth than they do on equity issues in terms of delivering services to the most marginalised. The innovation fetish that emerged during the 2010s was conceptualised and implemented largely as an accelerator of this trend, bringing renewed vitality to the idea that science and innovation are crucial for increasing economic growth and thus improving human well-being. This applies as much at the national or local scale as it does at the international. The UK’s Department for International Development (DFID) thus produced a new strategy in 2012 for innovation and evidence-based approaches to humanitarian crises,[xxiv] and later in the decade considerably expanded its emphasis on innovation, particularly with respect to digital tech. As DFID’s senior innovation advisor commented in 2019, “We need to acknowledge the increasingly digital world that we live in. 
It’s not that innovation is synonymous with digital, but it’s making the most of new technologies and the digital economy”.[xxv] Within the UN system, the latter part of the 2010s also saw a dramatic increase in emphasis on innovation, for example through the creation of the UN Innovation Network in 2015. I distinctly remember sitting in a meeting of the HLCP when innovation was being discussed, and almost everyone in the room appeared hugely impressed by it! Perhaps this was in part because the UN leadership was strongly advocating it; perhaps too it was in part because few of them actually understood what was being said. Innovation is inherently associated with good things, even though most innovations fail. Above all, though, it almost inevitably serves the interests of those involved in innovation, especially scientists and the wider system of private sector companies and corporations, particularly in the tech sector.
These interests, full of the optimism of entrepreneurship, have convincingly beguiled governments, civil society organisations and UN agencies more widely that they have the means to solve all of the world’s problems, particularly with respect to economic growth and international development. Yet, all too often they turn out to be solutions in search of a problem, as has classically been the case with blockchain. They are grounded in the widespread belief that “Science” and the dominant current scientific method are not only the best, but also the only way that truth about the world can be conceptualised and expressed. However, while such scientism has proved to be very good at explaining in great detail how things work and how they can be developed, it has led to the creation of a “Science” that does not have the ability to reflect on its own construction.[xxvi] It lacks a moral compass. It is completely unable to address the thought that just because something can be done does not mean that it should be done. With its emphasis on what is (the “positive”) it does not have the ability to address what should be (the “normative”). Scientists are fully responsible for the science that they do, both for its potential benefits and for its unintended negative consequences. They have a choice. They can serve the interests of global capital, or they can instead address issues of equity and equality, and work to create a fairer and more equal society.
A fundamental problem with the Global Digital Compact is thus that it rests on the flawed belief that technical fixes to detailed aspects of how the digital economy operates will actually improve the life experiences of the majority of the world’s people. The seven issues it raises are all concerned with making the digital tech sector more efficient within a neo-liberal framework, so that the owners and shareholders of private sector companies can extract yet further profit and surplus value as more and more people are enslaved within their virtual worlds. It does not address the fundamental questions about the role of science, about the innovation fetish, about the kind of world that most people want to live in, or about the false consciousness that has been woven around the good of science and technology.
The co-option of the UN by digital global capital
The last 25 years have seen the gradual permeation (or subversion) of international discourse within the UN system by global capital. This is nowhere clearer than in discussions and practices around the role of digital tech within international development. Having had the privilege of leading one of the early development partnerships between governments, private sector companies, civil society organisations and international organisations specifically using digital tech to achieve development outcomes, I have long been conscious that some of what we did may have contributed to this process. However, I still consider that we had checks and balances in place to ensure that the ultimate beneficiaries were indeed some of Africa’s poorest and most marginalised children.[xxvii] I also like to believe that most of our partners were well-intentioned and altruistic. Nevertheless, it has been remarkable to think back to the end of the last century and compare the relatively low extent to which private sector companies were engaged in and with the UN system then with the very considerable extent to which they are now involved. As I argue above, the entire process leading to the creation of the Global Digital Compact, and especially the Secretary General’s HLPDC, has been very heavily influenced by the private sector. Indeed, it is possible to suggest that it represents one of the very best examples of the co-option of the UN by global capital.[xxviii]
There are at least six main reasons why private sector digital tech companies have become so influential within the UN system:
The UN has insufficient funds to fulfil its ambitions, and is therefore eager to attract external sources of funding for its work, either through donations or partnerships.
Telecommunication companies have been involved in international agencies such as the ITU and the CTO since their foundations in the 19th and early 20th centuries. Close relationships between companies and governments were central to the emergence and growth of the sector, and international agreements were necessary to enable efficient communication between different parts of the world.[xxix]
Most UN agencies do not have the technical and scientific expertise possessed by the private sector, and so cannot understand the creation and use of digital tech sufficiently to develop appropriate policy guidance and programme implementation.
Digital tech companies feature very prominently in driving forward the economic growth agenda that the UN system has deemed essential for delivering the SDGs.[xxx]
Digital tech has also been pitched by these companies as a highly effective technical solution to many of the most pressing issues facing humankind.
These companies, driven by an apparently inexhaustible desire to expand their markets and develop new ways to extract ever greater surplus value, have identified UN agencies and the Secretariat as a perfect vehicle for achieving these ambitions.
However, in a prescient paper published in 2007, Jens Martens identified eight important risks and negative side effects associated with partnerships between the UN and the private sector:[xxxi]
Growing influence of the business sector in the political discourse and agenda setting
Risks to reputation: choosing the wrong partner
Distorting competition and the pretence of representativeness
Proliferation of partnership initiatives and fragmentation of global governance
Unstable financing – a threat to the sufficient provision of public goods
Selectivity in partnerships – governance gaps remain
Trends toward elite models of global governance – weakening of representative democracy
All of these have come to pass to a greater or lesser extent. There is no excuse for anyone in the UN not to have been aware of them. The leadership of the UN has therefore been complicit in the process whereby global governance has been co-opted by the private sector. Many may have acquiesced in the belief that this was the only way to deliver the MDGs and the SDGs, but those agendas have failed.
This is not to say that the private sector cannot contribute hugely to international development, and that close relationships between governments and the private sector are not essential for the development of wise policies and practices especially relating to the creation and use of digital tech. However, it is to argue that the balance of power and influence has shifted far too far towards the tech companies and global corporations, whose fundamental interest is to make profits for their owners, staff and shareholders. Companies go bust if they cannot make profits. This is fine, but using digital tech to serve the interests of the poor can never be led by the profit motive. There needs to be a fundamental realignment towards wise government and a streamlined UN system[xxxii] so that the profit-focused drive to rapid economic growth and expansion can be moderated by citizen-focused policies and practices in the interests of all. To be fair, Our Common Agenda does indeed briefly emphasise a commitment to renewing the social contract between governments and their people, and to using measures other than GDP to measure development outcomes, but it is extremely unclear how these ambitions will be delivered, and as long as the private sector (and economists!) retain their power within the UN system this seems unlikely to change substantially in the near future.
A final point that also needs to be made is that although some of the intended outcomes of the GDC may be desirable for many stakeholders, they will be very complex to deliver, and there is little evidence that the UN Secretary General or the Office of his Envoy on Technology have the capacity or support to be able to deliver them sufficiently comprehensively and rigorously in the time scale envisaged. The Summit of the Future is only 17 months away. The Russian invasion of Ukraine continues, tensions between the USA and both China and Russia are increasing, and new political configurations are emerging in the eastern Mediterranean and South-West Asia. This makes it extremely difficult to imagine global agreement on the issues that the GDC aspires to address. Moreover, discussions on subjects such as whether we should have multiple Internets or a single global internet, how to ensure good ethical use of new technologies such as AI, or how to get the balance right over digital privacy concerns have been ongoing for many years and involve fairly intractable positions. Now does not seem to be a good time to try to resolve them.
Constructive alternatives: a ten point plan
As mentioned earlier, I am surprised that so many people and organisations seem to be signing up to the UN Secretary General’s Global Digital Compact Agenda (or at least the agenda that staff in the UN Secretariat have given him to front up), especially when so many conversations I have had in private with individuals in government, the private sector, civil society and various parts of the UN over the last year seem to consider it to be deeply problematic. Clearly, part of the agenda for UN agencies is that they need to be seen to be supportive of the Secretary General, and this is entirely understandable, especially when they have strong interests in the outcomes. However, national governments, companies, and civil society organisations can indeed opt out. If, as I surmise, the GDC process is not going to produce anything new or of value – it simply cannot do so in the time available – then there is little to lose by not participating. To be sure, there is a natural fear of being left out of the decision making process (but most of the world’s population is already left out), and of not being able to influence something that could perhaps have some value, but if enough entities indeed choose not to contribute then this would not only be a reflection of what they really think about the process, but it would help to ensure that it cannot be seen to have legitimacy as a representation of global opinion.
It is easy to be critical, but much harder to implement wise policies and practices. To conclude constructively, though, I offer the following as an alternative set of propositions about how we can move towards a more substantial and sustainable future for global deliberations around the future of digital tech:
First, it is much better to try to do a few things well, than to fail in trying to do too much. Few of the 169 SDG targets and 232 unique indicators,[xxxiii] for example, seem likely to be achieved by 2030, not least because there are just too many for them to be realistically addressed.[xxxiv] Likewise, the recently agreed digital targets[xxxv] already seem to be unachievable; it is no excuse that they are merely called “aspirational targets”. Instead we need to identify two or three of the most important issues relating to digital tech, and ensure that they are appropriately considered, that binding wise agreements are reached about them, and that practices are implemented to deliver on them.
Second, for me, the most important issue is how to achieve equity in the impact of digital tech, so that rather than increasing inequalities digital tech can be appropriately used by the poorest and most marginalised to enhance their lives. My views on this have changed little since I helped to draft the paper on the role of ICTs in the post-2015 development agenda agreed by the CTO’s members in 2014. Yet the united world community has made little headway over the last decade in achieving this.
Third, there are enormous chasms of trust between governments in different parts of the world, between governments and UN agencies, and between UN agencies (including the UN Secretariat) themselves.[xxxvi] One way in which this can be reduced is to begin with areas where agreement is most likely to be achieved, and then move on to more intractable areas. The example most often given about an area of common agreement concerning digital tech is the harm caused by child online pornography. Yet despite numerous global initiatives, and the work of individual organisations such as the Internet Watch Foundation,[xxxvii] the scale of this problem seems to have become worse rather than better. If we cannot make progress on this small area of deep concern, how can the UN Secretary General’s ambitious GDC be expected to have an impact?
Fourth, it needs to be realised that some of the most difficult issues around the future of digital tech require many long discussions held privately and confidentially between the most powerful global players, be they governments or corporations.[xxxviii] People of good will – and they exist in most governments and companies that I have worked with – must be given the time and space to build trust, and work collaboratively to achieve outcomes in the interest of us all. It might be that these need to take place between representatives of the leadership of regional groupings of states rather than trying to reach agreement between every state within the UN. However, realistically, it is the most powerful players who will have to commit to resolving these issues in the interests of all.
Fifth, those engaged in these global deliberations around the future of digital tech need to be realistic rather than idealistic. There is far too much posturing and over-ambitious rhetoric in much of the present work of the UN Secretary General and those working most closely with him on this issue. Naïve gestures help no-one, least of all the world’s poorest and most marginalised people.
Sixth, those involved in these discussions must stop trying to reinvent the wheel, and instead learn from the wealth of existing knowledge that has been built up in the 20 years since the first gathering of the World Summit on the Information Society held in Geneva. The ongoing GDC consultation is highly unlikely to add anything new, and what matters most is the process through which agreement can be gained on what needs to be done collectively to address the future of the machine-human interface.
Seventh, it is crucial that we abandon the naïve belief in technological determinism that dominates so much rhetoric and practice in the GDC discourse. Digital tech is not a solution to the world’s problems; its use is often the cause of many of them. It is essential to shift the balance of discussion to one which recognises that the design, construction and use of digital tech serve very specific interests, and that they cause harms as well as benefits. Emphasis needs to be on identifying and mitigating the harms so that the benefits can be enjoyed by all.
Eighth, there needs to be a fundamental restructuring of the UN system, so that its decisions are informed by, but less influenced by, the private sector.[xxxix] As this paper has suggested, the GDC process is part of the problem not its solution.
Ninth, rather than centralising control of the digital dialogue within the central UN Secretariat, and a specific office for a Tech Envoy,[xl] it would seem to make far more sense to situate discussion and debate within and through existing UN mechanisms and agencies that have very real and well established expertise.[xli] This would require resourcing them appropriately to deliver sensible outcomes. Surely the CEB and HLCP, with appropriate resourcing, could have been tasked with taking this agenda forward? After all, the HLCP was established to be responsible to the CEB specifically “for fostering coherence, cooperation and coordination on the programme dimensions of strategic issues facing the United Nations system”.[xlii] Furthermore, the UN should seek to reduce the plethora of its events and conferences around digital tech, to reduce the very considerable overlap and duplication of effort.
Finally, everyone involved in these processes needs to place much more emphasis on learning from the past rather than failing through adherence to the innovation fetish. There is a vast wealth of collective knowledge about the interface between technology and human society, and increasing amounts of relevant research are being produced at an ever increasing pace. All we really need is the will actually to do something wise about it, in the interests of the many rather than the few.
[i] Throughout this piece, I have deliberately avoided naming individuals, partly because I am more concerned with the structural aspects of the processes surrounding the emergence of the Global Digital Compact, but also because some of what I write is conjecture and I do not want to appear in any way to be criticising the actions of individuals, some of whom remain good friends.
[ii] Interestingly, the remit and role of the Chief Information Technology Officer today is summarised as follows on the OICT site: “All Secretariat entities report to Mr. Bernardo Mariano Jr., Chief Information Technology Officer, Assistant Secretary-General, on issues relating to all ICT-related activities, resource management, standards, security, architecture, policies, and guidance. The Office is headquartered in New York City”.
[x] I cannot help but wonder how many of the panel had attended the original WSIS Summit Meetings in Geneva and Tunis, or had followed the existing processes noted earlier in this paper.
[xi] See https://www.un.org/techenvoy/content/about: “The United Nations Secretary-General’s Roadmap for Digital Cooperation responds to the report of the High-Level Panel, setting out the Secretary-General’s vision and noting that ‘the United Nations is ready to serve as a platform for multi-stakeholder policy dialogue on…emerging technologies’.”
[xv] Our Common Agenda, p.3 https://www.un.org/en/content/common-agenda-report/assets/pdf/Common_Agenda_Report_English.pdf. Note my strong belief that the failure of the SDGs was built into their creation, and that they have significantly harmed the lives of the world’s poorest and most marginalised by their emphasis on economic growth rather than equality and equity. To be more positive, Our Common Agenda does address some of these issues, and to that extent its commitment to renewing the social contract between governments and their people, and to using measures other than GDP to measure development outcomes are to be welcomed.
[xxvi] See Unwin, T. (1992) The Place of Geography, Longman which draws heavily on the work of the German social theorist Jürgen Habermas, and especially his books Theory and Practice and Knowledge and Human Interests (English translation titles).
[xxxi] Martens, J. (2007). Multistakeholder partnerships: Future models of multilateralism? Berlin, Germany: Friedrich Ebert Stiftung; see also Unwin, T. (2005) Partnerships in Development Practice: Evidence from Multi-Stakeholder ICT4D Partnership Practice in Africa, Paris: UNESCO for the World Summit on the Information Society (93 pp.)
[xxxvi] But one indication of the moribund state of the UN is the observation that the Presidency of the UN Security Council is currently held by a country that has invaded another sovereign state and in so doing has committed heinous atrocities at a scale not often witnessed in recent years.
[xxxviii] Note the wording here, focusing on “powerful” rather than “important”. We need to recognise existing power structures, and work within them while at the same time trying to change them for the better.
[xl] The Tech Envoy, Amandeep Singh Gill’s personal background is primarily as an Indian diplomat (having joined the Indian Foreign Service in 1992, and serving thrice at headquarters in New Delhi in the Disarmament and International Security Affairs Division, 1998-2001, 2006-2010 and 2013-2016; https://www.crunchbase.com/person/amandeep-singh-gill). Although his bio on the Office of the Secretary-General’s Envoy on Technology says that he is “A thought leader on digital technology” (https://www.un.org/techenvoy/content/about), the experience he has in this field is primarily in digital health and AI, alongside his interests in nuclear disarmament. His role as Project Director and CEO of I-DAIR only began in 2021, and built on his work as one of the two co-leads of the HLPDC process (2018-19).
[xli] In the interests of transparency, it would be useful to know how much the UN Secretary General’s entire digital exploration has cost, and how this money might have been spent better to achieve more desirable outcomes.
Note: The UN SG’s new publication “Our Common Agenda Policy Brief 5 A Global Digital Compact — an Open, Free and Secure Digital Future for All” was published in May 2023 and is available at https://www.un.org/…/our-common-agenda-policy-brief… – much of the content is deeply worrying (for the reasons outlined above) – and indeed some of it harmful to the interests of the world’s poorest and most marginalised.
I have long been troubled by the widely accepted and increasingly used terms Global South and Global North.[i] Those who wish to use them for political purposes or to highlight the factors that they claim cause inequalities across the world will of course continue doing so, but there are at least six main reasons why I find them a misleading and problematic choice of terminology. I list these below to help explain why I don’t use these terms, and I hope my comments may also encourage others to do likewise.
Above all, the use of such terminology implies some kind of spatial causality, usually around the idea of the North exploiting the South in the present and/or the past. This strikes me as being surprisingly similar to the now widely discredited notion of environmental determinism, advocated by the likes of Ellsworth Huntington and Ellen Churchill Semple in the late 19th and early 20th centuries (for a wider discussion, see my The Place of Geography, 1992). There is not something universal about living in the North (whatever that means), or about the North itself that makes it inherently more powerful and dominant than the South.[ii]
I remain confused about why the word “Global” is at all necessary. What does it add? In 1980, the Brandt Report, entitled North-South: A Programme for Survival, managed to convey very similar meaning much more succinctly,[iii] and indeed also drew a much more nuanced wavy line between the two regions. To be sure, there are those who want to use the term global to represent some kind of global solidarity, especially in the South, but this is more aspirational than real (see also the comments on relative usage of the terms below).
In an absolute global sense, the geographical north is the northern hemisphere, and the south the southern hemisphere. Yet there are problems with such usage to refer to per capita economic wealth and human well-being. It is often forgotten that the South Asian countries of Bangladesh, India, Pakistan and Sri Lanka, for example, are all in the northern hemisphere. Likewise, many more African countries are in the northern hemisphere than in the southern.[iv] The rich countries of Australia and New Zealand are, in contrast, in the southern hemisphere. There is also much economic poverty in the northern hemisphere and much richness in the southern. If large absolute regions are being considered, it is in some ways more accurate to consider the Tropics (between the Tropic of Cancer and the Tropic of Capricorn) as being economically poorer/more exploited than either of the areas to the north and the south. Such suggestions, though, are once again dangerously close to falling down the slippery slope of environmental determinism.
North and South can also, though, be interpreted in a relative sense. Given that only 10-12% of the world’s population actually lives in the Southern Hemisphere, this relative approach is certainly a more realistic way of trying to grapple with the differences between states. It is nevertheless also problematic as a framework for explaining wealth differences (or indeed most other differences). Countries or regions further north are sometimes poorer in per capita wealth than those further south, and vice versa. Canada’s per capita income is less than that of the USA; Mozambique and Angola are poorer than South Africa. In the UK, the widely used term North-South divide actually refers to a poorer northern region and a richer southern one.
I’m afraid that the argument I sometimes hear, that the use of these terms is only an approximation and simplification and it doesn’t really matter if they are inaccurate, holds no water with me. Using such terms reinforces inaccurate understandings of cartography and geodesy, and supports looseness of meaning and language. I wonder how many people, for example, consider that because India is in the Global South it must also be in the southern hemisphere? Moreover, all too frequently we read or hear comments such as “The Global North generally correlates with the Western world”.[v] If that is the case, surely “Western” would be a better term to use than Northern. But we need to remember then that everywhere Western is west of some East.
A significant problem is therefore that seeking to carve the world up into binary divisions is overly simplistic and usually harmful, for all but those who persist in using or imposing them. There are enormous differences between the continents and countries within both the so-called Global South and the Global North, and it is this rich diversity that we must cherish in multi-layered ways and understandings. Those who seek to impose an ill-fitting binary distinction generally do so in their own interests. Sometimes this is for the sake of simplicity, but as the above brief comments highlight such simplicity can be very misleading. At other times it has just become a lazy shorthand. As that well known “source of all knowledge” tells us “The Global South is a term generally used to identify countries in the regions of Latin America, Africa, Asia and Oceania”.[vi] Well, why not instead just use the actual geographical names Latin America, Africa, Asia and Oceania? This source goes on to comment that “Most of humanity resides in the Global South”.[vii] It is interesting to ponder what this actually means. As noted above this is certainly not true if South here is referring to the Southern Hemisphere.
In brief, this is a call for meaning, clarity and precision. If we mean that techno-capitalism domiciled in the USA, Canada, the countries of Europe, the Gulf and Australia/New Zealand increasingly controls and exploits the rest of the world, then let’s say so, rather than couching our language in a mealy-mouthed, meaningless “geographical” distinction between North and South. But even this is an over-simplification of a different kind. What about China, and indeed Russia? Those who really believe that there is something about being “Northern” that makes people dominant, aggressive and exploitative, and something about being “Southern” that makes them ripe for exploitation, may believe on. But such dreams will not improve the lives of the world’s poorest and most marginalized, wherever they are found. It is indeed a great disservice to the many rich indigenous cultures, traditions, livelihoods, and social formations to be found in Latin America, the Caribbean, Africa, Asia and the Pacific. We must always ask ourselves in whose interest words are used. Who benefits most from the use of the terms Global North and Global South?
[i] Apparently first used by Carl Oglesby in 1969 in “Vietnamism has failed … The revolution can only be mauled, not defeated”. Commonweal, 90.
[ii] Despite this notion having been long discredited, I do think it is time that the environmental factors influencing human behaviour are revisited in a more sensitive and sensible way by geographers. The influence of day and night length variations on cultural behaviours in high latitudes is, for example, a fascinating topic of enquiry.
[iii] Two words rather than four: Brandt, W. (1980) North-South: A Programme for Survival; Report of the Independent Commission on International Development Issues, Cambridge, Mass.: MIT Press.
[iv] The equator runs through southern Somalia, Kenya, Uganda, the Congos, and Gabon.
to emphasise the long and diverse history of slavery across the world, and to highlight its differing historical expressions and complexities;
to recognise that we cannot change the past nor know the future with certainty, and can only act in the immediacy of the present; and
above all, in the light of the above, to encourage us all to do much more now to eliminate the scourge of modern slavery.
It is easy to say or write that slavery is fundamentally wrong because of the loss of freedoms and violence usually[ii] associated with it. It is far more difficult, though, actually to do something constructive about eliminating slavery at the only time over which we have any control, the present.
The Black Lives Matter and associated anti-slavery protests in the UK in 2020 raised many questions (see image above). I was particularly challenged, for example, by the emphasis of those protesting on the past rather than on contemporary slavery. The majority of banners likewise seemed to highlight the wrongs of past slavery more than they did the wrongs of present slavery. My reflections here seek to grapple with why this was, and why it remains so.[iii] In the years since, there has been much more visible concern in Britain over reparations for past slavery, especially relating to the 18th and 19th centuries, than there has been real action to eliminate contemporary slavery: statues of people who had once been slave-owners have been torn down; streets have been renamed; universities such as Manchester and Cambridge, which have benefitted from donations from people who gained from the slave trade, have undertaken enslavement inquiries; and institutions such as the National Trust have published reports on their links with historic slavery.
In part this is because of the overlapping interests between the Black Lives Matter movement and those protesting against slavery.[iv] However, slavery matters in its own right; it is not just a racial matter. In this piece I therefore seek to disentangle the issues of slavery and racism.[v] I want to focus primarily on slavery rather than race. I fully recognise that the two are often intertwined, and there are good reasons why people feel strongly about this intersection, but here I focus on broader issues relating specifically to slavery, and how we respond to the past. I begin with some personal reflections on the origins of my own interest in slavery, and then provide a short conceptual framework that includes a note on definitions of slavery, before highlighting what I see as some of the most difficult and problematic issues concerning slavery past, present and future. My purpose is to encourage us to shift our focus from the past about which we can change nothing, to the present where we do have the option to do something.
My interests in slavery
I have long been interested in slavery, from my days as a boy reading in the Bible about the unfairness of Joseph being sold into slavery (Genesis 37) and my difficulty in trying to reconcile my own emerging moral views about slavery with some of Paul’s comments on slaves being obedient to their masters (Ephesians 6, Colossians 3, 1 Timothy 6, and Titus 2). However, I have taken a much more serious and academic interest in slavery since the mid-1970s. Three factors have been particularly important in helping to shape my current understanding of these issues.
First, my doctoral thesis in historical geography written in the second half of the 1970s focused in large part on the changing economic and social structures of medieval Midland England. I was fascinated to learn that slaves could sometimes have had better lifestyles than villeins within feudal society. In this I was heavily influenced by the writings of Marc Bloch (both his seminal La Société Féodale first published in 1939, but also in essays that have recently been collated under the title Slavery and Serfdom in the Middle Ages) and in the historical records with which I was working.
Second, some 20 years ago I encountered modern slavery in England for the first time as I sought to support someone who was trying to rescue a person who had been forced into slavery on their arrival to work in our country. This opened my eyes to the widespread existence of modern slavery in many parts of the UK, and it continues to haunt me as I continue to see such slavery within the country that I call home.
Third, my experiences working in Africa during the last 20 years have inevitably forced me to confront issues of colonial history and slavery, especially in Sierra Leone and Ghana. Despite its fraught history, both as a Crown Colony until 1961 and as an independent state since then, Freetown and Sierra Leone always cause me to think about the potential for freedom in the human mind and the abolition of slavery;[vi] it is also salutary to recall that it is the home of Fourah Bay College, which was founded in 1827 as the first western-style university built in Sub-Saharan Africa.[vii] I like to think that there is a connection between freedom and knowledge.
Likewise, I have many fond memories of working in Ghana. A visit to Cape Coast Castle in 2008, though, remains etched in my mind because of one very specific conversation that I had there while visiting the Castle and Dungeon. Initially the castle had been established as a small fort by the Swedish Africa Company in the middle of the 17th century, and it later became one of the most important “slave castles” along the former Gold Coast. Watching a group of European women who were very upset by what they saw, one of my close Ghanaian friends commented that he never quite understood why many Europeans became so emotionally distressed when visiting the castle. I was initially perplexed, but he went on to say that, after all, it was the African people living in the surrounding areas who had sold their awkward cousins and uncles, or people captured in conflicts, as slaves to the Europeans in return for guns and other items that they wanted. Slavery had long been a way of life in the region, and had most definitely not been introduced by the Europeans. His matter-of-fact comments challenged much of what I had previously rather taken for granted about the Triangular trans-Atlantic slave trade.[viii] This trade was undoubtedly coercive, violent and exploitative, but its transactional character and the collaboration of African communities who were willing to sell other Africans for a price to European slavers needs to be recognised in any discussion of this particular expression of slavery.[ix]
Cape Coast Castle, 2008 (as rebuilt by the British in the 18th century)
On concepts and definitions
I have long enjoyed reading Onora O’Neill’s inspirational philosophical writings (see especially the collection of essays published as Justice Across Boundaries, 2016), and have found that many of my own ideas coincide quite closely with hers, especially around obligations, rights and justice (although I have tended to focus on the notion of “responsibilities” rather than “obligations”). In particular, she highlights the difficulties that arise in discussing the rights to compensation for actions in the distant past that are widely considered to be wrong today. Her work is well worth reading at length on this topic; I frequently return to it for clarity on these difficult issues. What follows is in part sparked by reflections on slavery in the contexts of these wider philosophical and conceptual debates. Three challenges seem particularly important.
First, no individual has any effective power over what her or his distant ancestors did in the past. If they have no power to change the past, what are their responsibilities? We might have had some influence on our own parents’ actions, and those who have known their grandparents might also have had a little influence on their lives. However, we cannot have had any actual influence on the lives and actions of those we never knew. If we have had no such influence, can we have any responsibility for their actions in the past? If we have no responsibility for those actions, why should we be criticised and condemned by others for the actions of our ancestors (individually and collectively)? These are real challenges in the context of slavery. It is not easy to clarify the logical reasons why the descendants of slave owners (and the institutions that benefitted from them) should have received the opprobrium that has been cast on them by many of those today condemning slavery. This is regardless of how one might “judge” (itself a very problematic notion) those who were children of slave owners, but who argued vehemently for abolition in the 18th and 19th centuries, or even those who had owned slaves but then championed abolition.[x] Even John Locke, widely seen as being one of the founders of liberal democracy, has recently been savaged by historians and others because of his role in administering the British colonies in North America in the 17th century, where slavery was widely practised.[xi]
Second, there are profound difficulties in “judging” the past by the standards of the present. As Hartley wrote in The Go-Between (1953), “The past is a foreign country: they do things differently there”. All societies evolve and change, but they all have mechanisms through which the few rich and/or privileged extract a surplus from the many poor and exploited (Karl Marx’s modes of production remain a powerful theoretical model of such change; for Marx and Engels, slave society was the earliest form of class society). There are, though, many conundrums within the idea of “criticising” past societies, not least because our present societies have emerged from them, and would be different if they had not existed. There is nothing we can do about changing past societies. Hopefully our present societies have evolved positively and are better than those of the past, although this is by no means always so! The key thing is that we need to learn the lessons of history; we need to understand the past so that we do not make the same mistakes our ancestors made then and there (at least as “judged” by our own societies). “Now” is the only time when we can actually do anything, and the choices we make in the present need to be made in the light of the past so as to help make a better future. As Tolstoy (1903) wrote in his short essay Three Questions, “Remember then: there is only one time that is important – now! It is the most important time because it is the only time when we have any power”. Such reflections also force us to consider how future generations will perceive our own actions. How, for example, will they consider our ineffectual efforts to abolish modern slavery? Might they see our enforced addiction to digital tech as but another, less immediately brutal, form of slavery, and today’s digital barons as equivalent to the slave masters of the past?
Third, these considerations also make it important to try to define what exactly slavery is. It is, though, very problematic to provide a clear and all-encompassing definition of slavery, not least because of the ways in which the notion and practices have varied and evolved over time (and may continue to do so in the future). Two key elements are central to any definition: a lack of “freedom”, and being under the absolute control of another person. Exactly what types of freedom and control are necessary for something to be considered slavery is disputed and has changed over time. One way of addressing this is to define certain practices as being indicative of slavery, as with chattel slavery (treating someone as the personal property of another), bonded labour (where someone pledges themselves to work for another to pay off a debt), or forced labour or marriage (where someone is forced in some way to work or marry against their will). Another approach has been to adopt legal definitions agreed by conventions. The 1926 League of Nations Slavery Convention thus defines slavery as “the status or condition of a person over whom any or all of the powers attaching to the right of ownership are exercised”. In practice, it may be best to consider a spectrum of characteristics that comprise slavery, recognising that different people may choose to include some or all of these in their definitions. “Servitude” is thus considered by some to have many of the characteristics of, but to be less severe than, “slavery”. The European Court of Human Rights (2022), for example, has recently argued that servitude “is a particularly serious form of denial of freedom”, although it should be considered as an aggravated form of forced labour, and therefore, although related to slavery, it is not to be confused with it.
“It includes, in addition to the obligation to provide certain services to another, the obligation on the “serf” to live on the other’s property and the impossibility of changing his status”.[xii] The relationship between “slavery” and “serfdom” has, though, also evolved over time. In origin, the words “serf” and “slave” come from the same root, namely the Latin servus (meaning slave, and from which the word servitude is also derived). However, serfs and slaves have generally been seen, at least from medieval times onwards, to be rather different categories. For some, the word “serfs” is a generic term to describe the group of people originally known as coloni, or tenant farmers, from the late Roman period onwards, and whose status had generally become increasingly degraded. For others, it is even broader, and is often equated with the word “peasants” to refer to the mass of people at the bottom of the emerging class system in medieval and early-modern times, but above the status of slaves.[xiii]
These three conceptual framings underlie the ensuing sections on slavery in the past, in the present and in the future.
Four important observations about past slavery are all too frequently ignored or downplayed in contemporary public discourse, but, I suggest, should be considered in any reasoned discussion of slavery:
First, slavery was a normal and accepted aspect of society in many parts of the world for well over six millennia, whereas the abolitionist movement in Europe only really began in the mid-18th century, less than three centuries ago.[xiv] It must have been as unthinkable for the majority of people for most of history (and indeed pre-history) to have challenged slavery as it is now for someone to try to promote slavery.
Second, slavery was practised at some time in the past in most parts of the world. Slavery existed in most ancient civilizations, such as the Babylonian and Persian Empires. It was common throughout the Roman world; slaves from what is now the UK were paraded in Rome. In the early Islamic states in West and North Africa it has been estimated that about one-third of the population were slaves; in East Africa, Zanzibar was the main port for slave trading to the Arabian peninsula. Slavery was widely practised in the Pre-Columbian cultures of Middle and South America. It formed a crucial element of the Ottoman Empire; in the 17th century it is estimated that perhaps a fifth of the population of Constantinople were slaves. Slaves remained fundamentally important throughout the Ottoman Empire until the 19th century, notably as the much feared Janissaries (elite infantry soldiers). Slavery was widespread for centuries in China, and was only abolished in 1909. The Triangular trade between Europe, Western Africa and North America, which features so prominently in current popular discourse on slavery, was thus only one example of the very widespread pattern of global slavery. It is often forgotten that between the 15th and 18th centuries white Europeans from Italy, Spain, Portugal, France and England had also been sold into slavery by North Africans. Frequently slaves were captured as a result of warfare, sometimes there were regular expeditions to capture slaves, and often people sold themselves into slavery to pay off debts. This ubiquitous character of slavery raises interesting questions about the payment of reparations. Should Italy pay England for taking slaves during the period of Roman occupation? Should Turkey pay countries in the Balkans for the devşirme (blood tax) through which Christian boys were taken to become Janissaries? Should the rulers of states in the Arabian peninsula pay reparations to the countries of eastern Africa?
Should Israel pay reparations to the surrounding countries from whence their ancestors took Canaanite slaves? The usual response to such questions is “No”, on the grounds that such reparations only apply to the recent past. But when is the past recent?[xv]
Third, it must be recognised that everyone in societies where slave ownership was practised benefitted to some extent from slavery, and it is not possible just to attribute blame to slave owners or traders and their descendants.[xvi] The butcher, the baker and the candlestick maker all benefitted from the wealth gained by those who invested in estates that used slave labour. All societies, past and present, have mechanisms and legitimation systems through which the rich can exploit the poor, and can thereby afford to live “better” lives and purchase luxuries. Slavery is just one mechanism through which such surplus extraction and exploitation occurs. Indeed, life for the poor in 18th and 19th century Britain was unbelievably harsh by modern standards. However, everyone (apart from the slaves) took a share of the trickle-down financial benefit. The elite paid architects, artists and jewellers to produce what many societies now cherish as their cultural heritage, but this enabled these craftsmen to afford to buy paints, or beer, or clothing, which in turn benefitted the brewers, merchants and clothiers. Ultimately, almost everyone in the past, and not just slave owners or institutions that received gifts derived from slave ownership, benefitted in some way from slavery. It therefore seems highly problematic to pick out certain slave owners or institutions (and their descendants) in certain societies for retribution.
Fourth, it is likely that in most cases slavery did not generally collapse purely on moral grounds, but rather also for economic ones. The ultimate reason that slavery collapsed was often because it became too expensive to obtain and maintain slaves. We like to think that it resulted exclusively from some kind of enlightened belief, or a rise of moral virtue in the 19th century, and this may indeed have helped in some cases (as with the abolitionist movement in Britain), but there is little evidence to support the argument that a sudden rise in moral concern was usually the primary reason that slavery ended. As conflicts and wars declined in frequency, it became less easy to capture people and enslave them. Moreover, the costs of feeding slaves could become prohibitive, especially at times of rising basic staple prices. Forcing slaves to cultivate land to feed themselves was also problematic, since it took land and labour away from other forms of production, and yields were in any case often not high. Most importantly, new and more efficient forms of labour exploitation (such as the factory system in the 19th century) and the mechanisation of agriculture reduced the economic benefits of slave production.
Slavery: the present
As noted in the quotation from Tolstoy cited above, the present is a very special time, because it is the only time when we have any power. How we act in the present, though, depends very much on our understanding of the past. Four problematic issues seem worthy of reflection here about how we are acting in the present with respect to slavery.
First, it must be recognised and acknowledged that slavery still exists. It was not eliminated by the abolitionist movement in the 19th century. According to the latest Global Estimates of Modern Slavery, there are about 49.6 million people living in modern slavery, mostly in forced labour and forced marriage.[xvii] Roughly a quarter of these are children. To be sure, definitions of slavery have changed over time, but these figures compare with best estimates for the number of slaves transported from Africa to the Americas of around 12.5 million.[xviii] Modern slavery is real and present at a very large scale. We can choose to do something real and practical about it. It is as violent and horrendous as were most forms of past slavery. While much current media attention and political activity focuses on black slavery, colonialism and issues around restitution and reparations, we also need to focus on the reality of modern slavery across the world and do something to bring it to an end.
Second, the timing of the sudden upswelling of interest in slavery, the recent actions taken by many people and organisations to try to atone for the past, and the vehemence of commitment of many of those campaigning for reparations and against past slavery seem in part to represent a collective failure to understand and appreciate the impact of slavery, both in the past and at present. Having learnt about slavery as a child, and written and taught about slavery through much of my career,[xix] I find it hard to believe that so many people in Britain seem to have been unaware of the impact of slavery on our economy.[xx] Why did they not protest before 2020? The apparent sudden discovery of our role in the Triangular Trade, seems in part to reflect a failure in our education system to address the complexity of history, and especially to consider slavery in a global and holistic framework. In a society increasingly dominated by scientism (science’s belief in itself) it becomes more and more important for young people to study the disciplines of history and geography which play such a crucial role in shaping their sense of time and place. A good historical understanding of slavery throughout history and across the world would also help people have a much more nuanced and sensitive approach to understanding its complexities, and the reasons why we need to respond urgently to the continued existence of modern slavery.
Third, it is always easier to criticise people who cannot respond, especially in the past, than it is to act wisely in the present. As any political leader knows, it is much easier to criticise others than it is actually to deliver policies that have positive outcomes. In the context of slavery, it is easy to stand up and protest, it is easy to adopt slick slogans, it is easy to blame people in the past, and it is easy to post critical comments on social media. This is especially so when those who lived through those times are completely unable to respond or tell their side of the story. It is very much more difficult to change existing practices, such as modern slavery, because that takes considerable time and effort, it is tough to do, it is expensive, and it is not easy to understand what really needs to be done. However, given that now is the only time when we can influence things for the better, we should surely concentrate on what we can actually do something about, rather than spend so much time bemoaning something that we can never change. We can learn from the past to change the present.
Fourth, it is difficult to justify criticising people in the past, because we were not there and have no way of knowing how we would have behaved ourselves at that time. We might like to think that we would have acted in the past in accordance with our present moral compasses (if we recognise that we have such things), but the reality is that it is highly unlikely that we would have done so. We simply have no real way of knowing what we would have done if we had been living during past epochs when slavery was rife. Perhaps our biggest fear would have been the chance of being captured and sold into slavery ourselves. If we cannot guarantee that we would have opposed slavery then, it seems difficult to justify the opprobrium that we cast on those who benefitted from slavery in the past, especially if we are doing little to prevent it in the present.
In short, the logic of the above comments seems to point to a conclusion that we should focus our attention more on trying to stop modern slavery, because we can indeed do something about this, rather than spending most of our time criticising the actions of people in the past about which we can do nothing.
Slavery: the future
Such arguments have interesting implications when slavery in the future is considered. Again, four comments seem appropriate.
First, we might be able to reduce the extent of slavery in the future if we take action to do so now, and at the very least those who do indeed believe that slavery is wrong would then be acting according to their moral principles. This in itself raises many further difficult issues. Given that slavery still exists, and has therefore probably done so ever since human “civilizations” first emerged, is it somehow a “natural” human condition? Will slavery always exist? Even if this is the case, though, those of us who believe it is wrong can nevertheless still seek to take action now to reduce its extent in the hope that this will happen in the future.
Second, how will those in the future look back and see our actions today with respect to slavery? Just as we cannot influence the past, we will not be living when those in the future think about us. At one level, this question will not really matter, because we will be long dead and the thoughts of people in the distant future can have no real influence over us. Nevertheless, many people do wish to be remembered kindly. For those who do care how history will see them, if only the near history of their children and grandchildren, taking action now, at a time over which we do have some control or power, would seem to be wise (although of course many people may not wish to be wise). How will our offspring and descendants judge us most positively: for acting to reduce the slavery that does exist and that we can do something about, or for merely protesting about a past that we could never do anything to change?
Third, if we do nothing about slavery today, there is a chance that those nearest and dearest to us might be forced into slavery in the future. This may be an unlikely scenario for many reading this post, but it is at least a logical possibility. Every one of the nearly 50 million people currently in slavery has parents, and possibly grandparents who may still be alive and know them. At least some, perhaps most, of these relatives will grieve that their offspring are enslaved. By acting today, we can reduce the chances of our children and further descendants becoming enslaved.
Finally, it is worth asking what future generations may think about the nature of freedom and slavery in our societies today. I have recently spent much time pondering this question, and writing and speaking about digital enslavement as a new mode of production. Put simply, if we cannot live without using digital tech, have we become enslaved by the owners of the companies and governments who force us to use such technologies? If we cannot spend a day, let alone a week, without using digital tech, have we not become enslaved by those who make it?[xxi] Have we not willingly become “unfree”? The new slave masters expropriate a vast surplus from our data and everything that they know about us, and we seem unable to escape from giving this to them at no charge. Indeed, we have to pay significant amounts to be connected to the internet, just so as to enable them to exploit us further. What will future generations think? Will the likes of Bill Gates, Elon Musk, Mark Zuckerberg, Larry Page, Sergey Brin, and Jeff Bezos also have the work of their foundations and donations castigated, their virtual statues torn down, their reputations smashed, and their children’s children hated for the actions of their ancestors?[xxii]
It is difficult to draw firm conclusions from the above reflections, and everyone will have somewhat differing views about them. They are intended to raise difficult questions and encourage open debate on them. I have tried to focus on slavery alone, although clearly this intersects, especially at this time in history, with other categories of contemporary interest such as race and colonialism. However, these reflections are explicitly not intended to address either of these other two categories in any detail. Slavery has existed between and within many different races; it has transcended most modes of socio-economic, political and cultural formation. It is not unique to the Triangular Trans-Atlantic slave trade. There has been a considerable amount of research done on the history of slavery and very much more that needs to be done. However, history alone is not enough. It is the moral questions that we ask, and how we use them to shape the futures of the societies in which we live that, to me, matter most.
The above arguments suggest to me that it is more important to focus on trying to reduce contemporary slavery (and its possible variants in the future) than it is only to protest about the horrors and injustices of past slavery. Both are important, and this is not to belittle the value of highlighting the undoubted injustices of slavery in the past. However, we cannot change what has happened in the past, and it is surely therefore our responsibility to past slaves that we act now, when we can, to prevent slavery continuing into the future. Protesting is the easy bit; changing the future is when the going gets really tough. Others may well feel differently, and I certainly accept that we need a sound understanding of the past if we are to act wisely in the present. I began by reflecting on my surprise at how few of the anti-slavery and anti-racism protests that I saw in 2020 and 2021 focused on modern slavery. My hope is that those who read and engage with what I have written here may turn their anger at what they cannot change into energy to reduce the extent of slavery that remains all about us today. I also hope that they will strive to maintain the perceived freedoms that so many now cherish and take for granted, but which are in very real danger of being taken away from us through the increasing all-pervasiveness of digital enslavement.
[i] I am immensely grateful to several friends and colleagues who took time to comment on an earlier version of this draft and have undoubtedly helped me to improve it. I know that the issues it addresses are sensitive, but I hope that this final version strikes an appropriate balance as I seek to encourage us all to refocus our attention on how we eliminate the modern slavery (and especially violence against women) that continues to exist across the world.
[ii] I have deliberately used this word here because I remain struck by the reality that the lives of some slaves in the past were in many ways better than the lives of the poorest agricultural labourers.
[iii] There were indeed some banners relating to modern slavery, but from the protests and images that I saw these were in a minority.
[iv] This was also associated with transfers of ideology and practice from the US to the rather different context of the UK.
[v] This is not in any way to downplay the horrors of the slave trade between Africa and the Americas between the 17th and 19th centuries, but it is to try to explore fundamental principles associated with slavery per se rather than racism.
[vi] See for example, Abraham Farfán and María del Pilar López-Uribe (2020) The British founding of Sierra Leone was never a 'Province of Freedom', https://blogs.lse.ac.uk/africaatlse/2020/06/27/british-founding-sierra-leone-slave-trade/. It is also important to note here that it was actually in the UK, a colonial and later imperial power, where the abolitionist movement first gained considerable traction, initially in the late 18th century and then especially from the 1830s onwards.
[vii] The Province of Freedom in what became Sierra Leone was first settled in 1787 by formerly enslaved black people, but this early settlement collapsed, and it was not until 1792 with an influx of more than a thousand former slaves from North America that the settlement of Freetown was firmly established through the agency of the Sierra Leone Company.
[x] See for example the life of John Newton, who had been a slave and a captain of slave ships, and then championed abolitionism, as well as writing the famous hymns Amazing Grace and Glorious Things of Thee Are Spoken.
[xiii] In my own work on medieval society, I found it helpful to avoid the generic word "serf" and stick to the terms actually in use at the time, such as villeins, cottars and bordars. In very general terms, in 11th century England there were two broad groups of rural people beneath the level of knights and lords: the free peasantry (freemen and sokemen) who comprised about 12% of the population recorded in Domesday Book (1086); and the unfree (villeins representing about 40% of the population, alongside the poorer cottars and bordars) who worked the land in return for onerous obligations and services to the lord. Beneath them all were the slaves, comprising perhaps 10% of the population, who had no property rights and could be bought and sold.
[xiv] Although Louis X of France published a decree in 1315 declaring that any slave arriving on French soil should be declared free, the widespread rise of abolitionism is usually dated to the emergence of The Enlightenment in the mid-18th century, and the activities of the Quakers in England and North America in the latter part of that century. Interestingly, although slavery was abolished during the French revolution, Napoleon restored it in 1802 as one means to try to retain sovereignty over France’s colonies.
[xvi] I have deliberately concentrated here on slavery in a global context, and not just on the current emphasis in European and North American societies on the trans-Atlantic slave trade. The horrors, misery and death associated with slavery in the context of European colonialism should not be trivialised, but at the same time there needs to be open and honest discussion about the existence of slavery in Africa long before the arrival of white Europeans.
[xviii] See https://www.slavevoyages.org/, as well as extensive other research by Franz Binder, Ernst van den Boogart, Henk den Heijer and Johannes Postma, James Pritchard, Andrea Weindl, Antonio de Almeida Mendes, Manuel Barcia Paz, Alexandre Ribeiro, David Wheat and José Capela.
[xix] especially in the context of my teaching of Marxist theory between the mid-1970s and the end of the 1990s. See also the work of the UCL Centre for the Study of the Legacies of British Slavery.
View of the hills and castle above Tbilisi on a later visit in 2005
A friend recently lent me his copy of John Baker and Nick Place's Stalin's Wine Cellar (Viking, 2020), about John Baker and Kevin Hopko's travels to Tbilisi in 1999 to identify Stalin's wine cellar and subsequently to try to sell some of its more famous wines. I had always wondered what had happened to these wines after my own visit to the cellars a couple of years earlier in 1997. Various later conversations with my Georgian friends had told me some of the story from a Georgian perspective, and so it was fascinating to read this exciting account by the Australians who had endeavoured to release the wines onto the market, described by the publishers (Penguin, 2021) as "the Raiders of the Lost Ark of wine". Fun to think that I had beaten them to the treasure! I was not, though, interested in buying the wine or trying to sell it through the auction market.
My lasting memory of a serendipitous visit to the Savane winery was that much of Stalin's cellar actually seemed to be full of gin bottles! I had walked past the entrance, almost hidden in a wall (top left picture below), several days earlier, and asked friends if it was possible to visit. Miraculously, later in the week I was able to visit, and the pictures below (converted from my old slides/diapositives) show something of what the winery was like. The lower left picture illustrates a rack of bottles, not dissimilar to images shown in John and Nick's book. My hunch is that Stalin himself probably preferred gin and vodka to fine wines, other than of course wines from his homeland Georgia (the brandy served at the famous Yalta conference in 1945 was the Armenian ArArAt brandy). Alternatively, someone in the intervening half century may simply have used this part of the cellar for storing away gin! I never encountered the famed cellar of Tsar Nicholas II with its very old wines from renowned Bordeaux châteaux, which is reputed to have been split between the Massandra winery just outside Yalta in the Crimea and the Savane winery (Stalin shipped the Tsar's cellar from Massandra to Tbilisi in 1941 to prevent it falling into German hands, and the wines were then reported to have been returned to Massandra in 1945). In hindsight it would have been fascinating to have asked if I could have explored Savane further. What wines were really there, and might some have been stashed away in Savane, never to be returned to Massandra? Wines from Massandra were auctioned by Sotheby's in 1990, 1991, 2001 and 2004, and I remember being fortunate enough to taste some of the Crimean wines available through these sales, but sadly tasted nothing historical from the Savane winery on any of my visits to Tbilisi.
So many anecdotes could be written about fascinating times spent in parts of the former Soviet Union during the chaos of its disintegration. As for the Savane winery, I was told on good authority that complications in determining the ownership had prevented sales during the latter 1990s and 2000s, but it remains remarkably difficult to find out anything about what really happened (some pictures were shared on Facebook in 2015). Another recollection of that 1997 visit was that despite my best efforts to find wines then being produced locally in Georgia, I was most definitely recommended only to drink the wines that had been shipped and bottled in the Netherlands and Belgium before being returned for consumption in Georgia. The quality of the bottling line shown above might explain some of my hosts’ concerns! However, I am certain that I did not often follow the advice I was given. I so look forward to returning to Georgia again before long, and especially to revisiting the Kakheti vineyards and tasting some of the wonderful wines made there. I still wonder where the Tsar’s wines are now.
Vines overhanging lunch tables on the way to Gelati, 2005
Going through my mother’s many papers recently, I discovered this document – a 1984 summary of the computer training that she had introduced to the school in the early 1980s. The remaining pages that can be seen through the thin paper continue with details of the syllabus.
I’m sharing it here, because for me it reminds me of four very important things:
There is actually a long history of computer learning (and the use of digital tech for other types of learning) in schools, going back at least forty years. We should surely have learnt how to do this well in that time, and yet so many initiatives fail to learn the lessons of the past, reinvent the wheel, and repeat the mistakes that we made before!
My mother taught at that time in a single sex primary school, and I have no doubt (from the messages I have received from those she taught at this time) that the girls she taught gained as good a digital training as any at the time, and probably very much better than most. We need to remember therefore that initiatives to teach girls to use digital tech have also been around for a long time, and yet we still don't seem to have learnt the lessons well about how to do this!
Although my mother was a maths teacher, it is great to see that she was not only teaching the girls to use computers for maths, but also for music and writing, and that she was using quizzes and games in her teaching.
A final striking feature is that even back then she noted that about half of the girls had a computer at home (although I wish I knew whether this meant that it was their own computer or that they had access to a family computer). It remains essential for girls to have easy access to digital tech outside the school environment if they are to be able to use it effectively for their learning.
I hope others find this re-discovery as exciting as I do! The mention of BBC, Spectrum, ZX81, Vic 20 and Commodore computers brings back so many memories of the early days of using computers in schools (and indeed in universities) at the time.
The soundbites from the widely acclaimed success of COP 27, especially around the creation of a loss and damage fund (see UNCC Introduction to loss and damage), made me look once more at the realities of global CO2 emissions to see which countries are actually generating the most CO2, which are improving their performance, and which are suffering most. Sadly, this only made me appreciate yet again that the over-simplifications that occur during so many UN gatherings such as COP appear to be more about political correctness and claiming success than about developing real solutions to some of the most difficult challenges facing the world.
The UN Climate Press Release on 20 November summarised the outcomes relating to the fund as follows: “Governments took the ground-breaking decision to establish new funding arrangements, as well as a dedicated fund, to assist developing countries in responding to loss and damage… Parties also agreed on the institutional arrangements to operationalize the Santiago Network for Loss and Damage, to catalyze technical assistance to developing countries that are particularly vulnerable to the adverse effects of climate change”.
Unfortunately, it is not quite as easy as it might seem to validate the claim underlying this that it is the rich countries who do most of the pollution and should therefore compensate the poor countries where the most harmful damages from CO2 occur (see, for example, ThePrint, India; UN News, noting that “Developing countries made strong and repeated appeals for the establishment of a loss and damage fund, to compensate the countries that are the most vulnerable to climate disasters, yet who have contributed little to the climate crisis”; and BBC News, “A historic deal has been struck at the UN’s COP27 summit that will see rich nations pay poorer countries for the damage and economic losses caused by climate change”). How should it be decided, for example, which countries should be donors to this fund, and which should be beneficiaries from it? Pakistan, which led much of the discussion around the need for richer countries to fund the poorer ones, was actually the 27th largest global emitter of CO2 in 2019; China was the largest contributor, and India the 3rd largest.
The Table below, drawing on World Bank data (2022), gives the various rankings of the top 30 countries in terms of CO2 emissions per capita in 2019, and CO2 total emissions in 1990 and 2019, as well as the change in ranking of the latter two columns.
[Table: top 30 countries ranked by CO2 emissions per capita (metric tons, 2019), total CO2 emissions (kt, 1990 and 2019), and change in rank 1990-2019]
Many important observations can be made from these figures, and I highlight just a few below:
Per capita emissions
The highest per capita emitters are generally those in countries with recently developed hydrocarbon-based economies, such as Qatar, Kuwait, Bahrain, the UAE and Brunei Darussalam, and generally not in the old rich industrial economies of Europe.
Surprisingly, quite a few European countries such as the UK, Denmark and Spain (ranked 52nd-54th) actually lie well outside the top 30 highest emitters.
The twelve lowest per capita emitters for which data are available (not shown here) are all African countries.
There are far fewer countries above the world average of 4.47 metric tons per capita (which would rank 61st) than beneath it, implying that the distribution is heavily skewed towards the top: Qatar, at 32.47, emits 28 metric tons per person more than the average, yet 55 countries have emissions per capita of <1 metric ton.
More than 60% of total CO2 emissions are generated by people living in five countries (China, 31.18%, the United States 14.03%, India 7.15%, the Russian Federation 7.15%, and Japan 3.15%). Eleven further countries, all producing more than 350,000 kt CO2 annually, account for a further 16.68% of emissions. More than three-quarters of emissions in 2019 were therefore from people in just 16 countries.
Those countries with the lowest total emissions are nearly all small island states (SIDS; not shown in the Table), but note that these were not necessarily the lowest per capita emitters.
The changes in total emissions since 1990 are also very interesting. The highest increases within the top 30 were Indonesia (+16) and Iran (+12), although much higher risers came into the top 30 from below, including Vietnam (+59), Malaysia (+23), UAE (+16) and Pakistan (+15).
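The share arithmetic behind these observations can be checked directly. A minimal sketch, using only the percentages quoted above and treating the "eleven further countries" figure of 16.68% as given:

```python
# Shares of total global CO2 emissions in 2019, as quoted above (percent).
top_five = {
    "China": 31.18,
    "United States": 14.03,
    "India": 7.15,
    "Russian Federation": 7.15,
    "Japan": 3.15,
}

top_five_share = sum(top_five.values())  # just over 60%
next_eleven_share = 16.68                # eleven further countries, as given above
top_sixteen_share = top_five_share + next_eleven_share

print(f"Top five countries:    {top_five_share:.2f}%")
print(f"Top sixteen countries: {top_sixteen_share:.2f}%")  # more than three-quarters
```

The five listed shares sum to roughly 62.7%, and adding the further 16.68% gives about 79.3%, which is the "more than three-quarters from just 16 countries" point made above.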
These data do not make easy reading for policy makers, campaigners and the UN system as a whole, all of whom like to have simple answers and short soundbites. The world is unfortunately too complex and messy for these. As the world's population passes 8 billion (2.8 times what it was when I was born), population growth is the dominant factor in determining total country-based emissions, but economic growth (following the US-led carbon-based capitalist mode of production) has also played a significant part. The big risers in total emissions are countries with large populations and/or with high economic growth rates over the last 30 years. Neither of these should be surprising. Poor countries, with low economic growth and relatively small populations, are never likely to be amongst the largest consumers of energy. Overall, the biggest factor determining total CO2 emissions over the last century, and especially in the last 50 years, has been human population growth (see my recent post on "climate change"). Moreover, there has long been an intricate and complex relationship between humans and carbon: the carbon cycle and the production of oxygen are essential for human life, and our economic systems have been driven by carbon as a fuel for centuries. These complexities make it extremely difficult, if not impossible, to argue that we need to create two groups of countries: one being the recipients of funding (from a loss and damage financial facility), and the other being contributors to it. Instead, we need to work together to transform the underlying factors causing environmental change, of which CO2 emissions are but a small part.
That is not, though, to say that there should not be much greater global effort to work together to resolve the environmental problems caused by our centuries-old carbon-based economy (as well as those caused by so-called renewable energy). It is also completely separate from moral arguments suggesting that there should be a shift in wealth distribution from the rich to the poor. However, these should not be conflated into over-simplistic statements and assertions about responsibility for climate change, such as those being promoted by UN agencies and mainstream media at the end of COP 27. It is also to reassert that we need to work with renewed vigour collaboratively across sectors and disciplines to understand better the complex interactions that humans have with the environments in which we live, and then to make wise decisions about how to act on that understanding in the interests of all the world's peoples, and not just those of the rich and privileged parts of the world.
The above draft was written on 21 November 2022 (and has been revised slightly subsequently)
In response to the above, Olof Hesselmark kindly asked why I had not added further details also about the spatial distribution of CO2 emissions – something that as a geographer I care greatly about! I responded that I hadn’t wanted to complicate matters further, but also that I guess it was because I am aware in my own mind of these spatial distributions, and the country names (and sizes) are in-built into my consciousness! However, they do add an important additional element of complexity to the discussion, and I am delighted that he has agreed for me to add his slightly cropped map of CO2 emissions per sq km below:
I’m not entirely sure which projection this is, but my preference for such maps is Eckert IV, or other equal area projections such as Gall-Peters or Mollweide that place less visual emphasis on the apparent size of countries in high latitudes. This map nevertheless highlights the varying densities of emissions, with China, Europe and the USA being high, and Africa and Latin America being low. It should also be emphasised that there are enormous differences within countries, as well as between them, with urban-industrial environments generally being much higher in their CO2 emissions than sparsely settled rural ones.
A different perspective once again comes from the Smithsonian Magazine's 2009 map below (carbon emissions from 1997-2010), which does indeed show how a very few areas contribute the largest amount of CO2 emissions.
“Climate change” causes nothing! Yes, read that again, “climate change” causes nothing. It is a result, not a cause. Yet, as delegates at COP27 continue to bemoan the impacts of climate change, promote ways of limiting carbon emissions, and redress the global balance of power and responsibility – as well as enjoying themselves, feeling important, serving their own interests, and basking in the glory of greenwashing (at last there is something on which I can agree with Greta Thunberg about!) – the adverse environmental impacts of digital technologies go almost un-noticed.
This series of three posts seeks to redress this balance, and argues for a fundamentally new approach to understanding and trying to improve the impacts of digital technologies on the environment. It situates the climate change rhetoric within the wider context of human impact on the environment (of which climate is but one element). The first of these posts provides a critique of much of the rhetoric concerning climate change, the second articulates the case for a new approach to understanding the relationships between digital tech and the environment, and the third provides positive suggestions for the next steps that need to be taken if we are indeed to use digital tech wisely to help manage our human relationships with the environment. Throughout, it emphasises the need to understand the interests underlying the present rhetoric and practice around the interactions between digital tech, climate change and the environment.
The rhetoric of climate change: itself part of the problem
Changes in the earth's climate are very real, and have existed since long before humans could appreciate them. The dramatic impact of humans on the world's weather patterns and climate over the last century, though, has only really been recognised and appreciated more widely in the last 40 years, in large part as a result of the dramatic increase in funding given to scientists working in this field. Climate activism and the UN's interest in appearing to try to do something about it are relatively recent phenomena (the first COP meeting was held as late as 1995). It is fascinating to recall that ground-breaking works in the 1960s and early 1970s about human impact on the environment, such as Rachel Carson's (1962) Silent Spring, and the Club of Rome's (1972) Limits to Growth report, focused on a much more holistic view that paid surprisingly little explicit attention to climate. Five key inter-related concerns with the current dominant rhetoric about "climate change" can be teased out from these basic observations.
Over-simplified rhetoric of “Climate change” hides the significance of human impact
The term "climate change" has become so bowdlerized that it has lost any real value. At best, in common parlance it can be interpreted as being a shortened form of "human induced climate change", but this shortening hides the fundamental importance of "people" as being the main cause of the changes in climate and weather patterns that are being experienced across the world. The expression "climate change" is actually just a collective observation of a series of aggregated changes in weather patterns across the world. It has no explanatory or causative power of its own. It is we humans who are causing fundamental changes to the environment, and these go far beyond just climate. We still know far too little about the complex interactions between different aspects of the world's ecosystems to be able to predict how these will evolve with any real certainty. "Keep It Simple, Stupid" (KISS) quite simply does not work when discussing human induced climate change.
Externalising “climate change”
The use of the term "climate change" also has much more subtle and malign implications, because it externalises our understanding of impacts and thus the actions that the global community (and every one of us living on this planet) needs to take. Rather than human actions being seen as the fundamental cause that they are, externalising the idea of "climate change" as a cause means that the focus is subtly turned to finding ways to limit "climate change" rather than actually to change our underlying human behaviours. The classic instance of this is the focus on reducing carbon emissions by developing renewable energy sources, without actually changing our consumption patterns. The very considerable emphasis within the digital tech community on reducing its own carbon emissions and inventing ways through which digital tech can be used to contribute to "green energy" (typified by the ITU's emphasis thereon) is but one example of this (see further in Part 2). Moreover, at a very basic level, the emphasis on carbon, although important, has tended to reduce the attention paid to other contributors to global warming, such as nitrous oxide (N2O), which has a Global Warming Potential (GWP) 273 times that of CO2, or methane (CH4), which has a GWP of 27-30 times, over a 100-year timescale (USA EPA, 2022).
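To make these multipliers concrete, emissions of a gas can be converted into tonnes of CO2-equivalent by multiplying by its 100-year GWP. A minimal sketch using the GWP values quoted above (the CH4 value of 28 is an illustrative pick from within the quoted 27-30 range, and the tonnages are purely illustrative):

```python
# 100-year Global Warming Potentials (GWP) as quoted above (USA EPA, 2022).
# CH4's GWP is quoted as 27-30; 28 is used here as an illustrative value.
GWP_100 = {"CO2": 1, "CH4": 28, "N2O": 273}

def co2_equivalent(gas: str, tonnes: float) -> float:
    """Convert emissions of a greenhouse gas into tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

# One tonne of N2O warms as much over a century as 273 tonnes of CO2.
print(co2_equivalent("N2O", 1.0))   # 273.0
print(co2_equivalent("CH4", 10.0))  # 280.0
```

This is why a narrow focus on CO2 alone can understate the warming contribution of comparatively small tonnages of other gases.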
The focus on climate means that wider environmental impacts tend to be ignored
Focusing on “climate change” in general, and rising temperatures (global warming) in particular, has had a very serious negative impact on the ways in which other environmental parameters are considered and affected. In essence, “climate impact” often trumps most other environmental considerations, even when at a local scale other environmental impacts may actually be very much more serious. In reality, climate is but a part of the wider interconnected world in which we live, and for a more sustainable future it is essential to adopt a comprehensive ecosystem approach to understanding the full environmental impacts of any intervention. But one example of this is the way that batteries are now required to store “renewable” energy from solar panels or wind turbines, and the resultant serious environmental degradation caused by mining for lithium in Chile, Australia, Argentina and China (note too that total global reserves of lithium in 2018 were only 165 times the annual production volume, and demand is increasing rapidly).
Sustainable development, climate change and economic growth.
I have long argued that the term “sustainable development” is a contradiction in terms, and that the Sustainable Development Goals (SDGs) alongside the UN’s Agenda 2030 are deeply flawed, not only in implementation but also in design (see Unwin, 2015, 2016, 2017, 2018, 2021 and 2022). In essence, while development is largely defined in terms of economic growth, it is difficult to see how it can be compatible with sustainability when defined as the maintenance of valued entities. A deep flaw in much of the global “climate change” rhetoric about the use of renewable technologies to replace energy based on hydrocarbons is that it still tends to be combined with an economic growth agenda based on technical innovation. It does little, if anything at all, about changing global consumption patterns, the “perpetual growth” model, and the underlying capitalist mode of production (see Unwin, 2019). Indeed, elsewhere, I have often reflected on what a “no-growth” model of society might look like.
One of the core problems with the dominant global rhetoric around climate change (as expressed particularly in COP27, but also in much popular activist protest) is that it does not sufficiently tackle the fundamental challenge of population growth and increased consumption. The two simplified graphs below illustrate the scale of this basic problem.
The broad similarity in these two curves is striking. More than anything else, it has been the overall global growth in population over the last two centuries, enabled in large part by the enterprise associated with the individualistically based capitalist mode of production, that has driven the environmental crisis of which "climate change" is but a part. The controversial film Planet of the Humans (produced by Michael Moore) makes similar arguments, and it is unfortunate that its many critics have tended to focus more on some of its undoubtedly problematic points of detail rather than the crucial message of its overall argument (see Moore on Rising). The "capture" of the UN system by global corporations, exemplified by the large numbers of business leaders attending COP27, seems to confirm one of Moore's core arguments that these companies are now driving much of the climate change agenda.
If the world’s peoples really want to “mitigate the effects of climate change”, there needs to be a dramatically more radical change to our social, cultural, political and economic systems than has heretofore been imagined, and this needs to begin with a shift to more communal rather than individualistic systems, a focus on reducing inequalities rather than maximising economic growth, and the crafting of a more holistic approach to environmental issues rather than one primarily focussing on carbon reduction to “solve” “climate change”.
Who benefits most: understanding the interests behind “climate change” rhetoric
Social movements, economic practices, cultural behaviours and political systems do not just happen; they are created by those who have interests in making them happen and the power to do so. This is as true of the "climate change" rhetoric and movement as it is of any other. Five particular groups of people have shaped and sought to take advantage of this. First have been the scientists who have believed in the importance of this issue and have sought to build their careers around it. Academic careers are not neutral, and the story of how they built coalitions and peer networks, influenced research councils and political groups, and helped to forge a global "climate change" agenda that served their own interests is a fascinating one that remains to be told. Second have been private sector businesses and corporations, big and small, who have sought to influence global policy and profit from a shift from hydrocarbons to renewable energy. This has been fuelled by the fetish for innovation, and the idea that technological change can inject a new impetus into economic growth. Their lobbying of governments to subsidise many of the start-up costs of renewable energy technologies, to overturn existing environmental legislation to permit the creation of new industrial landscapes in the name of solving "climate change", and to enable consumers to afford to purchase them through further subsidising their energy costs, has been hugely successful. The global capitalist system, utterly dependent on economic growth, is ultimately leading ever more rapidly to its own environmental catastrophe. Third have been those who enjoy the thrill and camaraderie of political activism, who have found in the simple "climate change" mantra something that will unite many of their common interests. Fourth has been the UN system with all of its distinct agencies, each of which has found a cause around which to promote its identity as contributing in a worthwhile way to the benefit of humanity.
Finally have been the politicians, eager to be seen to be doing "good", and to contribute to a worthy international cause, in the interests of enhancing their own political careers.
The trouble is that it is not "climate change" itself that is the problem. Instead it is these interests, shaping the rhetoric of climate change, that have helped to exacerbate the very real environmental damage being caused to this planet. Self-interestedly promoting the rhetoric of "climate change" is of course much easier than tackling the real roots of the problem, which lie in the economic, political, social and cultural processes that they too have crafted over the last half-century.
Part 2 of this trilogy of posts examines how these arguments apply in the context of the digital tech sector, and Part 3 calls for a dramatic new approach to balancing the environmental harms and benefits of the creation and use of such technologies.