Right after I finished my thesis, word started spreading in my university that a website – a computer program, rather – was able to write, think, and, perhaps more importantly, complete almost any tedious assignment in the blink of an eye. Like wildfire, this program was set to challenge the educational system as a whole, as students from elementary school to graduates working on their final theses began harnessing the power of this new technology. It quickly became a topic of concern across educational institutions all over the world, as the quality of work this program was able to produce was, at least ostensibly, eerily like that of a human. How was it to be distinguished or identified as such? Beyond the obvious use of this technology by students to save time and effort, its potential was invariably also recognised by anyone who needed a quick email written, a long document summarised, or an outline of any subject provided. Labour of any such kind was set to be expedited in unprecedented ways.
This dynamic between technology and labour made me think of the ways in which technology of the past, particularly during the first industrial revolution, affected the broader socioeconomic dimensions of our society: how technological tools both expedite and replace human labour, and how the increased productivity is reflected in societal welfare as a whole. There are always winners and losers when technology sets out to compete with humans in terms of labour output. In the first instance, with the Industrial Revolution, it was painfully clear that societal welfare was to come at the cost of generations that wrote regulations with their blood.
Will that be different this time around? Well, it may first be worth asking whether we can even compare today's technological progress to the technological revolutions of the past. Is the development of AI the signal of another technological revolution?
From the Computer Revolution to AI – another Technological Revolution?
Why do we develop new technologies? Some might argue that it is instinctual to us, a very part of what makes us human: to manipulate our surroundings, build tools, shape our world to be our own. Major technological stepping stones in human history, from the advent of the wheel to that of computers, have constantly driven us towards daunting new heights in the frontiers of progress. To survive, ultimately. The world we have built today reflects that more than ever, as technology has truly become an integral part of our every day – I certainly could not imagine a day where technology does not play a central role in my life. From the phone in my pocket to the laptop at my fingertips, even ten years ago technology did not seem as omnipresent as it is today. It is easy to take every little improvement and innovation for granted, as the excitement over today's iPhone shifts attention to the better one that will follow tomorrow. As someone born in the late '90s, I think many of my generation, and those since, tend to forget how far we have come with computerised technology. Even the most powerful supercomputer of a few decades ago – the Cray-2 – is about 5,000 times less powerful than the iPhone 12. Judging by phones alone, the changes to our society have been unprecedented. Computers seem to be getting better by the day, bringing much more than just fancy phones, continuously transforming not only the way we live but also how we work. Almost anyone working in today's economy relies on computers in one way or another, following the creeping process of computerisation over the last few decades. If this really is a transformation of society, then, it may be worth asking: has a new stage of technological revolution already begun, or is this just the evolution of technology?
Beyond computers themselves, with the advent of an entirely new class of machine today – broadly termed Artificial Intelligence, or AI – this question may be worth seriously considering. Simply put, AI is a computerised variety of algorithmic networks, some of which can emulate the way our brains receive, store, use, and communicate information. Trained on vast quantities of data (essentially digitised information), AIs have become startlingly adept at providing ‘intelligent’ output, possibly closing in on that of a human, and potentially surpassing us in the future. Born of the ever more powerful computers we develop, these systems are tasked with approaching problems ‘intelligently,’ although at present they remain limited by the ‘hard logic’ of their algorithmic foundations. Recent developments in language AI systems, for instance, have produced programs that can actively communicate and complete complex tasks, sometimes better and faster than humans can. The AI company OpenAI released a “Generative Pre-trained Transformer” (GPT) at the end of 2022 that would quickly become a sensational display of this potential of AI technology. From stories to poems and songs, emails to essays, trained on the data of millions of websites, this is just one application of AI. In its varieties, AI can generate and interpret visual material, animate 3D models, edit content, or even write its own code. The implications of this technology are justifiably concerning.
Be that as it may, it is safe to assume that a new technological revolution is already well under way. Given the speed at which computer technologies have been advancing, historians looking back at this time will likely confirm as much. Even by current estimations, and my own judgement, the AI revolution has commenced. Researchers from OpenAI and the University of Pennsylvania estimate that 19% of US workers have 50% or more of their tasks susceptible to automation by current AI technologies. Across the board, AI technology stands to automate various tasks done by humans in current economies. The potential disruption this could cause in today's, and tomorrow's, labour markets is enormous. Whatever the outcome will be, it will be a consequence of the choices we make today. What do we plan on doing with AI technology? Who is AI technology ultimately going to benefit? To approach this conversation, it may be useful to look at what impact machines have had on our society in the past few centuries.
The Industrial Revolution at First Glance and the Engels Pause
Perhaps the first industrial ‘machine’, the ‘Spinning Jenny’, was a relatively simple device. Invented in England around 1764, it connected eight to sixteen spindles to a wheel that a spinner turned by hand, enabling a single worker to spin more thread than ever before. Although still powered by hand, this technology set the stage for modernity: the automation of labour. The 18th century was a particularly active time when it came to technological innovations of all kinds, and with the steam engine emerging around this period, the initial stages of the industrial revolution were set. Where machines automated, and augmented, labour, industries and wealth flourished. The era of automation we still live in today had begun. In Manchester, perhaps the first city to truly begin industrialising, this transformation was especially well recorded. Friedrich Engels, in his 1845 book “The Condition of the Working Class in England”, collected his observations of the changes in society following the adoption of machinery into the economy. He considered the changes technology was bringing about to have “altered the whole of civil society.” Entrepreneurs formed an emerging industrial class, growing wealthy from the increased output that followed the mechanisation of their production. Yet little of this wealth was seen by the labourers working the machines, as wages remained constant amidst the rise in capital savings and investment. This was a key mechanism of the industrial revolution, as it required capital investment to sustain it – capital comprising everything from the machines themselves to the manufactories that housed them. Indeed, this appeared to be a pattern across all of Britain, following the implementation of machines and the competitive pressures producers faced without them. An emerging industrial class would be both the drivers and beneficiaries of the technological revolution, rooted in the laissez-faire philosophy of the time. Coined the ‘Engels Pause’ by economist Robert C. Allen, it would take roughly eighty years after the invention of the Spinning Jenny before the increased output brought by new technology translated into an increase in real wages. Eighty years is a long stretch of time, and the social toll it exacted will likely remain indeterminable.
Although Allen principally presents the phenomenon in economic terms, the implications of the Engels Pause also extend into the sociopolitical dynamics of the time. Legislative questions and debates mostly followed the laissez-faire attitude of the day, leaving little room for the interests of workers. Expected market self-regulation was policy, enforced by the ‘Combination Acts’ of 1799 and 1800 that prohibited unions in Britain entirely, leaving workers fully exposed to market forces. Workers were not blind to this fact, and on a number of occasions resorted to violent resistance.
Groups such as those broadly termed the Luddites engaged in wrecking machinery as their only means of exerting some say in matters of wages. While workers were able to keep wages constant this way, “the triumph of mechanisation was inevitable,” as British historian E. J. Hobsbawm wrote, and policy was seriously lagging. This is what the Engels Pause was a consequence of: the inevitability of machinery and our approach to its ends.
The Dangers of a Laissez-Faire Technological Arms Race
The Engels Pause was the consequence of policy that let the market direct technological revolution, and letting market forces dictate the progress of AI might trigger another Engels-like pause today. Without a regulatory framework in place, there is nothing to stop the general adoption of AI in the global economy, as companies stand to save on the human resources required to operate a business. This is the common pattern that followed the introduction of the technologies that marked the industrial revolution, and to expect anything different today would be naive. This brings us back to the question: what do we plan on doing with technological revolution today? A nuanced question to be sure, and one with innumerable answers. Retrospectively, the very concept of machinery has come to describe a miscellaneous collection of technologies, some of which have changed the world for the better, and some certainly for the worse. It is likely going to be the same with AI, a technology that already finds applications in sectors as diverse as healthcare and warfare. From the active diagnosis of diseases to autonomous drone swarms, what is certain is that the ends of AI will hinge on our ability to manage the interests driving its development. The growing uncertainty that follows the development of AI technologies will only make proactive policymaking harder in the future.
However, what we have learned from the past about the potential of such technologies in terms of labour output and productivity gains does provide us with at least some element of certainty. Without close legislative oversight of the development and direction of AI, we hand commercial pressures the prerogative of directing technological revolution – which will likely have the effects anticipated by an understanding of the Engels Pause of the past.
With all of its potential, if we treat AI technology as a form of capital, it is telling that most of it today is owned by private companies. As capital, the simple fact of private ownership already makes AI safety difficult to manage, especially since all aspects of AI-related safety measures are in their infancy. Without access to what exactly these companies are currently training their AI to do, and with little regulatory oversight, what will happen if AI breaks ever more thresholds of intelligence and output? This is an unimaginable power we are speaking of, left in the hands of private actors whose ultimate ends are up to their private desires, whatever those may be. That adds an immense element of uncertainty. Not to mention, we are already observing some AI systems to have so-called ‘emergent capabilities’ – perhaps a sense of a system evolving on its own – the more powerful they become. The principle of ‘emergence’ as such, articulated by Nobel-Prize-winning physicist Philip Anderson, describes this phenomenon: the more complex a system, the harder it becomes to understand by looking at its fundamental parts. This is true of everything from mathematics to biology, perhaps accounting for even our own consciousness. For AI, this is a dangerous law of nature that humans are playing with, perhaps a crucial part of the character of the AI to come. However, it is also a matter of our relative understanding of AI, and policymakers need to be up to speed with setting adequate controls.
The entire process of AI development needs to be democratised, and the potential consequences of AI adoption need to be managed collectively, not by a handful of companies led by a few individuals. This may thus be a good starting point for policy to step in and gain an informed understanding of what AI technologies are being built to do – before they are released as products and adopted into the global economy as supplements to human labour. AI technology is a great responsibility and cannot be left to self-regulation without the expectation that it may transform our society faster than we can direct it. An open letter from the NGO Future of Life Institute, signed by experts and industry leaders in AI, asked: “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civili[s]ation?” Doomsday scenarios aside, what is concerning right now is that a few companies already hold a large degree of power over the usage and applications of AI. Worryingly, these companies have a strong incentive to develop ever more powerful AI technology, which may be able to evolve itself further, perhaps even without human intervention. Risk evaluation along the way is then ultimately concentrated in the hands of the few, and with no telling how, and at what rate, AI will advance in the years to come, market forces could outweigh our ability to manage them. Already, access to these AI systems is tightly controlled by the private enterprises developing them; even for independent research, the AI's output is first evaluated, and scored, by the companies themselves.
Towards a Regulated AI
It is difficult to suggest or identify solutions to the hurdles AI presents us with in purely domestic or national terms. The issues AI is set to accentuate, such as the wealth inequalities that have been rising for decades, or fragile international supply chains, are global issues after all, requiring a coherent and cohesive approach from everyone involved and affected. Exemplifying this need for global cooperation were the market shocks of the recent Covid pandemic, during which many tech companies flourished while over 100 million people were pushed into poverty between 2020 and 2021. Billionaires all over the world, on the other hand, increased their wealth by $4.4 trillion. That is a group of roughly 2,500 individuals, many (if not most) of whom will benefit greatly from the potentials presented by AI. Given these dynamics alone, what can we expect from an AI that benefits small margins of society while affecting hundreds of millions of people negatively? As the pandemic showed, a net negative effect on society is little deterrence for seekers of profit, ready to exploit a globally changing societal landscape.
Cognisant of the past, we need to tread carefully in our journey into ever greater technological frontiers. The sheer number of (real and potential) applications of AI makes it truly difficult to assess against events such as the industrial revolution, but the parallels that can be drawn warrant serious consideration of how we approach new technologies today. Just as the mechanisation of society changed the nature of capital during the Industrial Revolution, so too has the digitisation of our global society over the last decades done the same, and AI stands to carry this development further.
Technology companies that specialise in computer technologies are some of the most globally present companies in the world. The opening of the world economy has pressured corporate taxation immensely since the 1980s, with rates falling by roughly 20% in that time. Given the global agility of the new types of capital used by technology companies, this presents a challenge to policymakers and risks capital flight into more lenient countries. More than just a taxation policy challenge, this also applies to regulation of the development of AI, which only heightens the need for careful legislative oversight. However, national tax regulation alone is likely not sufficient to manage the technological revolution we are standing at the precipice of. AI and society are meeting on a global level, and the stakes of an incoherent approach across domestic and global policies are high. It is also for this reason that calls for international tax reform are getting louder, as much of the wealth generated today goes to increasingly small margins of our society. One hundred and thirty countries have agreed with the Organisation for Economic Co-operation and Development to aim for a global tax rate for multinational enterprises of 15% starting next year. If properly implemented, this would be a milestone for our global economy and crucial for more specific taxation laws in the future, particularly those related to AI.
Whatever forms AI ultimately takes, for the better or worse of humanity, it will undoubtedly change the world in unprecedented ways. We still have time to direct some of that change, and to ensure an equitable benefit of AI for the whole of society. This will require legislative oversight of both the development and the adoption of AI technologies. Because AI can mean a lot of things, this is certainly going to be a difficult task, but we do have at least some certainties to work with. To start, we know that AI exposes many jobs in our labour market to automation while also holding the potential to augment human labour. We also know that AI research is currently mostly private, with competitive advantages setting the precedents of the current technological revolution under minimal regulatory controls.
Finally, we are also aware that technologies of revolutionary proportions in the past contributed to entirely new modalities of wealth inequality. Together, these dynamics are setting the stage for a new Engels-like Pause. We cannot fall into the same trap as we did during the first industrial revolution, relying on market pressures for innovation in the expectation of greater welfare for all. Regulatory bodies need to step in to bring a measure of balance to the forces of technological evolution and its revolutionary implications for society. We have the capacity to work sustainably with the AI industry and build frameworks of collective responsibility and bargaining power. Otherwise, what social cost are we willing to pay for technological progress?