“The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”
Fans of sci-fi may recognize this quote from Terminator 2: Judgment Day. As a child, the scenario it portrayed utterly terrified me. Nor was it the first movie to feature machines becoming self-aware: Stanley Kubrick’s 2001: A Space Odyssey is famous for HAL 9000’s calm, monotone but spine-chilling line: “I’m sorry, Dave. I’m afraid I can’t do that.”
Movies and fiction of the 1970s–1990s are not short of scenarios depicting digital, computerized technology crossing over into a human state of consciousness. Until not too long ago, artificial intelligence was seen as a rapid growth of computerized learning that would culminate in a sudden transition to an intelligent life form. Self-awareness is not an unrealistic end game for a technology, either. This video, for example, shows that technology is capable of passing a form of the ‘King’s wise men’ test of self-consciousness.
Looking at it through today’s lens, we should be relieved and concerned at the same time. Relieved, because the Skynet incident from Terminator 2 hasn’t happened: we are still nowhere near a level of AI that could genuinely replicate the complexity of the human brain. AI has no doubt come a very long way, but it can currently only reliably replace the simplest, most predictable decisions that humans make. Even machines that win at Jeopardy or Go do so by processing a more complex set of scenarios than a human can at any given moment, but only within a defined universe of possibilities. Original thought is still out of reach of the robots.
We should be concerned, though, because it is developing in a more pervasive way than anyone predicted. AI is not the sole property of a government or military organization as was expected decades ago. It belongs to us all. It is being grown and developed in all corners of the world. It is hostage to our natural urges to improve technology, and this may make it unstoppable. There’s no single chip to destroy, like in Terminator. In this democratized and pervasive form, perhaps nothing can stop the growth of AI. Many will argue that nothing can control it, either.
Why should we be worried about AI?
While there’s little chance of Alexa becoming self-aware and ordering massive quantities of movies starring Tom Selleck (I hear she is a fan), we should be worried about AI for very different reasons — economic ones. There have been periods in history where the average worker has really lost out from the development of technology, and we may be at the beginning of one of those — and an extended one at that.
Only one period in history comes close to paralleling the massive growth in technological development we have experienced since the 1970s: the Industrial Revolution, particularly in the UK and US. The Industrial Revolution drove immense social and economic change, resulting in manual jobs being replaced by machines, mass movement of people, a disruption in the share of wealth among the population, and permanent changes in global trade.
Professor Bob Allen of NYU is an expert economist who regularly looks to historical parallels for insight into today’s issues. In a recent talk of Bob’s I attended, he presented the results of his detailed analysis of productivity and wages in both the Industrial Revolution and the last 30–40 years. Looking back to the period of industrial change in Britain between 1770 and 1890, Bob studied its impact on the growth of productivity and the growth of real wages (i.e. the affordability of basic goods relative to income). This chart summarizes his findings:
The pattern is clear. In the first 60 years of the Industrial Revolution, investors benefited at the expense of the worker. Only from 1830 (when, Bob argues, all major mechanization changes had been completed) did wages start to move with productivity growth. Bob’s analysis also shows similar results for the US.
Looking at this in the context of today’s situation, the parallels are strong. Since the 1970s we have been in a similar period where the worker is losing out on their share of the benefits of technological development. The next chart shows the development of the trend over the longer term in the US:
The period from 1980 to today is particularly worrying. Should we expect that, as in the Industrial Revolution, wages will soon catch up? Bob Allen is pessimistic about this, and I can understand why. We are only just at the beginning of the AI revolution: only a small number of jobs has been affected to date, and the disruption has decades more to run. The average worker may need to hunker down for the long haul.
Our three choices
We don’t have a robot with an Austrian-American accent to send back in time to solve the emerging inequality that this disruption is creating, but are there things that we can do to keep it under control? I don’t pretend to have any answers here, but I regularly hear three points of view on this question:
- Interventionism: The call for policy intervention is growing. At a recent talk I attended, a quote by Lord Robert Skidelsky, the acclaimed economist, stuck with me: “Even techno-utopians now have to believe in policy intervention”. Is it possible to slow down or even stop AI if we believe it is harmful to people’s well-being or need to work? What kind of policy or international body would be needed for this to be effective?
- Laissez-faire: Dare I say, the majority of opinions I hear fall into this category. Let’s just ride the wave and hope for the best. What drives this? A feeling that we cannot realistically control or predict what will happen? Such a deep and compelling interest in AI that people are willing to risk their own economic well-being to see where it ends up? A faith that things always balance themselves out economically in the end, even if not in our lifetime?
- Fight AI with AI: Perhaps the most interesting work in this sphere is in trying to predict some of the likely developments that threaten people’s economic well-being and the possibility of using AI approaches to counteract these threats. Studies by the McKinsey Global Institute have helped us better understand how ‘automatable’ different jobs are, while academic work by the likes of Doyne Farmer promises to shed greater light on how ‘portable’ certain human skill sets will be when faced with robots taking their jobs.
Meanwhile, as we make our minds up, somewhere in the world another technology is being iterated that will change our lives for good or for bad in years to come.
The clock is ticking. Which option would you choose?
See here for more details on Bob Allen’s research.