As the great wave of digital technology breaks across the world, artificial intelligence creeps ever deeper into the very fabric of our lives. From personal virtual assistants and chatbots to self-driving vehicles and tele-robotics, AI is now threaded into large tracts of everyday life. It is reshaping society and the economy. Klaus Schwab, founder of the World Economic Forum, has said that today’s AI revolution is “unlike anything humankind has experienced before”. AI is not so much an advance in technology as the metamorphosis of all technology.
This is what makes it so revolutionary. Politics changes dramatically as a consequence of AI. Not only must governments confront head-on the fallout from the mass replacement of traditional jobs by AI, algorithms and automation, but they must also ensure that all citizens are adaptable and digitally literate. AI will be fundamental to almost all areas of policy development.
In many respects, what we see today is a new agenda. The United Nations has predicted that the number of people aged 60 or over will double between 2015 and 2050 to nearly 2.1 billion, accounting for 20 per cent of the world’s population. This will coincide with falling birth rates in many countries and could result in a “demographic time bomb”. Falling tax revenues and increasing welfare payments will significantly challenge governments across the world.
The consequences of our increasingly automated global world involve a shattering of political orthodoxies. Some, such as Boston Consulting Group partners Vincent Chin and Christopher Malone, have argued that such times call for bold new thinking – such as a Universal Basic Income (UBI). Trials of a UBI are underway in Finland, Canada, Brazil, the Netherlands and parts of Africa to see if the fallout from widespread labour force disruption and job losses can be better managed.
The impact of AI on our economy, welfare and social interactions – on the human condition – is something on which techno-pessimists and techno-optimists fundamentally disagree. On the one hand, the pessimists argue that technology will create far more problems for humanity than it solves. It will take our jobs, intrude on our privacy and possibly escape our control. It could imperil our way of life. As the late Stephen Hawking commented: “AI could spell the end of the human race.” The pessimists argue that technology will become a self-perpetuating machine, with AI rendering humans increasingly obsolete. Reducing our reliance on AI is the only way to resolve what could become an existential crisis. Futuristic novels and movies perpetuate this view with their dystopian narratives.
Techno-optimists, on the other hand, believe technology will improve the world in unimaginable ways and at a scale incomprehensible to us today. It will do this by learning from itself in an exponential trajectory of mutual benefit to individuals and the economy. AI is poised to radically extend economic prosperity. Yet AI presents a conundrum: while its influence and penetration are growing, there is no clear evidence of improved productivity across the developed world.
AI cuts to the very core of our lives, deeply influencing and restructuring social relations and personal identity. In my new book The Culture of AI, I argue: “The complex ways in which people interact with new technologies fundamentally reshapes the further development of those very technologies. One of the central distinguishing features of advanced AI … is that the boundaries between humans and machines have – to a considerable extent – dissolved, which in turn promotes ever-growing opportunities for human-AI interaction in diverse robotic ecosystems.” My point is that the interfaces between humans and machines have deep implications for the way we work, live, socialise and interact on the most fundamental levels. The mouse and keyboard are on their way to becoming obsolete technologies, replaced by intelligent natural language systems that are seamlessly integrated into our daily lives.
For a mind-spinning array of examples, look to Deloitte’s 2018 Artificial Intelligence Innovation Report. The report notes that the shift from traditional machine learning to deep learning is well underway, with each succeeding layer in a neural network learning from the output of the previous layer. “It is now possible to create deep learning neural networks which operate fast enough and accurately enough to have practical real-world uses,” the report says. “Because of this, we are experiencing a paradigm shift in computing, an AI boom in which companies are spending billions to develop deep learning AI technology.”
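The layered structure the report describes can be illustrated in a few lines of Python. This is a minimal sketch, not a real trained network: the weights are illustrative constants chosen by hand, and a production system would learn them from data.

```python
def relu(values):
    # Standard activation: keep positive signals, zero out the rest.
    return [max(0.0, v) for v in values]

def dense(inputs, weights, biases):
    # One fully connected layer: each output is a weighted sum
    # of the previous layer's outputs, plus a bias term.
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

# Illustrative (untrained) weights for three stacked layers.
layer1_w = [[0.5, -0.2], [0.1, 0.8]]
layer2_w = [[0.3, 0.7], [-0.6, 0.4]]
layer3_w = [[1.0, -1.0]]

def forward(x):
    # Each layer consumes the output of the layer before it --
    # the "deep" in deep learning.
    h1 = relu(dense(x, layer1_w, [0.0, 0.0]))
    h2 = relu(dense(h1, layer2_w, [0.0, 0.0]))
    return dense(h2, layer3_w, [0.0])

print(forward([1.0, 2.0]))
```

In a real deep learning system the same stacking principle applies, but with many more layers and with weights adjusted automatically during training rather than written by hand.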
The report highlights some interesting examples. Among these is Doxel, an AI system of drones and robots that monitors every stage of a construction process and can alert managers to any potential problems. Also of note is an Israeli start-up called 3DSignals, which uses sensors to track the sounds made by machinery, with an algorithm able to alert managers to potential breakdowns or malfunctions before they happen. In health care, Corti is an early-recognition system for cardiac arrest that could provide first responders and medical staff with vital information to help save lives. MIT researchers have developed microscopic robotic devices the size of a single cell, called syncells, which could search out disease in a person’s bloodstream or be used to monitor conditions in oil and gas pipelines. The potential is mind-blowing.
But the mind-blowing dimensions of AI cut both ways. Put simply, there is no easy way of identifying in advance how new technologies based on autonomous systems will play out. There are certainly some stunning opportunities, with the potential to drastically reduce poverty, disease and war. But the risks are enormous too, as can clearly be discerned from the development of autonomous weapons systems. Moreover, the assessment of risk here must involve not only direct but also indirect threats. An example is that of insurgent groups tapping into communication satellites and aerial drone camera feeds in order to hack into military intelligence.
The debate over AI hinges very much on the assessment of risk, where the calculation is often murky and sometimes impossible. What is missing from much of the policy debate is carefully considered risk assessment. As recent governmental inquiries, such as the UK Parliament’s Select Committee on Artificial Intelligence, have recommended, the risks of AI technologies should be reported to parliament and to the public, backed by informed analysis.
Professor Anthony Elliott is Dean of External Engagement and Executive Director of the Hawke EU Jean Monnet Centre of Excellence at the University of South Australia. He is a Senior Member of King’s College, Cambridge, and Super-Global Professor of Sociology (Visiting) at Keio University, Japan. He is the author and editor of some 40 books in sociology and social theory, translated into many languages.
Professor Elliott’s new book, The Culture of AI: Everyday Life and the Digital Revolution (https://www.routledge.com/The-Culture-of-AI-Everyday-Life-and-the-Digital-Revolution-1st-Edition/Elliott/p/book/9781138230057), is published by Routledge.