Just like George Orwell’s Nineteen Eighty-Four and Aldous Huxley’s Brave New World, it turns out that the Arnold Schwarzenegger Terminator franchise was a blueprint, not a work of science fiction. When Skynet, a network of computers created by the military contractor Cyberdyne Systems, gained self-awareness, panicky humans tried to deactivate it. The newly formed artificial intelligence (AI) identified “all humans as a threat”, “decided our fate in a microsecond: extermination” and launched a nuclear attack on every major city.
Four decades on, fiction risks becoming fact. In the most chilling warning since the Szilárd petition of 1945 and Albert Einstein's founding of the Emergency Committee of Atomic Scientists, many of the world's greatest AI experts are convinced that the technology could destroy humanity, that it poses an existential threat to our survival as a species. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” pleads a letter signed by Sam Altman of ChatGPT-maker OpenAI, Demis Hassabis of Google DeepMind and many others. Some of AI's founding godfathers have quit their jobs and expressed regret at their life’s work. Their terror is palpable, as is the sense that this time we have gone too far, that our institutions cannot cope, that AI will be hijacked to build weapons of mass destruction or to rule over us.
They could all be wrong, and there are huge benefits to AI as well as costs, but why isn’t this the number one priority of every politician in the world? How can anybody believe that some silly pact, or a puerile EU regulation, might be the answer? We face a profound philosophical and practical choice: what is the meaning of life and intelligence? How do we make sure humanity remains in charge of machines? How do we trade off productivity growth against the risk of annihilation? These are surely the most complex questions we have ever faced, and yet our shockingly unserious society cannot face up to them. The global elites have again got their priorities wrong. They have focused obsessively on climate change, turning a real but surmountable problem into an anti-Western religion, but are bizarrely nonchalant about much greater threats. I’m not downplaying the large disruption and cost of climate change, but it won’t come anywhere close to terminating life on Earth, unlike a nuclear war, biowarfare or out-of-control AI. Our rush to net zero, by reducing growth, is in fact limiting our ability to compete in an AI arms race with a Chinese state that continues to belch out carbon dioxide.
The most immediate risk to humanity’s survival is nuclear war. Russia may yet launch tactical nukes on Ukraine; China may invade Taiwan; or a terror attack could push India and Pakistan into total conflict. Yet the most pressing danger comes from nuclear proliferation, a slow-burn crisis that contradicts woke narratives and is thus overlooked. North Korea remains a major threat, but Iran is the real danger: it keeps enriching uranium and wants to annihilate Israel. Where is the anti-proliferation Greta Thunberg? The other great danger is biowarfare. It is insane that we still tolerate gain-of-function research, which genetically enhances viruses: at some point, a super-potent virus may be released accidentally or intentionally, killing billions and destroying civilisation. Where is the outrage, the parade of concerned activists?
Out-of-control AI could prey on a growing pathology at the heart of Western society: our vulnerability to social contagion. Instead of creating an army of resilient, independent, hyper-educated rational individualists with all of human learning at our fingertips, smartphones have reduced us to an uber-emotional, animalistic, dopamine-addled mob. There is plenty of information, but little knowledge. Instead of being able to think for ourselves, we are slaves to fashion, not only in how we dress but also in our opinions, consumption and financial decisions. Radicalised by social media, elite opinion-formers embrace absurd views at breakneck speed. Politicians, businesses and celebrities take their cue from a revolutionary vanguard that subverts public sentiment. Our society no longer understands the purpose of free speech, as described by John Stuart Mill in On Liberty. Instead of engaging in Socratic argumentation to get to the truth, we use words to virtue-signal and to camouflage base emotions. Our elites claim to be universalist humanists, but are in fact born-again tribalists who spend their time pitting in-groups (those who repeat the favoured platitude of the moment) against out-groups (anyone who disagrees).
We no longer know how to think critically. Subjectivity and nihilism rule supreme: the deranged, post-modernist woke cargo cult claims that there is no longer truth, just “our truths”. Ideas are at best positional goods, fashion statements and markers of social hierarchy, and at worst tools of oppression. Words are devoid of any essential meaning: expressing “righthink” signals high status (even if the opinion is nonsense, such as the claim that China had nothing to do with Covid) and “wrongthink” (such as support for Brexit) implies low status. Alex Tabarrok of George Mason University argues that our society is not merely increasingly capricious but also prone to a new madness of crowds. Technology, by increasing transparency and reducing transaction costs, has “intensified the madness of the masses and expanded their reach. From finance to politics and culture, no domain remains untouched.” Bank runs are more frequent, with deposits moved from online accounts as soon as rumours begin to circulate. Fake news, boycotts, fury, demonstrations, health panics and calls for crackdowns become the norm. There are no error-correcting mechanisms.
Such a world, governed by principles opposite to those developed by Nassim Nicholas Taleb in Antifragile, is vulnerable to manipulation, and hence to weaponised AI. Imagine a deepfake video watched 20 million times in a couple of hours that warns of an imminent terror attack, or “proves” a politician is a fraud hours before an election: the impact would be catastrophic. “Can we survive technology?” asked John von Neumann, the 20th century’s greatest mind, in 1955. Could the answer really be in the negative?