Now is the time to take control of AI before it takes control of us

Ranvir S. Nayar

This week, the UN’s Educational, Scientific and Cultural Organization, UNESCO, announced that it would support 50 countries in their efforts to formulate legislation and policies to regulate the development and deployment of artificial intelligence.
This is the first time that a global, intergovernmental organization has stepped in to raise the crucial issue of how the development and use of AI should be regulated, and to stress the critical importance of ensuring that governments have a key say in these areas.
In the past decade or so, we have seen AI emerge from the labs of big tech companies and begin to enter our homes, offices, and public spaces. Already, largely unnoticed by most of us, practically every aspect of our lives is touched by AI in some way.
Certainly, a lot of this is harmless, at least in theory. It makes life a little bit easier for us as consumers and, of course, makes a lot of money for the companies that are developing and using AI.
The technology has already entered our lives and we live with it every moment of the day: when we order food online, read news on a website, buy clothes from a bricks-and-mortar store or an e-commerce site, or watch a film or television series online.
In all of these transactions and interactions, and many more, AI is used proactively to suggest what customers might want to read, watch, eat, or buy.
AI can also help doctors diagnose illnesses and plan treatments, or assist in the training of office workers through the use of simulated environments. There are many more areas in which ordinary people can be assisted by AI technology without any negative effects on their lives.
For businesses, AI is manna from heaven. It could save them a great deal of money in wages as it takes over many basic tasks such as research, writing software code, and even, in some cases, writing basic news reports.
All of these developments might leave a few million people jobless, but many bosses would justify this as a minor disruption to the job market when viewed against a global population of more than 8 billion people and the ways in which the job market has already evolved over the past few decades.
But it is not only about jobs, even if a few million of them become just the latest sacrifice on the global altar of shareholder returns and dividend incomes, which have become the operational mantras of the corporate world.
The threats posed by unchecked use of AI go far beyond the risk of unemployment. As UNESCO cautioned two years ago, there are huge ethical questions linked to the use of AI, mainly arising from the potential for the technology to embed bias, contribute to climate degradation, or threaten human rights. The organization warned, and rightly so, that the risks associated with AI have already started to compound existing inequalities in societies, resulting in further harm to already marginalized groups.
For example, what if AI is used within judicial systems in countries such as France or the US, which already suffer from deep racial divides? If AI models are used by courts and police to decide whether a suspect should be granted bail, for example, or whether a convict should be released on parole, a higher proportion of people from racial or ethnic minorities are likely to be denied justice, because the AI tools would be trained on datasets that are themselves the product of decades, if not centuries, of bias and injustice in these countries.
Similarly, there is a huge risk of AI being used to stir up panic or racial and ethnic hatred in volatile communities. There have already been instances of deep-fake technology being used to create misleading images and videos that then go viral on social media, posing extremely serious threats to law and order, and to the lives and well-being of minorities.
Each week there are dozens of examples from every corner of the world of deep-fake and AI technology being used to spread fake news, stoke hatred, and provoke hate crimes against members of vulnerable communities.
Deep-fake technology can also be used to create tensions between countries, for example by making it appear that a neighboring state is about to mount a military attack or otherwise cause harm.
In short, the examples of potential and actual misuse and abuse of AI-enabled technologies are endless.
Given all this, the use of AI must be limited strictly to what is necessary to achieve legitimate goals, which must benefit society as a whole and not just line a few people’s pockets.
Similarly, governments must ensure that AI is not used to spread hatred or trample on the human rights of the most vulnerable communities, and that the data and privacy rights of all individuals are preserved.
Technology firms must ensure they use AI and data only in ways that respect national and international laws and best practices. Every business that uses AI, and the tools built on it, must be subject to regular audits so that governments can verify compliance with the rules and norms. It is also vital to ensure that AI always remains completely under human control and is never allowed to run amok on its own.
All of these actions, and more besides, are needed from governments and other authorities. But, as is the case in every other sphere of life, few countries have all the tools, understanding, manpower, and financial resources needed to develop and implement robust systems of governance for the development and deployment of AI.
It is, therefore, critical that nations join forces immediately and work together to ensure that AI is rolled out in a manner that benefits society and does not become the master of our lives, as we have already allowed big tech to become too powerful for governments to regulate effectively.
It is imperative that we rein in the technology right now, rather than have it reign over us a few years from now.