AI regulation battle is only just beginning

Joseph Dana

Given the pace of development in artificial intelligence in recent years, it is remarkable that the US has only just released clear regulations concerning the technology. At the end of October, President Joe Biden issued an executive order to ensure “safe, secure, and trustworthy artificial intelligence.” The directive sets out standards across the spectrum of AI safety, including new privacy safeguards designed to protect consumers. While Congress has yet to enact comprehensive laws governing the use and development of AI, the executive order is a much-needed step toward sensible regulation of this rapidly developing technology.
Casual observers might be surprised to learn that the US did not already have such AI protections on the books. A gathering of 28 governments for the AI Safety Summit in the UK last week revealed that the rest of the world is even further behind. Those attending the forum, held at Bletchley Park, the historic former codebreaking base, managed to agree to work together on safety research to avert the “catastrophic harm” that could come from AI. The declaration, whose signatories include the US, China, the EU, Saudi Arabia and the UAE, was a rare diplomatic coup for the UK, but it was light on detail. The US used the event to present its own new guardrails as a model for the rest of the world to follow.
You do not need a degree in computing to understand that AI is a crucial part of one of the most profound technological shifts humanity has experienced. AI has the power to change how we think and educate ourselves. It can change how we work and make certain jobs redundant. To deliver such results, AI systems require massive amounts of data, generally collected from the open internet. Chances are that some of your data is being fed into the large language models that power AI platforms such as ChatGPT.
This is just the tip of the iceberg. AI is being deployed in Israel’s operations in Gaza to help make decisions of life and death. Israel’s Military Intelligence Directorate said that the military is using AI and other “automated” tools to “produce reliable targets quickly and accurately.” One unnamed senior officer said the new AI-powered tools are being used for the “first time to immediately provide ground forces in the Gaza Strip with updated information on targets to strike.”
This is a grave escalation in the use of AI, not just for Palestinians but also for the international community. The technology being tested in Gaza will almost certainly be exported as part of Israel’s large and powerful weapons technology sector. Put simply, the AI algorithms used to attack Palestinian targets could soon crop up in other conflicts, from Africa to South America.
Biden’s executive order specifically addresses issues related to AI safety, consumer protection, and privacy. The directive requires safety assessments of new and existing AI platforms, equity and civil rights guidance, and research on AI’s impact on the labor market. Some AI companies will now be required to share safety test results with the US government. The Commerce Department has been directed to create guidance for AI watermarking and to establish a cybersecurity program that develops AI tools to help identify flaws in critical software.
While the US and other Western countries have been slow to draft comprehensive AI regulations, there has been some movement in recent years. Earlier this year, the National Institute of Standards and Technology outlined a comprehensive AI risk management framework. That document became the basis for the Biden administration’s executive order. Critically, the administration has empowered the Commerce Department, which houses NIST, to help implement aspects of the order.
The challenge now will be securing buy-in from leading US technology companies. Without their cooperation and a legal framework to punish companies that fail to follow the rules, Biden’s order will not amount to much.
There is still a lot of work to be done. Technology companies have largely been able to develop with little oversight over the past two decades. This is partially due to the interconnected world of tech, in which firms have created new products and services outside the US. Amazon’s groundbreaking cloud computing technology, for example, was largely developed by a team in Cape Town, South Africa, far from the reach of US regulators.
With honest buy-in from leading companies, the Biden administration could seek more comprehensive laws and regulations. Direct government involvement in technology always runs the risk of stifling innovation. Yet there is a clear opportunity for smaller countries with knowledge economies to step in. Countries such as Estonia and the UAE, which have invested in their knowledge economies and have small populations and nimble regulatory environments, can follow Biden’s lead with AI safeguards. This would have a powerful effect in cities such as Dubai, where multinational tech companies have set up regional offices. Because there is less red tape in these smaller countries, AI regulations can be pushed through quickly and, perhaps more importantly, amended if they stifle development too aggressively.
Given the hyper-connected world of technology development, the international community cannot wait for larger countries or blocs such as the US and the EU to push through legislation first. Instead, emerging markets with their own tech economies to consider should push ahead with regulations that work for their needs.
The development of AI technology is happening at a remarkable pace. Because the technology is so essential to the overall tech sector, smaller nations do not have the luxury of waiting for world leaders to act first. It is time to lead by example, and AI regulation is an ideal place to start.