A conference will soon be held in Geneva to discuss control mechanisms for combat robots with artificial intelligence. It has been proposed to prohibit autonomous systems that choose a target and destroy it themselves, without a human order. These technologies have proven effective in military conflicts, which makes them attractive to criminals and terrorists. What real threats do war robots pose?
The sixth conference to review the operation of the Convention on Certain Conventional Weapons will be held in Geneva from 13 to 17 December under the auspices of the UN. The event is being called an important milestone in international negotiations on lethal autonomous weapons systems (LAWS) or, more simply, combat robots with artificial intelligence (AI). The topic grows more pressing every year: drones with autonomous control, capable of killing without the participation of a human operator, are already being used on battlefields.
Ahead of the Geneva meeting, the nonprofit Future of Life Institute (FLI) released a short video on the threat of artificial intelligence run amok. Within a few days, the video had gained hundreds of thousands of views. Its authors send a clear message: robots developed for the military may end up in the hands of terrorists and criminals. The video is styled as news reports from streets where drones run rampant, and is filled with lurid headlines about fictional incidents in which new weapons technologies slip out of control.
The authors of the video warn that the use of AI against people is already a reality. In one segment, a drone fires from a parked car at voters at a polling station. A similar weapon was used by Israeli intelligence services in the assassination of Iranian nuclear scientist Mohsen Fakhrizadeh last year. Another segment shows a bank robbery carried out by four-legged robots armed with rifles, which in appearance resemble the robotic dogs made by Boston Dynamics.
The FLI is confident that there is only one defense against such a nightmare: the worldwide, legally binding ban on autonomous weapons with artificial intelligence proposed by the International Committee of the Red Cross (ICRC). The first session of the UN Open-ended Group of Governmental Experts on LAWS under the Inhumane Weapons Convention was held in Geneva in November 2017, and informal discussions on the issue have been going on since 2013. But there are still no results. The main problem is that many countries are pursuing such developments, and it is almost impossible to oblige them all to abandon a promising type of weapon.
According to expert estimates, the United States, Russia, China, India, Great Britain and many other countries are the main opponents of the ban; they consider the existing international standards sufficient. At the same time, many terrorist and criminal groups already make extensive use of armed drones, among them Mexican drug cartels, Islamic State militants (an organization banned in Russia) and Houthi rebels in Yemen.
Alexander Perendzhiev, associate professor in the Department of Political Science and Sociology at the Plekhanov Russian University of Economics and a member of the Officers of Russia expert council, believes that the emergence of intelligent military systems on the black market poses a huge threat to all of humanity. "When we talk about a highly effective and secret weapon, there is a danger of its misuse through the illegal market in military equipment. That market is part of the global shadow economy.
The appearance of 'killer robots' on it is no less a threat than the risk of nuclear weapons falling into the hands of terrorists. You can put an equal sign between these two threats," says Perendzhiev.
According to him, conflicts in different parts of the world are stoked to support the infrastructure of the criminal economy. This was the case during US military operations in Iraq, Libya, Syria, Afghanistan and many other places. The upcoming conference in Geneva, the expert warned, could turn into an advertisement for military artificial intelligence. He is convinced that an international system must be created to counter the leakage of modern military developments and to form a united world anti-terrorist front. "It would be desirable to hold such events without wide publicity. As it stands, such discussions demonstrate the attractiveness of this weapon to criminal structures," says Perendzhiev.
In his opinion, the adoption of a single convention on lethal autonomous weapons systems will not yield results if there are no institutions responsible for its implementation. The expert emphasizes that countries should now form their own structures to counter the penetration of innovative military equipment into the criminal arms markets. “Without these structures, we will not unite at the global level. Even if some international organization is created, it should still rely on national structures,” Alexander Perendzhiev is convinced.
Aleksey Firsov, head of the Platforma Center for Social Design, believes that in the foreseeable future, “the artificial intelligence system cannot set itself the task of attacking humanity, because it does not have such a motive.”
"A robot by itself cannot become a killer. There are two ways it could start killing. First, a malfunction could occur in its software, causing it to behave unpredictably, which can lead to the deaths of people. In that sense, unmanned vehicles pose no less risk to human life than combat systems. The second way robots could go over to the evil side is their use by criminal, shadowy groups," the expert explained.
Mikhail Pashkin, chairman of the Moscow Interregional Trade Union of the Police and the National Guard, agrees with him. He believes that even if a murder or terrorist attack is committed by a combat robot, the motive for the crime will make it possible to trace its organizers and perpetrators. He is convinced that the state will always have more advanced technologies at its disposal than those who violate the law. "This is a double-edged sword: the state monitors the emergence of new technologies among criminals. In the future, crimes will be committed mainly in the intellectual sphere, while physical violence is tracked down rather quickly," says Pashkin.
Even if a murder is committed by a drone, then "with the help of technology, it is possible to track who controlled or programmed it." "The task for the drone is set by a person, not artificial intelligence. In any case, investigators will look at who benefits and in the end will find the customer. Technologies are being improved on both sides — that has always been the case," the interlocutor recalled.
Firsov is convinced that for now humanity remains in full control of robotics and AI, but that control is becoming harder to maintain: "Artificial intelligence is not a risk at the moment. But we can see how the fragility of these systems, and their autonomy, are growing. There are many digital platforms, and there are problems of interfacing between them. At these junctions, failures and completely unpredictable behavior can occur."
Moreover, the interlocutor added, technologies are becoming more accessible, and serious illegal Internet communities are forming that set themselves against the law. "Therefore, there is a threat of intrusion into these systems in order to reprogram them and use them against people. Such risks are poorly predictable and poorly manageable. This could become a key problem," Firsov warned.