WASHINGTON (Defenseone): The U.S. military, whose wargames generally feature fictitious adversaries that closely resemble today’s Russian and Chinese forces, must start training for the faster-moving conflicts enabled by artificial intelligence and other emerging capabilities, two top military leaders said this week. And the ersatz enemies in these games should be unconstrained by U.S. ethical limits on AI in combat.
“The speed with which our adversary will likely engage, it’s going to be faster than anything we’ve seen,” said Lt. Gen. Dennis Crall, who heads the Joint All Domain Command and Control, or JADC2, effort for the Joint Chiefs of Staff.
And unlike the United States, the enemy may well use armed robots programmed to fire without human oversight, he said at the Defense One Tech Summit.
“They may just simply put a machine-only solution to a firing solution, which may have errors and mistakes. And maybe they’ll take that risk.”
U.S. forces have begun to experiment with AI tools and advanced networking to see how they can speed up operations. The Army’s Project Convergence experiment has become perhaps the biggest and most important, tying in other services as well as allies. The first demonstration, which took place last fall, showed that better digital connections between weapons coupled with AI could reduce from minutes to seconds the time it took to identify and take out a target.
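Neither the Army nor Defense One has published the underlying architecture of that sensor-to-shooter chain. Purely as an illustration of why replacing human hand-offs with machine-to-machine data passing compresses the timeline, here is a toy Python model; every stage name and latency figure below is a hypothetical placeholder, not data from Project Convergence.

```python
# Toy model of a sensor-to-shooter chain. All stage names and latencies are
# hypothetical placeholders used only to show why automating hand-offs
# compresses the overall timeline; they are not figures from Project Convergence.

# Each chain is a list of (stage, seconds) pairs.
MANUAL_CHAIN = [
    ("sensor detects target",            30),
    ("operator reads and interprets",    90),
    ("voice/manual relay to fires cell", 120),
    ("target vetted and assigned",       150),
    ("firing unit receives mission",     60),
]

AI_ASSISTED_CHAIN = [
    ("sensor detects target",                 30),
    ("automated target recognition",           3),
    ("machine-to-machine relay to fires cell", 1),
    ("human confirms engagement decision",    10),
    ("firing unit receives mission",           1),
]

def total_seconds(chain):
    """Sum the per-stage latencies for one pass through the chain."""
    return sum(seconds for _, seconds in chain)

if __name__ == "__main__":
    for name, chain in [("manual", MANUAL_CHAIN), ("AI-assisted", AI_ASSISTED_CHAIN)]:
        print(f"{name:12s}: {total_seconds(chain):4d} s total")
        for stage, seconds in chain:
            print(f"  {stage:40s} {seconds:4d} s")
```

The toy numbers matter less than the structure: the human interpretation and relay steps dominate the manual chain, so automating those hand-offs is where the minutes-to-seconds compression comes from.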
Army Gen. John Murray, the head of Army Futures Command, called those results encouraging and said they prove out the Army’s thesis that better digital connectivity and AI can indeed accelerate operational tempo. But Murray added that it’s time to better prepare for advanced adversaries that have similar capabilities.
This fall’s Project Convergence experiment will feature a fictitious adversary that bears a strong resemblance to China — today’s China, the one that is still working to develop and implement emerging technology. Murray said that’s good enough for now but not for long.
“Artificial intelligence is coming to the battlefield, whether it’s with the United States military [or] with a potential opponent,” said Murray in a pre-taped interview that aired Wednesday. When asked if the United States should begin to wargame against adversaries that are also using AI to accelerate their operations, he said, “It’s something we should be thinking through because I do think there are nations out there that do not have the same ethical underpinning that the U.S. military [has]. Certain militaries…don’t have that ethical underpinning. So that does concern me. And I do lose sleep over that…We’re gonna have to think about that,” he said.
Crall said during a Monday session that while the U.S. does integrate some advanced, adversarial AI capability into its wargaming, “I think we need to do more of it, I think we need to do it fast. But yes, you know, we may not like the answer.”
Crall, too, said that AI on the battlefield will likely shrink the time available for decision-making. “One thing that we’re going to have to realize is that the time that we had to make these decisions has all but evaporated, that the real battlefield calculus has changed. So our [observe, orient, decide and act or] OODA loop, our decision making cycle, really has to change significantly. And if it’s not aided by AI, machine and human learning and that interface, we will perpetually be behind.”
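Crall’s point about the OODA loop is about relative tempo: the side that completes observe-orient-decide-act cycles faster is always reacting to a fresher picture. As a rough, hypothetical sketch of that arithmetic (the per-stage times below are invented for illustration, not doctrine or measured data):

```python
# Rough illustration of OODA-loop tempo. Per-stage durations are invented
# placeholders; the point is only that the side with the shorter cycle
# completes more decision cycles in the same window.

OODA_STAGES = ("observe", "orient", "decide", "act")

def cycle_time(stage_seconds):
    """Length of one full observe-orient-decide-act cycle, in seconds."""
    return sum(stage_seconds[s] for s in OODA_STAGES)

def cycles_completed(stage_seconds, window_seconds):
    """How many full cycles fit inside a fixed engagement window."""
    return window_seconds // cycle_time(stage_seconds)

# Hypothetical: a largely manual staff process vs. one where AI assistance
# shortens the orient and decide stages.
manual      = {"observe": 60, "orient": 300, "decide": 240, "act": 120}
ai_assisted = {"observe": 60, "orient": 30,  "decide": 20,  "act": 120}

if __name__ == "__main__":
    window = 3600  # one hour of engagement, in seconds
    for label, stages in [("manual", manual), ("AI-assisted", ai_assisted)]:
        print(f"{label:12s}: cycle {cycle_time(stages):4d} s, "
              f"{cycles_completed(stages, window)} cycles per hour")
```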
Crall said the military needs to train more rigorously for the likelihood that battlefield networks will fail or be spoofed. Past exercises may have glossed over networking difficulties in order to test broader concepts such as coordinated fires, but future ones, beginning this year, will probe network health much more rigorously. (A notional sketch of that kind of degraded-network testing appears after his remarks below.)
“In order to do that, you’ve got to have the range and space,” he said. “You’ve got to have the exquisite intelligence to put on top of that to make sure that you’re really looking at what you believe adversaries can do to you. And then you’ve got to have the will to handle the results that come out [of that experiment] and say, ‘You know what, you’d be broken,’ or ‘They would take this away from you. You’re just not going to be able to white card or excuse that away. So here’s your new problem set.’
“I think we’ve gone quite far in looking at ways in which we can pressure test some of these discrete parts of the network. And we’ll be embarking on that before this calendar year is up,” he said.
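Crall did not describe the tooling behind that pressure testing, and the details of such exercises are not public. Purely as a notional sketch of what injecting degraded-network conditions into a simulation might look like, the fragment below randomly drops, delays, or corrupts messages on a toy message bus; the failure modes and rates are assumptions, not anything drawn from JADC2 testing.

```python
import random

# Notional fault injection for a toy message bus: each message may be dropped,
# delayed, or corrupted (a stand-in for spoofing) before delivery. Failure
# modes and rates are illustrative assumptions, not JADC2 test parameters.

DROP_RATE = 0.15     # fraction of messages lost outright
DELAY_RATE = 0.25    # fraction of messages that arrive late
CORRUPT_RATE = 0.10  # fraction of messages altered in transit

def degrade(message, rng):
    """Return (status, payload) after applying at most one random degradation."""
    roll = rng.random()
    if roll < DROP_RATE:
        return "dropped", None
    if roll < DROP_RATE + DELAY_RATE:
        return "delayed", message
    if roll < DROP_RATE + DELAY_RATE + CORRUPT_RATE:
        return "corrupted", {**message, "grid": "SPOOFED"}
    return "delivered", message

if __name__ == "__main__":
    rng = random.Random(0)
    traffic = [{"id": i, "grid": f"18SUJ{i:04d}"} for i in range(20)]
    for msg in traffic:
        status, payload = degrade(msg, rng)
        print(f"msg {msg['id']:02d}: {status:9s} {payload}")
```

An exercise harness built along these lines forces planners to confront the “you’d be broken” outcomes Crall describes rather than white-carding them away.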
Crall said better wargames will help the United States stay ahead of adversaries, but the advantage isn’t as large as he would like. “If we had had this conversation 10 or 15 years ago and we were really looking to bring on, you know, machine and human interface at the decision [level]— getting that decision cycle to work faster and more completely with more information and more assurance of that conclusion—that would really put us at an advantage. Today, I honestly believe 15 years later, this is a necessity.”