The United States and its allies have constantly been surprised by major cyber operations from their adversaries. This shouldn’t happen with such frequency: Alert cyber defenders know that such attacks are possible, and after each one there have been experts who said something like, “Well, this shouldn’t be a surprise. I’ve been saying for years it was bound to happen.”
Yet defenders have been routinely staggered, whether by the 2007 cyber attacks directed at Estonia by Russia, the first incident of major national-security significance that caught the United States and NATO totally off guard; brazen Russian disruptive attacks against the Olympics or intrusions to interfere with the 2016 U.S. presidential election; or the reckless disruptions caused by NotPetya or against the Colonial Pipeline, each of which cascaded into critical-infrastructure catastrophes.
If Russia does attack Ukraine in the coming weeks, the opening salvo is likely to be with offensive cyber capabilities. If so, this can be at the same time both expected and surprising. Even defenders with warning or who can extrapolate from past trends can be caught out by the specifics: the who, when, where, how, and how bad. Surprise — both tactical and strategic — is threaded throughout every aspect of cyber conflict, even more so than for conflict in the air, on land, at sea, or in space. “The striking thing,” as Dick Betts wrote in his classic 1982 work on surprise attack, “is that in retrospect one can never quite understand” how such attacks ended up being quite so surprising. The attack at Pearl Harbor was hardly less of a surprise for being expected, and was presaged by Port Arthur in 1904 and Taranto in 1940. The Estonian defenders in 2007 knew weeks in advance that Russian nationalists were plotting, and yet the attack still had the power to shock. Cyber operations depend on deception, so they are always surprising. Still, there are ways of mitigating the consequences. Cyber surprises are inevitable, but they need not be devastating.
Understanding Surprise in Cyberspace
Despite repeatedly being surprised, the United States has almost totally ignored surprise (and related concepts like reduced warning times) in its military policies. There is no reference to surprise in the most recent Department of Defense Cyber Strategy, nor in earlier versions dating back to 2006. Military cyber doctrine has been similarly silent, other than unhelpfully noting that surprise is “germane,” and the term is also absent from U.K. cyber strategies and key NATO cyber documents.
White House documents originally put substantial attention on improving warning. The 2003 National Strategy to Secure Cyberspace has dozens of references, including that “cyber attacks can burst onto the Nation’s networks with little or no warning and spread so fast that many victims never have a chance to hear the alarms. Even with forewarning, they likely would not have had the time, knowledge, or tools needed to protect themselves.” The 2009 Cyberspace Policy Review stressed that “the Federal government should improve its ability to provide strategic warning of cyber intrusions and attacks to the President.” Improving warning does not appear in the most recent strategy, from 2018, which has a much stronger focus on the “identification and attribution of malicious cyber activity” to improve both the imposition of costs and deterrence. Scholars have also covered the likelihood of surprise, including that “surprise probably plays a larger role in cyberspace than in any other domain,” and that in cyber conflict, one element of surprise — deception — is more central than in other kinds of warfare: “[A]ttackers who fail to be deceptive will find that the vulnerabilities on which they depend will readily be patched and access vectors will be closed.” Others focus less on the likelihood of surprise than on its impact. Since states hide their operations and capabilities, rivals must use intrusive cyber operations of their own to reduce the chance of surprise, even though such defensive espionage operations might themselves be mistaken for an offensive attack.
Some have also unpacked the concept of a Cyber Pearl Harbor, which conjures up “compelling images of a ‘bolt from the blue’ surprise attack in American political and strategic culture.” Even though cyber attacks to date have been more about subversion or an intelligence contest, worsening geopolitical conditions “could entice an adversary to strike a similar, disabling blow … in the hope of a quick victory that presents America with an undesirable strategic fait accompli … . The burden of escalation would then shift to U.S. policymakers, who would have to choose war over political compromise.”
A surprise cyber attack may not be meant to be debilitating, but rather intended as a sharp jab to see if the adversary is actually serious about the geopolitical issue at stake. An attacker could also conduct a sudden cyber raid to “keep the victim reeling when his plans dictate he should be reacting,” as Betts put it, or it could be a coup de main, where the attack is the main effort to settle the military question. Other states, of course, have a reciprocal fear of such attacks from the United States.
That such an attack has not happened yet — and is therefore not reflected in the historical evidence assessed by scholars — may have less to do with the cyber capabilities themselves than with the behavior of states during the relatively peaceful decades since the end of the Cold War. Findings like “competition is primarily an intelligence contest” may lead policymakers to underestimate how states might act in extremis during major crises. It is a very 1914 kind of argument to suppose that adversaries will not burn down the internet to hurt a despised rival, preserve a regime, or defend a core national interest.
Five Meanings of Cyber Surprise
Across this rich literature, however, “surprise” is a broad and ill-defined term with five related meanings. First, that deception, concealment, and trickery are central to almost all cyber operations. Second, that cyber capabilities can lend themselves to surprise because they can be unexpected or unforeseen as a new technological capability: an unexpected target; an unforeseen intensity, impact, or timing; unforeseen trends; and unexpected means. Third, that cyber conflict is frequently marked by being sudden or fast. Fourth, that cyber attacks are frequently audacious or daring. The last meaning, but the most important for stability, is that cyber capabilities are likely to be used to attack early in a conflict, even as an opening strike. This is, after all, central to the Cyber Pearl Harbor concept. Any theory or strategy that limits itself to a subset of these meanings of surprise is likely to fall short. Deception is more relevant to tactical cyber operations than to escalation and stability. The middle three (unexpected or unforeseen, sudden or fast, and audacious or daring) combine their effects to increase the danger of a spark, the first step toward a flashpoint of instability in which competition and conflict in cyberspace become the root causes of an acute geopolitical crisis. The last (early use in conflict) drives the escalation inversion, where cyber capabilities can accelerate the rush to war.
Surprise and Stability
Surprise in cyberspace is more important than in other domains. The dynamics of cyber conflict lend themselves to surprising uses across all five meanings of surprise: cyber operations rely on deception and trickery; enable the unexpected and unforeseen; are sudden and fast, audacious and daring; and are especially useful early in a conflict. There are also significant first-use pressures, as cyber capabilities may increase the danger of a security dilemma in five ways. Because cyber capabilities are not easily observable, it is extremely difficult to assess an adversary’s order of battle or relative strength, or to detect the equivalent of tanks massing on the border. Any particular attack might have an asymmetric impact, keeping defenders on perpetual and exhausting high alert.
There is also a nearly limitless realm of the possible. Cyber capabilities can bypass fielded military forces to affect a seemingly indefinite range of elements within an adversary’s society, economy, and psychology. The pace of innovation and dependence creates countless paths to attain technical surprise, as does the use of “existing weapons and forces in new and different ways.” Even more so than in other kinds of intelligence warning, “[t]here are few limits on what can be imagined,” so defenders have less chance of assessing where a blow may fall. Further, economies, societies, and militaries are increasingly interconnected and deeply cyber-dependent, so cyber capabilities offer an attacker more opportunities to believe that they can shift the correlation of forces in their favor. Some experts assert that “[c]yber attack does not threaten crippling surprise or existential risk,” as past attacks have only disrupted computer components that can be replaced relatively quickly. Yet this misses the scope of potential future cyber operations.
With the “internet of things” and cyber-physical systems, attacks can now impact electrical grids, pipelines, and dams, objects made of concrete and steel, thereby boosting the potential impact of and opportunities for surprise attacks. The shift of the internet from primarily a communications tool between people to increasingly a control mechanism for physical systems will result in operations and outcomes far outside the expectations of academic theories and government strategies, which can still treat cyber-physical systems as an edge case. Even hard targets aren’t as hard as they might seem. Too difficult to directly hack FireEye or the Department of Justice? The Russians got both, plus thousands more, by going after SolarWinds.
States are also not limited to the capabilities that they patiently build over time, as with more conventional forces. Iran rapidly expanded its capabilities, such that Leon Panetta, the then-secretary of defense, told New York Times reporter Nicole Perlroth that the Department of Defense “was astounded that Iran could develop” sophisticated capabilities. Iran’s Gulf rivals, such as the United Arab Emirates, have used contractors to stand up a turn-key cyber command. States are more likely to deliver offensive surprises when their reach is defined not just by what they can nurture, but also by what they can buy.
Lastly, there is high potential for mistake and miscalculation. The novelty of cyber attacks means that adversaries are more likely to misjudge how their operations will be perceived by the recipient. The attacker might believe that its attack is within the norms, justified because it is a tit-for-tat reprisal, or similar to a past operation that acted as a pressure release, defusing a crisis. A rival might shrug off being the target of a disruptive operation conducted in relative peacetime (“Ah, it’s just an intelligence contest”), only to respond aggressively to a lesser attack in the midst of a crisis (“This could be the prelude to a surprise attack!”).
Cyber attacks are likely to flop (or worse, messily cascade) if they are not backed by meticulous intelligence, careful planning, and extensive testing — although these only reduce, rather than eliminate, the risks. Mistakes can take the target (and indeed, the attacker) by surprise, as happened with North Korea and Russia with WannaCry and NotPetya, respectively.
Surprise, Russia, and Ukraine
The Russian buildup in late 2021 for a possible invasion of Ukraine highlights the worrying role of cyber surprise in geopolitics. Just a year ago, the cybersecurity company Mandiant discovered an extensive Russian intrusion that had been propagated through network management software made by SolarWinds. Imagine for a moment that this intrusion were still undiscovered — or that the Russians had a similar but as yet undiscovered campaign, perhaps built using the massive Log4j vulnerability.
If the Biden administration strongly backed Ukraine and attempted to force Russian President Vladimir Putin to back down, it would be doing so under a shocking misapprehension of the actual correlation of forces, completely underestimating the strength of Putin’s hand. The United States would be held at substantial risk, and no one would know. At least, no one outside of Russia. Even with existing malware functionality, the Russian espionage team could have rebooted all infected systems at a specific time, say just after a major Putin speech warning the United States to back down. With only a trivial amount of additional effort to their SolarWinds implant, the Russian team could have uploaded far more disruptive functionality to, for example, wipe all of the targets’ data, as the far-less-capable North Korea did to Sony and Iran did to Saudi Aramco and Qatar’s RasGas. If they had worked quickly (because of a greater risk of being detected), the Russian intelligence team could have wiped data not just from the 110 or so companies and agencies that they were actively exploiting for intelligence, but from all 18,000 entities that had downloaded the compromised SolarWinds software. The psychological shock to the public and to decision-makers might successfully coerce the United States into backing down. At the very least, the entire national security apparatus would be scrambling to discover how the attack was accomplished and could not be sure that more was not to come. Stumbling to respond to the disruption, the Biden administration might not be able to assist the Ukrainians until it was too late. The European Union might need even less cyber punishment to be convinced that its best interests lie in remaining on the sidelines.
This worrying conclusion goes against almost every finding of academic research on coercion. Academics will accordingly be surprised too, as their work has not explored how cyber operations might have substantially more coercive value when executed in one-to-multitude attacks like SolarWinds that can affect thousands of targets at once, or during high-end crises like a land war in Europe, when states may be willing to take extreme risks to ensure that their cyber operations are coercive.
Military surprise in the initial phase of war usually succeeds, as Betts wrote, especially against the United States. Even so, there are steps that military professionals, intelligence officials, and policymakers could take to reduce the probability and impact of surprise. To reduce the probability of surprise, increased intelligence and warning are useful but not game-changers unless the intelligence is particularly exquisite, such as persistent access to adversaries’ home networks. Such dominance is expensive, fleeting, and in any case adds its own destabilizing pressure. More useful gains can be had by expanding defenders’ imaginations and experience through exercises, experimentation, and curiosity about future forms of cyber conflict.
U.S. and allied militaries should update policies, doctrine, and response plans with the expectation that an initial surprise attack is both likely to occur and likely to succeed. White House policies, for example, should again emphasize improved warning capabilities and not just post facto attribution and deterrence. And since nonstate actors “possess a greater range of capabilities than at any time in history,” and cyber security and technology companies routinely and agilely respond to critical threats, those strategies and doctrines should include further intelligence sharing and deal with surprise.
The United States and its cyber rivals strive to avoid surprise attacks while simultaneously maximizing their own ability to carry them out. This is solid policy in a stable geopolitical environment but exceptionally risky in an unstable one. Perhaps the only way to meaningfully slice through this dilemma is with slow, patient changes to give defenders more advantage over attackers, reducing first-strike incentives.
Jim Miller, a leading scholar-practitioner and a former undersecretary of defense for policy, believes that “a cyber surprise attack would be the least surprising of all the unsurprising ‘surprise attacks.’” Cyberspace lends itself to attacks that are deceptive, unexpected, sudden, audacious, and early in a conflict. While only so much of this can be mitigated, as Betts wrote four decades ago: “Some other problems may be more important [than preparing for surprise attack] but most of them are better understood.” Even modest reductions in the probability and impact of surprise may yield outsized gains.