Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability

An Arms Control Association Report


Important Report by Michael Klare.

The preface and executive summary appear below; a link to the complete report is provided at the end.

Preface

As I was beginning work on this document, I attended the Kalaris Intelligence Conference at Georgetown University in September 2019. Among the featured speakers at the conference, which focused on the military applications of artificial intelligence (AI), was Lt. Gen. Jack Shanahan, then-director of the Pentagon’s Joint Artificial Intelligence Center (JAIC). After expounding for 30 minutes on the benefits of utilizing AI for military purposes, Shanahan opened the floor for questions. Quickly raising my hand, I inquired, “I understand your enthusiasm about exploiting the benefits of AI, but do you have any doubts about employing AI in computerized nuclear command-and-control systems?”

“You will find no stronger proponent of the integration of AI capabilities writ large into the Department of Defense,” he responded, “but there is one area where I pause, and it has to do with nuclear command and control.” Given the immaturity of today’s technology, he cautioned, “We have to be very careful. [You need to] give us a lot of time to test and evaluate.”

This dichotomy between the impulse to weaponize AI as rapidly as possible and the deep anxiety about the risks of doing so runs throughout the official discourse on what are called “emerging technologies”—which, in addition to artificial intelligence, include robotics, autonomy, cyber, and hypersonics. Proponents claim that the military utilization of these technologies will provide U.S. military forces with a significant advantage in future wars against other well-armed major powers. At the same time, analysts within and outside the defense establishment have warned about potentially catastrophic consequences arising from their indiscriminate use.

The same dichotomy arises, for example, in the Final Report of the National Security Commission on Artificial Intelligence, submitted to Congress and the White House in February 2021. “Our armed forces’ competitive military-technical advantage could be lost within the next decade if they do not accelerate the adoption of AI across their missions,” the report warns in its opening pages. To ensure this does not occur, the armed forces must “achieve a state of military AI readiness by 2025.” Much of the rest of the 756-page report focuses on proposals for achieving this status—many of which have since been incorporated into legislation or Pentagon directives. But readers who delve deeper into the report will find misgivings of the sort expressed by General Shanahan.

“While the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit,” the report states, “the unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability.” In recognition of this danger, the report devoted four pages to a few modest steps for reducing these risks, but buried them in a long list of recommendations for accelerating the weaponization of AI.

We at the Arms Control Association believe that a better balance is needed between appeals for the military utilization of emerging technologies and assessments of their destabilizing and escalatory dangers. While not denying that certain advanced technologies may provide potential military benefits, this primer aims to balance the scales with a thorough and rigorous appraisal of the likely downsides of such utilization. In particular, it focuses on the threats to “strategic stability” posed by the military use of these technologies—that is, the risk that their use will result in the accidental, unintended, or premature use of nuclear weapons in a great-power crisis.

By publishing this report, we aim to better inform policymakers, journalists, educators, and members of the public about the race to weaponize emerging technologies and the dangers inherent in doing so. While the media and the U.S. Congress have devoted much attention to the purported benefits of exploiting cutting-edge technologies for military use, far less has been said about the risks involved. Hopefully, this primer will help overcome this imbalance by illuminating the many dangers inherent in the unconstrained exploitation of these technologies.

The primer is organized into six chapters, each based on an article that originally appeared in ACA’s flagship journal, Arms Control Today (ACT).

Chapter 1, “The Challenges of Emerging Technologies,” introduces the concept of “emerging technologies” and summarizes the debate over their utilization for military purposes and their impact on strategic stability. It highlights the centrality of artificial intelligence in many of these advances, particularly the development of autonomous or “unmanned” weapons systems. Chapter 1 also provides a brief overview of the four technologies given close examination in this report: autonomous weapons systems, hypersonic weapons, cyberweapons, and automated battlefield decision-making systems. This chapter is based on an article that first appeared in the December 2018 issue of ACT.

Chapter 2, “Autonomous Weapons Systems and the Laws of War,” focuses on lethal autonomous weapons systems. Devices of this sort combine combat platforms of various types—planes, tanks, ships, and so on—with AI software enabling them to survey their surroundings, identify possible enemy targets, and, under certain predetermined conditions, independently decide to attack those targets. This chapter identifies the types of unmanned weapons now being developed and deployed by the major powers and discusses the moral and ethical objections to their use, as well as their potential conflict with the laws of war. This chapter is based on an article that first appeared in the March 2019 issue of ACT.

Chapter 3, “An ‘Arms Race in Speed’: Hypersonic Weapons and the Changing Calculus of Battle,” examines hypersonic weapons, or projectiles that fly at more than five times the speed of sound (Mach 5). Projectiles of this sort appeal to military officials given their speed and maneuverability, but also pose a threat to strategic stability by endangering key defensive assets of nuclear-armed states, possibly leading to the premature use of nuclear weapons. This chapter is based on an article that first appeared in the June 2019 issue of ACT.

Chapter 4, “Cyber Battles, Nuclear Outcomes? Dangerous New Pathways to Escalation,” looks at cyberspace and the dangers arising from the offensive use of cyberweapons in a great-power conflict. As the chapter suggests, a cyberattack on an adversary’s nuclear command, control, and communications systems during such a crisis might lead the target state to believe it faces an imminent nuclear attack and so prompt it to launch its own nuclear weapons. This chapter is based on an article that first appeared in the November 2019 issue of ACT.

Chapter 5, “‘Skynet’ Revisited: The Dangerous Allure of Nuclear Command Automation,” considers the implications of automating combat decision-making systems. While such systems—the Pentagon’s Joint All-Domain Command-and-Control (JADC2) enterprise, for example—could theoretically help battlefield commanders cope with the deluge of incoming information they are often confronted with, they might also usurp the role of humans in combat decision-making, leading to accidental or inadvertent escalation. This chapter is based on an article that first appeared in the April 2020 issue of ACT.

Finally, Chapter 6, “A Framework Strategy for Reducing the Escalatory Dangers of Emerging Technologies,” summarizes the analyses articulated in the first five chapters and provides an overarching strategy for curtailing the indiscriminate weaponization of emerging technologies. While no single approach can adequately meet a challenge of this magnitude, a constellation of targeted measures—ranging from awareness-raising and unilateral actions to Track 2 and Track 1.5 diplomacy, strategic stability talks, confidence-building measures, and formal agreements—could, in time, slow the pace of weaponization and bolster strategic stability. This chapter is based on an article that first appeared in the December 2020 issue of ACT.

As General Shanahan indicated in 2019, the initiation of nuclear combat represents the “ultimate human decision.” During the Cold War, the world’s top leaders came face-to-face with the risk of Armageddon, prompting significant arms control efforts to reduce that risk. Today, however, developments in geopolitics and technology are again increasing the danger of nuclear weapons use. We hope that this primer will help readers understand the technological aspects of this danger and spur them to advocate for reasonable limitations on the military use of destabilizing technologies.

Executive Summary

Increasingly in recent years, the major powers have sought to exploit advanced technologies—artificial intelligence (AI), autonomy, cyber, and hypersonics, among others—for military purposes, with potentially far-ranging, dangerous consequences. Many analysts believe that, much as when chemical and nuclear technologies were first applied to warfare, the military utilization of AI and other such “emerging technologies” will revolutionize warfare, making obsolete the weapons and strategies of the past. In accordance with this outlook, the U.S. Department of Defense is allocating ever-increasing sums to research on these technologies and their application to military use, as are the militaries of the other major powers.

But even as the U.S. military and those of other countries accelerate the exploitation of new technologies for military use, many analysts have cautioned against proceeding with such haste until more is known about the inadvertent and hazardous consequences of doing so. Analysts worry, for example, that AI-enabled systems may fail in unpredictable ways, causing unintended human slaughter or uncontrolled escalation.

Of particular concern to arms control analysts is the potential impact of emerging technologies on “strategic stability”—a condition in which nuclear-armed states eschew the first use of nuclear weapons in a crisis. The introduction of weapons employing AI and other emerging technologies could endanger strategic stability by blurring the distinction between conventional and nuclear attack, leading to the premature use of nuclear weapons.

Animated by such concerns, arms control advocates and citizen activists in many countries have sought to slow the weaponization of AI and other emerging technologies or to impose limits of various sorts on their battlefield employment. For example, states-parties to the Convention on Certain Conventional Weapons (CCW) have considered proposals to ban the development and deployment of lethal autonomous weapons systems—or “killer robots,” as they are termed by critics. Other approaches to the regulation of emerging technologies, including a variety of unilateral and multilateral measures, have also advanced in recent years.

AI and Autonomous Weapons Systems

Among the most prominent applications of emerging technologies to military use is the widespread introduction of autonomous weapons systems— devices that combine AI software with combat platforms of various sorts (ships, tanks, planes, and so on) to identify, track, and attack enemy targets on their own. Typically, these systems incorporate software that determines the parameters of their operation, such as the geographical space within which they can function and the types of target they may engage, and under what circumstances.
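To make the idea of software-defined operating parameters concrete, here is a deliberately simplified sketch in Python. It is not drawn from any actual weapons program; every class, field, and rule in it is invented for illustration. It shows how a geofence, a set of permitted target classes, and a human-confirmation requirement could be encoded as hard constraints that must all hold before an engagement is authorized.

```python
# Toy illustration only: every name and rule here is invented. Real systems
# are classified and vastly more complex; this merely shows how operating
# parameters might be encoded as hard constraints on an autonomous system.
from dataclasses import dataclass

@dataclass(frozen=True)
class OperatingParameters:
    geofence: tuple                   # (lat_min, lat_max, lon_min, lon_max)
    permitted_classes: frozenset      # e.g. frozenset({"radar"})
    require_human_confirmation: bool  # a "human on the loop" switch

def engagement_allowed(params, lat, lon, target_class, human_confirmed):
    """Authorize an engagement only if every configured constraint holds."""
    lat_min, lat_max, lon_min, lon_max = params.geofence
    inside_geofence = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    class_permitted = target_class in params.permitted_classes
    confirmed = human_confirmed or not params.require_human_confirmation
    return inside_geofence and class_permitted and confirmed

# Example: a system confined to one patrol box, limited to radar targets,
# and required to wait for human sign-off before any engagement.
params = OperatingParameters((34.0, 35.0, 128.0, 129.0),
                             frozenset({"radar"}), True)
print(engagement_allowed(params, 34.5, 128.5, "radar", human_confirmed=False))  # False
print(engagement_allowed(params, 34.5, 128.5, "radar", human_confirmed=True))   # True
```

The policy debate turns on whether mechanical checks of this sort can ever capture the contextual judgments of distinction and proportionality that the laws of war demand.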

At present, each branch of the U.S. military, along with the forces of the other major powers, is developing—and in some cases fielding—several families of autonomous combat systems, including unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned surface vessels (USVs), and unmanned undersea vessels (UUVs).

The U.S. Navy, for example, intends to employ a fleet of USVs and UUVs to conduct reconnaissance operations in contested areas and, if war breaks out, launch antiship and land-attack missiles against enemy targets. The U.S. Air Force has embraced a “loyal wingman” approach, whereby armed UAVs will help defend manned aircraft when flying in contested airspace by attacking enemy fighters. The U.S. Army seeks to reduce the dangers to its frontline troops by developing a family of robotic combat systems, including, eventually, a robotic tank. Russian and Chinese forces are developing and deploying unmanned systems with similar characteristics.

The development and deployment of lethal autonomous weapons systems like these raise significant moral and legal challenges. To begin with, such devices are being empowered to employ lethal force against enemy targets, including human beings, without significant human oversight—a move that runs counter to the widely shared moral and religious principle that only humans can take the life of another human. Critics also contend that the weapons will never be able to abide by the laws of war and international humanitarian law, as spelled out in the Hague Conventions of 1899 and 1907 and the Geneva Conventions of 1949. These instruments require that warring parties distinguish between combatants and non-combatants when conducting military operations and employ only as much force as required to achieve a specific military objective. Proponents of autonomous weapons claim that the systems will, in time, prove capable of making such distinctions in the heat of battle, but opponents insist that only humans possess this ability, and so all such devices should be banned.

In recognition of these dangers, a concerted effort has been undertaken under the aegis of the CCW to adopt an additional protocol prohibiting the deployment of lethal autonomous weapons systems (LAWS). Because the CCW operates by consensus and several states-parties have opposed such a measure, proponents of a ban are exploring other routes to prohibition, such as an international treaty under UN General Assembly auspices. Some members of the European Union have also proposed a non-binding code of conduct covering LAWS deployment, requiring continuous human supervision of their use in combat.

Hypersonic Weapons

Hypersonic weapons are usually defined as missiles that can travel at more than five times the speed of sound (Mach 5) and fly at lower altitudes than intercontinental ballistic missiles (ICBMs), which also fly at hypersonic speeds. At present, the United States, China, Russia, and several other countries are engaged in the development and fielding of two types of hypersonic weapons (both of which may carry either nuclear or conventional warheads): hypersonic glide vehicles (HGVs), unpowered projectiles that “glide” along the Earth’s outer atmosphere after being released from a booster rocket; and hypersonic cruise missiles (HCMs), which are powered by high-speed air-breathing engines, called “scramjets.”
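To convey what Mach 5 implies for warning and decision time, consider a back-of-envelope calculation (ours, not the report’s). The short Python sketch below uses the sea-level speed of sound, roughly 343 meters per second, purely for order-of-magnitude scaling; the true value varies with altitude.

```python
# Back-of-envelope arithmetic, not from the report: rough flight times at
# hypersonic speeds. The speed of sound varies with altitude; the sea-level
# value (343 m/s) is used here purely for order-of-magnitude scaling.
SPEED_OF_SOUND_M_PER_S = 343.0

def flight_time_minutes(distance_km, mach):
    """Time to cover distance_km at a constant given Mach number."""
    speed_m_per_s = mach * SPEED_OF_SOUND_M_PER_S
    return distance_km * 1000.0 / speed_m_per_s / 60.0

for mach in (5, 10, 20):
    print(f"2,000 km at Mach {mach:2d}: ~{flight_time_minutes(2000, mach):.0f} min")
# Prints roughly 19, 10, and 5 minutes. Unlike an ICBM's predictable arc,
# a maneuvering glider also keeps its target ambiguous for much of that time.
```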

Weapons of these types possess several capabilities that make them attractive to military officials. Due to their high speed and superior maneuverability, hypersonic missiles can be used early in a conflict to attack high-value enemy assets, such as air-defense radars, missile batteries, and command-and-control (C2) facilities. Since hypersonic missiles fly closer to the Earth than ICBMs and possess greater maneuverability, they may be capable of evading anti-missile systems designed to work against other types of offensive weapons.

All three major powers have explored similar types of hypersonic missiles, but their strategic calculations in doing so appear to vary: The United States currently seeks such weapons for use in a regional, non-nuclear conflict, whereas China and Russia appear to be emphasizing their use in nuclear as well as conventional applications.

The U.S. Air Force has undertaken the development of two such missiles for use in a regional context: the Air-Launched Rapid Response Weapon (ARRW), slated to be the first U.S. hypersonic weapon to enter service, and the hypersonic attack cruise missile (HACM). Concurrently, the U.S. Army and Navy have been working jointly on a common hypersonic boost-glide vehicle for use by both services, along with booster rockets to carry the HGV into the atmosphere. Russia has deployed the nuclear-armed Avangard HGV on a number of its SS-19 Stiletto ICBMs, while China has tested the Dongfeng-17 (DF-17), a medium-range ballistic missile fitted with a dual-capable (nuclear or conventional) HGV warhead.

While most of these weapons programs remain in the development or early deployment stage, their presence has already sparked concerns among policymakers and arms control advocates regarding their potential impact on strategic stability. Analysts worry, for example, that the use of hypersonic weapons early in a conventional engagement to subdue an adversary’s critical assets could be interpreted as the prelude to a nuclear first strike, and so prompt the target state to launch its own nuclear munitions if unsure of its attacker’s intentions.

At present, there is no established venue in which officials of China, Russia, and the United States can meet to discuss formal limits on hypersonic weapons. The U.S.-Russia Strategic Stability Dialogue could serve as a possible forum for direct talks between government officials on these topics. While Washington paused the dialogue following Russia’s invasion of Ukraine, the two sides should return to the table as soon as circumstances allow. A U.S.-China strategic dialogue, if and when established, could address similar concerns.

Cyberattack and Nuclear C3

The cyberspace domain—while immensely valuable for a multitude of public, private, and commercial functions—has also proven to be an attractive arena for great-power competition, given the domain’s vulnerability to a wide variety of malicious and aggressive activities. These range from cyberespionage, or the theft of military secrets and technological data, to offensive actions intended to disable an enemy’s command, control, and communications (C3) systems, thereby degrading its ability to wage war successfully. Such operations might also be aimed at an adversary’s nuclear C3 (NC3) systems; in such a scenario, one side or the other—fearing that a nuclear exchange is imminent—could attempt to minimize its exposure to attack by disabling its adversary’s NC3 systems.

Analysts warn that any cyberattack on an adversary’s NC3 systems in the midst of a major crisis or conventional conflict could prove highly destabilizing. Upon detecting interference in its critical command systems, the target state might well conclude that an adversary had launched a pre-emptive nuclear strike against it, and so might launch its own nuclear weapons rather than risk their loss to the other side.

The widespread integration of conventional and nuclear C3 compounds these dangers. For reasons of economy and convenience, the major powers have chosen to rely on the same early-warning and communications links to serve both their nuclear and conventional forces—a phenomenon described by James Acton of the Carnegie Endowment for International Peace as “entanglement.” In the event of a great-power conflict, one side might employ cyberweapons to disable the conventional C3 systems of its adversary in the opening stages of a non-nuclear assault, but its opponent—possibly fearing that its nuclear systems are the intended target—might launch its nuclear weapons prematurely.

The utilization of cyberspace for military purposes poses significant challenges for arms control. Existing means of inspection and verification cannot currently detect cyberweapons, whose very existence is often hard to prove. With the proliferation of cyberweapons creating new and severe threats to strategic stability, policymakers bear responsibility for developing strategies to prevent accidental and unintended escalation. Some of the most effective, stabilizing measures, analysts agree, would be U.S.-Russian and U.S.-Chinese bilateral agreements to abstain from cyberattacks on each other’s NC3 systems.

Automated Battlefield Decision-Making

With the introduction of new hypersonic weapons and other highly capable conventional weapons, the pace of warfare will likely increase and, as a result, exacerbate the pressure on battle commanders to make rapid combat decisions. In response, the militaries of the major powers plan to rely increasingly on AI-enabled battlefield decision-making systems to aid human commanders in processing vast amounts of data on enemy movements and identifying possible combat responses.

Within the U.S. military, the principal mechanism for undertaking the development of automated systems of this sort is the Joint All-Domain Command and Control (JADC2) program. Overseen by the Air Force under its Advanced Battle Management System, JADC2 is envisioned as a constellation of computers working together to collect sensor data from myriad platforms, organize the data into digestible chunks, and provide commanders with a menu of possible combat options. While JADC2 is initially intended for conventional operations, the program will eventually connect to the nation’s NC3 systems.
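The paragraph above describes a pipeline: collect sensor data, fuse it, and surface a menu of options. The hypothetical Python sketch below illustrates that pattern in miniature. JADC2’s actual design is not public, so every name, score, and threshold here is invented.

```python
# Hypothetical sketch of the decision-aid pattern described above: fuse
# multi-sensor reports, distill them, and present a ranked menu of options.
# JADC2's actual design is not public; every name here is invented.
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str          # e.g. "satellite", "radar", "uav"
    track_id: str        # identifier for the object being tracked
    threat_score: float  # 0.0-1.0, produced by an upstream classifier

@dataclass
class CombatOption:
    description: str
    estimated_risk: float  # lower-risk options are presented first

def fuse_reports(reports):
    """Collapse per-sensor reports into one score per track (max fusion)."""
    fused = {}
    for r in reports:
        fused[r.track_id] = max(fused.get(r.track_id, 0.0), r.threat_score)
    return fused

def propose_options(fused, threshold=0.8):
    """Build a ranked menu of options; the choice stays with a human."""
    options = [CombatOption(f"Engage track {tid}", 1.0 - score)
               for tid, score in fused.items() if score >= threshold]
    options.append(CombatOption("Monitor only; task more sensors", 0.10))
    return sorted(options, key=lambda o: o.estimated_risk)

reports = [SensorReport("radar", "T-01", 0.91),
           SensorReport("uav", "T-01", 0.85),
           SensorReport("satellite", "T-02", 0.40)]
for option in propose_options(fuse_reports(reports)):
    print(f"{option.description} (risk {option.estimated_risk:.2f})")
```

Even in this toy form, the risk the chapter describes is visible: if the ranked menu, rather than human judgment, effectively drives the decision, or if an adversary corrupts the upstream scores, an automation failure propagates directly into combat action.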

The increased automation of battlefield decision-making, especially given the likely integration of nuclear and conventional C3 systems, gives rise to numerous concerns. Many of these technologies are still in their infancy and prone to malfunctions that are often unanticipated. Skilled professionals can also fool, or “spoof,” AI-enabled systems, causing unintended and possibly dangerous outcomes. Furthermore, no matter how much is spent on cybersecurity, computer systems will always remain vulnerable to hacking by sophisticated adversaries.

Given these risks, Chinese, Russian, and U.S. policymakers should be leery of accelerating the automation of their C3 systems. Ideally, government officials and technical experts of the three countries should meet—presumably in a format akin to the U.S.-Russia Strategic Stability Dialogue—to consider limitations on the use of any automated decision-making devices with ties to nuclear command systems. Until meetings of this sort become feasible, experts from these countries should meet in neutral venues to identify the dangers inherent in reliance on such systems and explore various measures for their control.

An unmanned Boeing MQ-25 T1 Stingray test aircraft, left, refuels a manned F/A-18 Super Hornet, June 4, 2021, near MidAmerica Airport in Mascoutah, Illinois. (U.S. Navy photo courtesy of Boeing)

A Framework Strategy for Reducing the Escalatory Risks of Emerging Technologies

Military leaders of the major powers aim to exploit the perceived benefits of emerging technologies as rapidly as possible, in the belief that doing so will give them a combat advantage in future great-power conflicts. However, this drive to exploit emerging technologies for military use has accelerated at a much faster pace than efforts to assess the dangers they pose and to establish limits on their use. It is essential, then, to slow the pace of weaponizing these technologies, to carefully weigh the risks in doing so, and to adopt meaningful restraints on their military use.

Given the variety and complexity of the technologies involved, no single overarching treaty or agreement is likely to be able to institute restraints on all of them. Thus, leaders of the relevant countries should focus on adopting a framework strategy aimed at advancing an array of measures which, however specific their intended outcomes, all contribute to the larger goal of preventing unintended escalation and enhancing strategic stability.

In devising and implementing such measures, policymakers can proceed in a step-by-step fashion, from more informal, non-binding measures to increasingly specific, binding agreements. The following proposed action steps are derived from the toolbox developed by arms control advocates over many years of practice and experimentation.

  • Awareness-Building: Efforts to educate policymakers and the general public about the risks posed by the unregulated military use of emerging technologies.
  • Track 2 and Track 1.5 Diplomacy: Discussions among scientists, engineers, and arms control experts from the major powers to identify the risks posed by emerging technologies and possible strategies for their control. “Track 2 diplomacy” of this sort can be expanded at some point to include governmental experts (“Track 1.5 diplomacy”).
  • Unilateral and Joint Initiatives: Steps taken by the major powers on their own or among groups of like-minded states to reduce the risks associated with emerging technologies in the absence of formal arms control agreements to this end.
  • Strategic Stability Talks: Discussions among senior officials of China, Russia, and the United States on the risks to strategic stability posed by the weaponization of certain emerging technologies and on joint measures to diminish these risks. These can be accompanied by confidence-building measures (CBMs), intended to build trust in implementing and verifying formal agreements in this area.
  • Bilateral and Multilateral Arrangements: Once the leaders of the major powers come to appreciate the escalatory risks posed by the weaponization of emerging technologies, it may be possible for them to reach accord on bilateral and multilateral arrangements intended to minimize these risks.

The failure to adopt such measures will allow for the application of cutting-edge technologies to military systems at an ever-increasing tempo, greatly magnifying the risks to world security. A more thorough understanding of the distinctive threats to strategic stability posed by certain destabilizing technologies and the imposition of restraints on their military use would go a long way toward reducing the risks of Armageddon.

Click here to read the full report.
