Wednesday, September 27, 2017

Are We Inviting Danger by Building Automated Weapons of War?


Over 100 CEOs of artificial intelligence and robotics firms recently signed an open letter warning that their work could be repurposed to build lethal autonomous weapons — “killer robots.” They argued that building such weapons would open a “Pandora’s box” that could forever alter war.

Over 30 countries have armed drones or are developing them, and with each successive generation, drones have gained more autonomy. Automation has long been used in weapons to help identify targets and maneuver missiles.

But to date, humans have remained in control of deciding whether to use lethal force. Militaries have only used automated engagements in limited settings to defend against high-speed rockets and missiles. 

Advances in autonomous technology could change that. The same intelligence that allows self-driving cars to avoid pedestrians could allow future weapons to hunt and attack targets on their own.

For the past three years, countries have met through the United Nations to discuss lethal autonomous weapons. Over 60 non-governmental organizations have called for a treaty banning autonomous weapons. 

Yet most countries are hedging their bets. No major military powers have said they plan to build autonomous weapons, but few have taken them off the table.

There’s a certain irony in the CEOs of robotics and AI companies warning of the dangers of the very same technologies they themselves are building. They implore countries to “double their efforts” in international negotiations and warn that “we do not have long to act.” But if the situation is truly dire, couldn’t these companies slow their research to buy diplomats more time?

In reality, even if all of these companies stopped research, the field of AI would continue marching forward. The intelligence behind autonomous robots isn’t like stealth technology, which was created in secret defense labs and tightly controlled by the military. 

Autonomous technology is everywhere. Hobbyist drones that retail for a few hundred dollars can take off, land, follow moving objects and avoid obstacles all on their own. Elementary school students build robots in competitions.

Even the Islamic State is getting in on the game, strapping bombs to small drones. There is no stopping AI. Robotics companies can’t easily band together to stop progress because it only takes one company to break the agreement and advance the technology. Besides, to ask companies to stop research would be to ask them to forgo innovations that could generate profits and save lives.

These same dynamics make constraining autonomous weapons internationally very difficult. Asking countries to sign a treaty banning a weapon that doesn’t yet exist means asking them to forgo a potentially useful tool to defend against threats and save lives. 

Moreover, the same problem of cheaters applies in the international arena, but the stakes are higher. Instead of lost profits, a nation might lose a war. History suggests that even when the international community widely condemns a weapon as inhumane — like chemical weapons — some despots will use them anyway. 

Treaties alone won’t prevent rogue regimes and terrorists from building autonomous weapons. If autonomous weapons led to a decisive advantage in war, a treaty that disarmed only those who care for the rule of law would be the worst of all possible worlds.

The letter’s signers likely understand this, which may be why the letter doesn’t call for a ban, a notable departure from a similar letter two years ago. Instead, the signatories ask countries at the United Nations to “find a way to protect us from all these dangers.” 

Banning or regulating emerging weapons technologies is easier said than done, though. Nations have tried to ban crossbows, firearms, surprise attacks by submarines, aerial attacks on cities and, in World War I, poison gas. All have failed.

And yet: Nations held back from using poison gas on the battlefields of World War II. The Cold War saw treaties banning chemical and biological weapons, using the environment as a weapon and placing nuclear weapons in space or on the seabed. 

The United States and the Soviet Union pulled back from neutron bombs and anti-satellite weapons even without formal treaties. Nuclear weapons have proliferated, but not as widely as many predicted. In more recent years, nations have passed bans on blinding lasers, landmines and cluster munitions.

Weapons are easier to ban when few countries have access to them, when they are widely seen as horrifying and when they provide little military benefit. It is extremely difficult to ban weapons that are seen as giving a decisive advantage, as nuclear weapons are.

A major factor in what will happen with autonomous weapons, therefore, is how nations come to see the benefits and risks they pose. Autonomous weapons present a classic security dilemma: all countries may be better off without them, but mutual restraint requires cooperation.
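To see why mutual restraint is hard, the dilemma can be sketched as a simple two-player game in the style of the prisoner's dilemma. This is a minimal illustration, not anything from the article: the payoff numbers and the "restrain"/"build" labels are assumptions chosen only to show why building can dominate restraining even though mutual restraint is the better collective outcome.

```python
# A minimal sketch of the security dilemma as a two-player game.
# The payoff values are illustrative assumptions, not data from the article.

PAYOFFS = {
    # (my choice, rival's choice) -> my payoff
    ("restrain", "restrain"): 3,  # mutual restraint: best collective outcome
    ("restrain", "build"):    0,  # unilateral restraint: worst individual outcome
    ("build",    "restrain"): 4,  # unilateral advantage: best individual outcome
    ("build",    "build"):    1,  # arms race: worse for both than mutual restraint
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes my payoff, given the rival's choice."""
    return max(("restrain", "build"),
               key=lambda mine: PAYOFFS[(mine, rival_choice)])

for rival in ("restrain", "build"):
    print(f"If the rival chooses {rival!r}, my best response is {best_response(rival)!r}")
# Both lines print 'build': with these payoffs, building dominates,
# even though (restrain, restrain) pays each side more than (build, build).
```

Because "build" is each player's best response no matter what the rival does, unilateral restraint is unstable. That is the article's point: restraint holds only through cooperation, and a single defector can unravel it.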

Last year, nations agreed to create a more formal Group of Governmental Experts to study the issue. The group will convene in November and, once again, nations will attempt to halt a potentially dangerous technology before it is used in war.

Paul Scharre is a senior fellow at the Center for a New American Security and author of the forthcoming book Army of None: Autonomous Weapons and the Future of War.
_____________

International Organizations and Regulations:

  • The United Nations Office for Disarmament Affairs (UNODA): https://www.unoda.org/disarmament-topics/lethal-autonomous-weapons-systems - Provides information on ongoing discussions and potential regulations for Lethal Autonomous Weapons Systems (LAWS).
  • Campaign to Stop Killer Robots: https://www.stopkillerrobots.org/ - A global coalition advocating for a ban on fully autonomous weapons.

Ethical and Legal Concerns:

  • The Future of Life Institute: https://futureoflife.org/ - A research institute focusing on existential risks posed by future technologies, including autonomous weapons.
  • The International Committee of the Red Cross (ICRC): https://www.icrc.org/ - Explores ethical and legal concerns surrounding autonomous weapons and their potential violation of international humanitarian law.

Technical Considerations and Risks:

  • Carnegie Endowment for International Peace: https://carnegieendowment.org/ - Offers research and analysis on various international security issues, including the proliferation of autonomous weapons and potential technical risks.
  • The Bulletin of the Atomic Scientists: https://thebulletin.org/ - A science and security magazine publishing articles on nuclear weapons and other existential threats, including autonomous weapons.

Further Reading:

  • Book: "Autonomy at War: Law, Ethics, and Disruption" by Neta C. Crawford (ISBN 9780190079363) - Examines the legal and ethical implications of autonomous weapons systems.

  • Article: "Why We Need to Ban Killer Robots Now" (The Guardian, theguardian.com) - An opinion piece highlighting the dangers of autonomous weapons.