FRENCH REPORT. UNITED NATIONS. LETHAL AUTONOMOUS WEAPONS SYSTEMS

The Conference on Disarmament today heard a statement by its Acting Secretary-General, Michael Møller, as well as presentations…by France on the work of the informal expert meeting on lethal autonomous weapons systems…

FRANCE, presenting information on the informal expert meeting on lethal autonomous weapons systems within the context of the Convention on Conventional Weapons, said there were 30 statements in the general debate, 24 statements in the closing debate, and many statements made during the technical sessions. This had demonstrated the interest in the emerging topic of lethal autonomous weapons systems. Many delegations were only just starting to work on this topic and the meetings had allowed an in-depth discussion about it. The atmosphere was very constructive, showing the desire of all to learn more about this topic. There were notable debates on the concepts of autonomy, human control and responsibility. More in-depth consideration of these issues was needed. There was also a debate on international humanitarian law. The importance of the Convention on Conventional Weapons was underlined. This was the first meeting that allowed participants to exchange views on the technical, legal, ethical and military aspects of lethal autonomous weapons systems. The report reflected the discussions in an objective manner and did not contain any recommendations.

 

UNITED NATIONS. LETHAL AUTONOMOUS WEAPONS SYSTEMS. STATEMENTS COUNTRIES

Source: UNOG UN
DC14/018E


At the 2013 CCW Meeting of High Contracting Parties, a new mandate on lethal autonomous weapons systems (LAWS) was agreed on. The mandate states:

“…Chairperson will convene in 2014 a four-day informal Meeting of Experts, from 13 to 16 May 2014, to discuss the questions related to emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention. He will, under his own responsibility, submit a report to the 2014 Meeting of the High Contracting Parties to the Convention, objectively reflecting the discussions held.”

The Meeting of Experts was chaired by Ambassador Jean-Hugues Simon-Michel of France.

 

Austria

“…It is crucial, in our view, that any use of a weapon in armed conflict complies with international humanitarian law. Among roboticists and lawyers alike, there is serious doubt that autonomous weapons can ever be programmed in a way to guarantee this compliance. One further consideration: While in the case of a war crime perpetrated by a human actor legal responsibility can be, at least in principle, established, it is fundamentally unclear how to resolve the issue once the autonomous decision of a machine is at the root of the crime…”


Brazil

Statement by H.E. Ambassador Pedro Motta Pinto Coelho, Permanent Representative of Brazil to the Conference on Disarmament. CCW Informal Expert Meeting on Lethal Autonomous Systems, 13 May 2014

“…The increasing amount of money spent by governments and the private sector on research into autonomous systems is an unequivocal indication of a technological trend that cannot be ignored. Many military experts support the idea of using this new technology in order to maximize compliance with IHL, reduce the number of human casualties (combatants and non-combatants) and decrease their military budgets. Other experts maintain that the use of lethal autonomous systems would imply a “dehumanization of warfare”. They point out that ethical and moral standards require meaningful human supervision of decisions to take life. They also emphasize that key issues must be urgently addressed, such as the level of automation we aim to achieve and which functions of these lethal systems should not be allowed to operate autonomously...”


Croatia

Opening Statement of the Republic of Croatia. CCW Informal Meeting of Experts on Lethal Autonomous Weapons Systems. Geneva 13 May 2014

“…CCW was always seen as a forum for open-minded discussions and exchange of views on disarmament issues at the multilateral level. Since we all agree that human suffering resulting from an armed conflict cannot and must not leave the international community indifferent, we would like to call upon all High Contracting Parties to use this unique opportunity of having a number of globally renowned LAWS experts among us and to give their contribution in this week’s discussions…”


Czech Republic

“…If it has been quite difficult to keep a balance between humanitarian concerns and security requirements in the past, it will be an even bigger challenge to strike it within the context of sophisticated autonomous weapons of the future. This is only one of the reasons why we think it is important to start work on the needs of protecting civilians and combatants from the possible effects of LAWS well in advance, before they are developed. The Czech Republic, like many other States parties to the CCW, does not have a firm, coordinated national position on, or approach towards, many aspects of LAWS. Views that might be expressed in two national presentations, provided by our experts from the Czech Defense University, will not represent a national position on any aspect of research, development or production or of future use of LAWS. Our hope, however, is that we could start to build one on the results of this meeting…”


Ecuador

“…Ecuador considers it unacceptable and inadmissible that fundamental decisions over the life or death of human beings be assigned to lethal autonomous weapons. States must take action to prevent the creation and development of, and to halt investment in, fully autonomous lethal weapons, through national norms and laws that prohibit them and an international protocol prohibiting their creation, development and use…”


Egypt

Statement of the Arab Republic of Egypt at the Meeting of Experts on Lethal Autonomous Weapons, 13-16 May 2014. By: Ambassador Dr. Walid M. Abdelnasser, Permanent Representative of the Arab Republic of Egypt to the United Nations and other International Organizations in Geneva. Geneva, May 13th, 2014

“…We hope that this informal meeting of experts on this issue works as an eye-opener on a very important and challenging development in the course of weaponry research and development and the relevant considerations in this regard, particularly with reference to the issue of the possible ramifications on the value of human lives, the calculation of the cost of war, as well as the possibility of the acquisition of this weapon by terrorist and organized crime networks. This should lead to a prohibition on acquisition, research and development, testing, deployment, transfer and use.
Until such a result is achieved, we support calls to impose a moratorium on the development of such lethal technology in order to allow serious and meaningful international engagement with this issue. As military robotics gains more and more autonomy, the ethical questions involved will become even more complex. It might be too late to work on an appropriate response after such robotics have been fully developed…”


France

Ministère des Affaires Étrangères. Convention on Certain Conventional Weapons. Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). Geneva, 13-16 May 2014. General Statement.

“…Even if we cannot reach conclusions on all the themes that will be discussed, we must at least have the ambition of seeking a common understanding of what we mean by a “lethal autonomous weapon system”. In this regard, two elements seem essential:

- we are talking about emerging technologies, still under development, and therefore not yet used in existing weapons systems;

- we are talking about autonomous systems, not automated or remotely operated systems; they therefore imply an absence of human supervision. The French delegation will return to this question in more detail in its later statements.”


Germany

CCW EXPERT MEETING LETHAL AUTONOMOUS WEAPON SYSTEMS Geneva, 13 – 16 May 2014 General Statement by Germany

“…We firmly believe that there should be a common understanding in the international community that it is indispensable to maintain meaningful human control over the decision to kill another human being. We cannot take humans out of the loop. We do believe that the principle of human control is already implicitly inherent in international humanitarian law which, as said before, remains our binding guideline also with regard to new weapon systems. And we cannot see any reason why technological developments should all of a sudden suspend the validity of the principle of human control. Therefore, we suggest that in the discussion about the definition and legal evaluation of lethal autonomous weapon systems we should also talk about what we as an international community understand as meaningful human control and declare it an indispensable principle of international humanitarian law…”


Holy See

Statement by H.E. Archbishop Silvano M. Tomasi, Permanent Representative of the Holy See to the United Nations and Other International Organizations in Geneva, at the Meeting of Experts on Lethal Autonomous Weapons Systems of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, 13 May 2014

“…For the Holy See, the fundamental question is the following: can machines, well-programmed with highly sophisticated algorithms to make decisions on the battlefield which seek to comply with IHL, truly replace humans in decisions over life and death?

The answer is no. Humans must not be taken out of the loop over decisions regarding life and death for other human beings. Meaningful human intervention over such decisions must always be present…”


India

Permanent Mission of India to the Conference on Disarmament. Statement by Ambassador D.B. Venkatesh Varma. Permanent Representative of India to the Conference on Disarmament at the CCW Experts Meeting on Lethal Autonomous Weapons Systems. Geneva, May 13, 2014

“…We see current approaches as falling into two categories. The first is the view that a fresh look is needed at whether lethal autonomous weapon systems meet the criteria of international law and international humanitarian law, especially with regard to the principles of distinction, proportionality and precaution, and suggesting a preemptive ban on the research, production and use of LAWS, or at least a moratorium until such time as there is clarity on the overall implications. The other view is that there is a spectrum of autonomy inbuilt into existing weapons systems and that a prohibition on LAWS is either premature, unnecessary or unenforceable”


Ireland

“I note that the press statement they issued yesterday focussed on the importance of ‘human control over the use of force’ and this seems to me to be a very sensible focus for these consultations and for whatever subsequent action we decide to take to build on these consultations. The definition of control, of course, is important in itself, in the context of ensuring that control is effective and not merely nominal”


Italy

CCW Meeting of Experts on Lethal Autonomous Weapons Systems Geneva (13-16 May). Statement by Ambassador Vinicio Mati. Permanent Representative of Italy to the Conference on Disarmament

“…We are convinced that the CCW has the merit of addressing not only the humanitarian concerns posed by existing weapons but also of preventing the development of new types of weapons that would be unacceptable under basic International Humanitarian Law principles. Therefore, we deem it very important that, within this framework, we will be able to address new potential threats appearing on the horizon…”


Japan

Delegation of Japan to the Conference on Disarmament. Statement by H.E. Ambassador Toshio SANO, Permanent Representative of Japan to the Conference on Disarmament. Experts Meeting on Lethal Autonomous Weapons Systems to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects. 13 May 2014, Geneva

“…as we deal with the emerging technology of LAWS, we are facing a challenge of defining LAWS. Although an agreement on the definition at this informal meeting is not likely, we believe it is imperative to develop a common understanding about what we perceive as LAWS in order to advance discussions.

In this regard, I would like to point out that if we consider LAWS as “fully” lethal autonomous weapons systems which, once activated, can effectively select and engage a target without human intervention, we believe, at this stage, it is questionable whether such autonomous weapons could comply with international humanitarian law, and this, therefore, should be highlighted in our discussion. Also, while we may continue researching and developing non-lethal autonomous technology for defense purposes, we are not convinced of the need to develop “fully” lethal autonomous weapon systems which are completely outside the control of human intervention”


Mexico

Mariana SALAZAR-ALBORNOZ

“…We therefore agree on the need to observe the development of new weapons technologies in the context of respect for the human right to life, and we express our concern at the emergence of lethal autonomous weapons systems that would have the power to decide arbitrarily over the life or death of human beings. We reaffirm that States have the obligation to protect and defend the human right to life, and that this obligation cannot be delegated under any circumstances.

Furthermore, we recognize the applicability, in this matter, of the limits and preventive obligations provided for in international humanitarian law…”


New Zealand

Informal Meeting of Experts on Lethal Autonomous Weapons Systems. Statement by Joseph Ballard. Deputy Permanent Representative to the Conference on Disarmament 13 May 2014

“…Developments in the field of artificial intelligence hold considerable potential for improving our lives. But, it is clear that, very importantly, they also offer the prospect of affecting the conduct of armed conflict, and in doing so can present a number of benefits and risks. New Zealand, like many other High Contracting Parties, is beginning to consider the implications of these developments. Much will depend on how, collectively, we frame our discussions, on the definitions we use, and on the appropriate engagement of all relevant actors…”


Norway

CCW Meeting of Experts on Lethal Autonomous Weapons Systems 13-16 May 2014

“…Before launching an attack, a military commander is also required to make a proportionality assessment, weighing the incidental harm which the attack may be expected to cause against the military advantage anticipated. Launching an attack which may be expected to cause excessive incidental loss of civilian life or damage to civilian objects is prohibited and must be halted or cancelled. An important question is therefore whether a fully autonomous weapon could be programmed to make such a complicated analysis and judgement without human intervention…”


Republic of Korea

“…I would also like to emphasize that the Republic of Korea continues to pay due attention to ethical considerations surrounding the use of robot systems for civil and military purposes. Korea is working to enact an ethics charter on the commercial service of robotic technology in accordance with the National Plan on Intelligent Robots. I expect the charter to contain provisions on ethical values and a code of conduct regarding the development, manufacture and use of robots…”


South Africa

CCW Statement by South Africa at the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Switzerland, 2014.

“…The development of LAWS poses serious questions and there are many issues on which clarity is required. This includes definitional certainty as to the notion of autonomous and semi-autonomous weapons systems. While fully autonomous weapon systems are not being used currently, the future use of such weapon systems raises a whole range of issues to be considered. Of primary concern to my delegation are the humanitarian implications of their use and related ethical considerations. One of the key questions in this regard that should be of concern to all of us is whether these new technologies of warfare would be compliant with the rules of International Humanitarian Law, including those of distinction, proportionality and military necessity, as well as their potential impact on human rights…”


Spain

Statement by the Spanish Delegation, Meeting of Experts on Lethal Autonomous Weapons Systems, Geneva, 13 May 2014

“…We therefore believe that any future regulation must, unavoidably, pass through a prior phase of reflection and definition, which, in the case of emerging technologies, entails a particular difficulty. For the same reason, we would view as premature any proposal for a moratorium before first defining the characteristics of this type of system, which raise no small number of questions…”


Sweden

Remarks by Sweden at the Expert Meeting on LAWS in the CCW on 13-05-2014 (General Debate)

“…while it is true that many systems with various degrees of automation are being used or developed by states, it is not clear to us that this entails a move toward systems that would give full combat autonomy to machines…

A difficult issue is the threshold at which a weapon should be considered ‘autonomous’. Machine automation/autonomy exists on a continuum. An autonomous weapon implies one that is fully outside the control of a human. We very much doubt that this is a desirable development for military forces. As a starting point, Sweden believes that when it comes to decisions on the use of force against persons, humans should never be “out of the loop…”


Switzerland

“…First, a better understanding of the technological developments linked to these new systems is needed… Concepts such as autonomy, and the extent of the spectrum between automation and autonomy, must be clarified. It would be useful to work from concrete examples in order to determine the desirable, lawful and acceptable uses of systems with autonomous functions, and to identify precisely which aspects of autonomous weapons raise concerns. A central question must be asked, namely the capacity of machines to fully understand the environment in which they operate, to assess risks, or to carry out the qualitative assessments required by International Humanitarian Law (IHL). Other equally critical questions will have to be addressed, such as the civilian use of these technologies and their potential for dual use.

Secondly, we must take into account the ethical dimension of the militarization of increasingly autonomous technologies. It seems clear that the potential development and use of lethal autonomous weapons systems, capable of selecting and attacking targets without effective (or “meaningful”) human control, raise significant ethical concerns.

Thirdly, the question must be approached at the military and operational level. We must identify the source of the military interest in these technologies, the scale of the advantages expected from them, and the risks that flow from them…”


United Kingdom

“…The subject of autonomous weapons systems is a complex one, and it is very useful for all concerned to have the opportunity here to deepen understanding of the key themes. The agenda which France has prepared certainly includes consideration of the most relevant issues…”


United States

“…First, this important discussion is just beginning and we believe considerable work still needs to be done to establish a common baseline of understanding among states. Too often, the phrase “lethal autonomous weapons systems” appears still to evoke the idea of a humanoid machine independently selecting targets for engagement and operating in a dynamic and complex urban environment.

Second, it follows from the fact that we are indeed at such an early stage of our discussions that the United States believes it is premature to determine where these discussions might or should lead. In our view, it is enough for now for us collectively to acknowledge the value in discussing lethal autonomous weapons systems in the CCW, a forum focused on international humanitarian law, which is the relevant framework for this discussion.

Third, we must bear in mind the complex and multifaceted nature of this issue…”


United Nations. Convention on Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)

The CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) will take place from 13 to 16 May 2014 at the United Nations in Geneva.

At the 2013 CCW Meeting of High Contracting Parties, a new mandate on lethal autonomous weapons systems (LAWS) was agreed on. The mandate states:

“…Chairperson will convene in 2014 a four-day informal Meeting of Experts, from 13 to 16 May 2014, to discuss the questions related to emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention. He will, under his own responsibility, submit a report to the 2014 Meeting of the High Contracting Parties to the Convention, objectively reflecting the discussions held.”

The Meeting of Experts will be chaired by Ambassador Jean-Hugues Simon-Michel of France. The formal letter of invitation to the Meeting of Experts on LAWS is available in both English and French.

The Provisional Agenda for the Meeting of Experts is now available.

Guidance on registering for the Meeting of Experts is attached here.

Meeting of Experts – Debate on the pros and cons of lethal autonomous weapons systems, Tuesday 13 May, afternoon session, Conference Room XIX

The Meeting of Experts will include a debate between two leading robotics experts – Professor Ronald Arkin and Professor Noel Sharkey.
In preparation for this debate, the following articles by Professor Arkin and Sharkey are available:
Arkin, Ronald – Lethal Autonomous Systems and the Plight of the Non-combatant
Sharkey, Noel – The evitability of autonomous robot warfare
Sharkey, Noel – Towards a principle for the human supervisory control of robot weapons

 

The Campaign to Stop Killer Robots
Potentially, LAWS could identify and attack a target without human intervention. This issue was first brought to the international community’s attention by Human Rights Watch in its report titled “Losing Humanity: The Case Against Killer Robots”. The Campaign to Stop Killer Robots, as part of its advocacy on LAWS, produced this short film explaining the background to LAWS and the work being undertaken within the United Nations and civil society. The Campaign to Stop Killer Robots will be attending the Meeting of Experts. Attached here is a list of their experts. Also during the Meeting of Experts the Campaign will be hosting four side events.

 

Background information on LAWS
LAWS is a very new issue. Below are articles on LAWS that may be useful to delegations:

 

American Society of International Law – Panel on Autonomous Weaponry and Armed Conflict


Marchant, Gary; Allenby, Braden; Arkin, Ronald; Barrett, Edward; Borenstein, Jason; Gaudet, Lyn; Kittrie, Orde; Lin, Patrick; Lucas, George; O’Meara, Richard; and Silberman, Jared – International Governance of Autonomous Military Robots (from The Columbia Science and Technology Law Review)

 

Marsh, Nicholas (2014) Defining the Scope of Autonomy, PRIO Policy Brief, 2. Oslo: PRIO.

Lin, Patrick; Bekey, George; and Abney, Keith – Autonomous Military Robotics: Risk, Ethics, and Design


Canada. Campaign to Stop Killer Robots

 

Co-Founder and Vice-Chair of the International Committee for Robot Arms Control Peter Asaro, left, and Canada Research Chair for Ethics, Law and Technology Ian Kerr listen to a question during a news conference addressing so-called “killer robots” on Tuesday in Ottawa. Several international groups have urged Canada to take the lead in an international ban on autonomous weapons.

OTTAWA—Canada is being urged to lead a new international effort to ban so-called “killer robots” — the new generation of deadly high-tech equipment that can select and fire on targets without human help.

The Campaign to Stop Killer Robots is pushing for a new international treaty to ban such weapons from the battlefields of the future.

Paul Hannon, head of Mines Action Canada, said the development of such autonomous weapons — primitive versions of Hollywood’s Terminator — signals a profound change in the very nature of warfare.

Hannon’s organization is one of nine international groups calling on Canada to take the lead in the banning of the weapons, as it did in the campaign against landmines in the 1990s.

“It was not that long ago that the world considered the landmine to be the perfect soldier. It is now banned because of the humanitarian harm it has created,” Hannon said Tuesday on Parliament Hill.

“Canada led the movement to ban that weapon; it is one of the most successful international treaties of our era.”

It would be far better to squelch the development of the weapons before they are actually built and deployed, Hannon said, noting that once the weaponization genie is out of the bottle, it is much harder to put it back.

Hannon also said his group isn’t opposed to the use of robotics by the military for non-combat uses such as transportation.

The coalition has no evidence to indicate any Canadian companies are working on such weaponry, and the Department of National Defence has provided assurances it hasn’t contracted any research on the subject.

“That doesn’t mean there aren’t, because there’s not a lot of transparency on this,” Hannon said.

Six countries are known to be working on the technology: the United States, Britain, Israel, China, Russia and South Korea.

Autonomous weapons don’t actually exist yet, but with the rapid advancements being made in robotics, there are troubling signs, said Peter Asaro, co-founder of the New York-based International Committee for Robot Arms Control.

He cited the 2010 “flash crash” on the New York Stock Exchange that was ignited by a frenzy of computerized trading that drove down the stock prices of major companies.

At least two companies have created prototypes of unmanned combat aircraft that are deemed to be autonomous. Another company has a partially autonomous tracking and machine-gun system on the border between North and South Korea.

Mary Wareham, a Washington-based arms expert with Human Rights Watch, said one of the aircraft makers, BAE Systems, sponsored a recent two-day symposium on the weapons in London.

“I think they realize that if they don’t show interest and at least agree to be a bit more transparent about the systems that are being developed, then that will increase suspicion, so it’s in their interest to be transparent,” Wareham said.

Ian Kerr, a law and ethics professor at the University of Ottawa, said removing humans from the decision to kill people poses a serious moral and philosophical problem.

Weapons of the future

Critics are calling for a pre-emptive strike on so-called “killer robots” — forthcoming autonomous weapons systems that will be able to find, select and fire on a target without the intervention of human beings.

Here are three weapons that currently exist that are considered precursors to autonomous systems:

The Samsung SGR-A1 sentry gun. Currently deployed along the demilitarized zone between North and South Korea, the SGR-A1 is considered partially autonomous and is capable of tracking multiple moving targets.

The Taranis unmanned combat air vehicle by BAE Systems. Currently only a demonstration model, the Taranis can fly intercontinental missions and is considered autonomous.

The X-47B unmanned air combat vehicle by Northrop Grumman. Currently only a demonstration model, the X-47B can also fly intercontinental missions and is deemed autonomous.

Countries known to be developing and testing autonomous weapons: the United States, Britain, Israel, China, Russia, South Korea.

Some 272 scientists in 37 countries are calling for a ban on the development and deployment of fully autonomous weapons.

Groups and agencies allied against autonomous weapons: Human Rights Watch, Article 36, Association for Aid and Relief Japan, International Committee for Robot Arms Control, Mines Action Canada, Nobel Women’s Initiative, PAX (formerly known as IKV Pax Christi), Pugwash Conferences on Science & World Affairs, Women’s International League for Peace and Freedom

Source: Campaign to Stop Killer Robots

Source: The Star

 

ICRA: Compliance Measures for an Autonomous Weapons Convention

ICRAC (International Committee for Robot Arms Control) launched its new series of working papers on May 31. In ICRAC Working Paper #2 (#1 is to follow in the near future), Mark Gubrud and Juergen Altmann present “Compliance Measures for an Autonomous Weapons Convention”, inter alia containing a first conceptual sketch of how to implement technical verification measures to ensure human control and…

ICRAC is a Non Governmental Organisation (NGO). We are an international committee of experts in robotics technology, robot ethics, international relations, international security, arms control, international humanitarian law, human rights law, and public campaigns, concerned about the pressing dangers that military robots pose to peace and international security and to civilians in war. ICRAC members invite you to join us in calling upon the international community to urgently commence discussions about an arms control regime to reduce the threat to humanity posed by these systems. ICRAC was founded, and its founding Mission Statement adopted, in September 2009.  ICRAC’s first international Expert Workshop was held in Berlin in October 2010, at which the Berlin Statement was adopted by a majority vote of the participants.

Compliance Measures for an Autonomous Weapons Convention

Agreements to limit or prohibit certain types of arms – either in the context of arms control or of international humanitarian law – always raise the concern that a party that violates the terms may gain an advantage, in armed conflict, over one that does not. Therefore many such agreements include measures for promoting, implementing and verifying compliance.

The types and extent of compliance measures may depend on many factors, including the military significance of the controlled weapons or actions, the difficulty of distinguishing systems and activities that are prohibited from those that are allowed, preexisting norms and levels of transparency, and the costs and acceptability of various measures. In the history of international arms limitations, the compliance measures agreed upon have ranged from leaving each state to monitor its own and others’ compliance independently, to establishing international organizations with sophisticated technical inspection and monitoring systems.

Several arms control and international humanitarian law agreements and obligations lack any compliance measures, yet are regularly respected by states. Examples include the bans on “dumdum” bullets, x-ray invisible fragments, and blinding lasers, as well as many other rules and principles of international humanitarian law, embodied in the Geneva Conventions, their Additional Protocols, and other documents, which govern both permissible weapons and conduct in war. Some of these have gained the status of customary international law, and hence are incumbent even upon states that have not formally acceded to them; rules have been established in customary IHL for promoting compliance and prosecuting war crimes.

Other agreements, such as those banning anti-personnel landmines and cluster munitions, set forth their own provisions for inquiry and investigation of suspected or alleged noncompliance. In addition, these agreements require state parties to enact their own national implementing measures which set penalties for banned activities, to report the numbers, types and status of banned weapons they are in the process of eliminating, and to participate in consultations and review conferences. These and similar measures set standards of implementation, promote transparency and build confidence, and make noncompliance more difficult to conceal. Non-governmental organizations (NGOs) can also help; in particular, the Landmine and Cluster Munitions Monitor (LCMM) plays a strong role as the de facto independent and respected verification mechanism of the treaties.

A higher level of verification is provided by official monitoring of declared facilities and weapon systems to ensure that their characteristics and uses fall within prescribed limits. Such measures, for a multilateral treaty, are typically implemented by a treaty implementing organization (TIO). Technical measures include tamper-proof monitoring and tagging.

Considerations for Autonomous Weapons

The past decade has seen the advent, and rapid growth in the development and use, especially by the United States, of weaponized “drones” and, more generally, of armed air, land and water vehicles, large and small, with no on-board crew. A complete prohibition of all such uninhabited armed vehicles would be straightforward to verify through on-site inspections of military sites and other forms of monitoring. Most such vehicles would lack any accommodation for a human crew and so would be easily distinguished from piloted and crewed vehicles.

A treaty that prohibits autonomous fire decision but allows remotely controlled and “semi-autonomous” weapons presents a more complex set of challenges. If a “semi-autonomous weapon system” may have capabilities to autonomously acquire, track, identify, group and prioritize targets, and to control their engagement once a “go” signal is given, conversion to full lethal autonomy could be as simple as throwing a (software) switch. Given continued trends in technology, the addition of such capabilities to remotely controlled armed vehicles already equipped with sophisticated sensors and general-purpose computers might also reduce to a matter of installing new software. Given the potentially high military importance of some kinds of fully autonomous weapons, especially those designed to attack major weapon systems (perhaps in swarms), there would be a significant risk of fully autonomous options being secretly prepared for systems officially declared to be under human control.

However, militarily potent fully autonomous weapons systems will likely require extensive development and testing while being operated under full autonomous control (though perhaps under human supervision). It would be difficult to conceal the large-scale activities that would be involved in such programs, especially if they are made clear violations of accepted norms and of a binding treaty.

By starting with a declaratory undertaking to forgo the development, testing, production and use of fully autonomous weapons, the international community would establish a normative goal and buy time to avoid a runaway arms race. As our understanding of the forms and capabilities of possible autonomous weapons deepens, more detailed limits may be established and clarified, with particular attention to blocking the development and deployment of those systems which pose the greatest threats. Provisions for such further clarifications, and a process for making them, should be incorporated in the treaty.

Since verifying the non-existence of an autonomous option in software is virtually impossible, and inspection of software would be deemed far too intrusive, a tamper-proof system will be needed that can verify, after the fact, that an attack in question was under the direct control of a human being (“in the loop,” not merely “on the loop”). This could be achieved by keeping records of each engagement and making the records of specific engagements available to a Treaty Implementing Organization, on request, when sufficient evidence exists to support suspicions of illegal autonomous operation.

Certain strictly defensive systems, where human safety is at stake and where human reactions are too slow for an effective response, may be exempted from the prohibition, provided they are operated under human supervision. Cases which meet these criteria may include missile and artillery interception systems which defend human-inhabited vehicles or locations. A strict criterion of necessity should be applied; in cases where human reaction is possible, the system should delay engagement to allow a human decision until imperative safety reasons compel an automatic response. In no case should autonomous engagement of human targets be permitted. Such allowances will complicate the terms of an agreement, but if they are narrowly restricted and clearly defined they do not pose particularly difficult challenges for verification.

Specific Proposals

Given the challenges in drawing a clear line across a complicated space of possibilities, and of holding that line when it is easily crossed and there are potential military advantages from doing so, prohibition of autonomous weapons requires a strong set of compliance measures.

Perhaps the most fundamental is global recognition of the dangers of an open-ended robot arms race, and, responding to this, state commitment to forgoing autonomous weapons, and to establishing and sustaining a regime of preventive arms control. Entwined with this is the establishment of an unequivocal, universal norm demanding a human decision for each single use of violent force, and the implementation of measures to verify human control and to enforce accountability in each instance.

As a philosophical and legal foundation, the principles that the use of violent force must always be under human control, that decision in the use of force is a human responsibility, and that it is a human right not to be subjected to violent force or coercion on the decision of a machine, should be asserted as primary, and added to the canons of just war theory, ethics and international humanitarian law, especially as taught to military officers and personnel. Together with specific legal terms of prohibition and its implementation, these can be embodied in an Autonomous Weapons Convention (AWC).

The central obligations of state parties to an AWC will be: not to develop, test, produce, stockpile, deploy, transfer, broker transfer, or use weapon systems capable of autonomous target selection and engagement; not to permit autonomous target selection and weapons engagement by any machines under its jurisdiction; and to ensure that for each use of force against any target by means of any robotic weapon under its jurisdiction or control (whether lethal or nonlethal), the selection of the target, and decision to engage, are made by a human being who is responsible and accountable for that decision.

National implementing legislation should prohibit and impose penalties for any activities contrary to these obligations, and make it the responsibility of soldiers and citizens to refuse participation in, and to report, violations. State parties should be required to declare any preexisting weapon systems that will be destroyed and programs that will be terminated when the treaty comes into force. There should be provisions for consultations and procedures for requesting consultations in case compliance issues arise. A treaty implementing organization (TIO) should be established to facilitate consultations, implement technical safeguards, and conduct inquiries and investigations when so mandated. It should also be charged with developing a body of technical expertise on autonomous weapons and verification of their non-use. An NGO body like the LCMM should also independently monitor compliance and address gaps in national and TIO monitoring and reporting.

ICRAC scientists are developing proposals for technical safeguards which could verify that a responsible human operator has selected each target and initiated each engagement of a weapon system, under the authority of a responsible commander (which might be the same person), based on human, not machine judgment. Some initial ideas are presented below.

A compliance model based on transparency and confidence-building measures, inspections, technical safeguards, and forensic investigation of suspicious incidents – together with verification of human control and enforcement of accountability in the use of violent force, particularly by means of remotely operated weapons and uninhabited vehicles – is sufficient for effective verification of a ban on fully autonomous weapons designed to engage personnel and nonstrategic military targets.

For issues of strategic concern, stronger and more specific measures may need to be developed, nationally and through the TIO, and could be added to the treaty regime as protocols or amendments. National technical means of verification will also be important resources.

Definitions

Careful and explicit definitions will need to be given for each of the terms used; for example, “autonomous” is generally understood, in this context, to mean functioning independently of human action, though possibly under human supervision and with the possibility of human intervention. Here a distinction must be made with the word “automatic.” The general sense is that “autonomous” implies a higher level of complexity in a system’s ability to collect and process relevant information and in the relationship between that information and behavior; in other words, a higher level of (artificial) intelligence. It is possible to give a technical definition of “autonomy” in this sense which permits us to distinguish “autonomous” from “automatic” quantitatively, on the criterion of a measure of complexity.

As an alternative, it may be sufficient to define an “autonomous weapon” (AW) as any system that acts independently of human action in “engagement-related functions” such as the acquisition, tracking, identification, grouping, selection, prioritization and engagement of targets. Each step in this so-called “kill chain” or “loop” involves functions which the weapon system might fulfill autonomously. If any of these functions are autonomous, the weapon system may be classified as an AW, and if all of them are autonomous, the system is a fully autonomous weapon (FAW).
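
The definition above can be sketched as a simple classification rule. This is an illustrative sketch only: the function names follow the kill-chain list in the text, while the data model (a weapon system represented by the set of functions it performs autonomously) is an assumption made for illustration.

```python
# Hypothetical sketch of the proposed AW/FAW definition: a system is an AW
# if ANY kill-chain function is autonomous, and a FAW if ALL of them are.
KILL_CHAIN = [
    "acquisition", "tracking", "identification",
    "grouping", "selection", "prioritization", "engagement",
]

def classify(autonomous_functions):
    """Return 'FAW', 'AW', or 'not AW' given the system's autonomous functions."""
    auto = set(autonomous_functions) & set(KILL_CHAIN)
    if auto == set(KILL_CHAIN):
        return "FAW"    # every engagement-related function is autonomous
    if auto:
        return "AW"     # at least one engagement-related function is autonomous
    return "not AW"     # merely "automatic" or fully human-operated

print(classify(KILL_CHAIN))                    # FAW
print(classify(["acquisition", "tracking"]))   # AW
print(classify([]))                            # not AW
```

Under this rule the excluded "merely automatic" systems are simply those whose listed functions fall outside the kill chain or below the complexity threshold.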

Under this paradigm, the treaty definition may simply exclude certain very simple systems, to be considered as merely “automatic” and not as AW. These exclusions, such as proximity fuses, mines, and heat-seeking missiles, can be enumerated and described in detail, either as an exhaustive list or as a set of typical examples. General technical criteria can also be given, including weapons type and complexity.

In addition, definitions will need to be given for those high-complexity FAW which are to be permitted as exceptions, principally those systems which are purely defensive against incoming projectiles which must be engaged in a time too short for human decision and response. The conditions under which such systems are permissible need to be spelled out; potential requirements include that they must be defending a human-inhabited location or vehicle, that they must be operated under accountable human supervision, and that to the greatest extent feasible they must give the human operator adequate information and maximum time and opportunity to abort or intervene in an erroneous engagement.

Standards

A problem related to definitions is the setting of criteria for human control and responsibility in the decision to use violent force. The difficulty and importance of this is indicated by the language of the US Department of Defense’s Directive on Autonomy in Weapon Systems, which refers repeatedly to “selection” of targets by a “human operator” as the crucial step that distinguishes “semi-autonomous weapons” (SAW) from fully autonomous weapons; a SAW may “cue” its operator to “potential targets,” but the operator must “select” them. Yet the definition offered for “target selection” – “The determination that an individual target or a specific group of targets is to be engaged.” – fails to clarify what this means in practice.

Does the operator need to move a cursor over the potential target’s image, or if there is only one potential “target group” in play, can the operator just say “Go”? If the operator is using some type of brain-computer interface, can “determination” be as little as a conscious decision?

We believe that in order for any level of “autonomy in engagement-related functions” to be acceptable under an AWC, clear requirements must be stated and met. Each engagement decision must be taken under the authority of an accountable commander, and the weapon system itself must be under the control of an accountable operator (who may be the same person). The commander must have sufficient information, without relying on machine assessment, target recognition or preprocessing of raw data, to distinguish combatants from noncombatants, to determine that the military objectives outweigh harm or risks to noncombatants and civilian objects, and to respect all other applicable rules of international humanitarian law. If the system, and other resources, do not provide sufficient information to make these determinations, the commander’s obligation is to hold fire. The operator must have positive control of target selection and engagement, so that unintended engagements are nearly impossible. If the system does not provide such positive control, the operator’s obligation is to refuse use of the system. Neither the commander nor the operator may evade responsibility as a result of technical limitations of the system.

Additionally, the AWC may set forth standards for the operator’s interface. An unmistakable, undeniable, affirmative action of the operator may be required both for “selection” when there is any degree of ambiguity, such as when multiple “potential targets” or “target groups” are indicated, and again to initiate engagement. Some kind of “handshaking” between the operator and system may be required for confirmation. Control by brain-computer interface may be prohibited.
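
One way to picture such a handshaking requirement is as a two-step state machine: a distinct selection action followed by a separate, matching confirmation before the system will engage. The `Console` class, its state names, and target identifiers below are assumptions made for illustration, not a proposed implementation.

```python
# Hypothetical sketch of the "unmistakable affirmative action" standard:
# engagement requires an explicit selection step and then a separate
# confirmation naming the same target; anything else aborts rather than fires.
class Console:
    def __init__(self):
        self.state = "idle"
        self.selected = None

    def select(self, target_id):
        # First affirmative action: unambiguous selection of one target.
        self.selected = target_id
        self.state = "selected"

    def confirm_engage(self, target_id):
        # Second affirmative action: must match the prior selection.
        if self.state == "selected" and self.selected == target_id:
            self.state = "engaged"
            return True
        # Mismatch or missing selection: reset to safe state, do not fire.
        self.state = "idle"
        self.selected = None
        return False

c = Console()
c.select("T-17")
assert c.confirm_engage("T-17") is True    # matching handshake completes
c2 = Console()
assert c2.confirm_engage("T-17") is False  # no prior selection: abort
```

The design point is that a single ambiguous action (or a bare "Go") can never complete the sequence on its own.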

Technical Safeguards and Verification of Human Control and Responsibility

If remotely-operated weapons (ROW), including armed uninhabited vehicles, and semi-autonomous weapons as described by the US Department of Defense, will continue to be used and permitted under an AWC, then in order to hold the line against crossing into prohibited fully autonomous weapons, technical safeguards and verification measures should be implemented. These should verify that each engagement of a weapon falling into one of these categories, as well as each operation of a permitted FAW for terminal defense, is carried out under the authority of an accountable commander and the control of an accountable operator.

ICRAC scientists have begun work toward proposing such technical measures. ICRAC’s assumption is that state parties to the AWC will be willing to accept on-site inspections, sharing of some data, requirements for more extensive private data recording and preservation, and the installation of monitoring and reporting devices with known, open-source functions, provided that the information revealed by such procedures is strictly circumscribed and costs are not excessive. The benefit to participating state parties is that such measures provide evidence of their compliance, thereby promoting the compliance of other states, and refute spurious allegations of noncompliance.

Proving that the command to select and to engage a particular target was the action of a particular person is difficult, but an evidence trail that such a command was given can be generated and made difficult to forge or tamper with. Such evidence could include video and other data that was presented to the operator, plus a video view that includes the operator’s hands on the controls and the operator’s view of the console. The data record would also include the commands as received by the console and codes for the identities of the accountable operator and accountable commander, which might be encrypted in physical keys which they are personally accountable for controlling at all times, and which are needed in order to operate the weapon.

A time slice of the data stream immediately prior to and including the selection and engagement commands could be designated as the primary record of the engagement. This record would be held by the state party, but a cryptographic code called a “hash” of the record would be recorded by a “glass box” (not “black,” because its hardware and software would be known and open), together with a time stamp of the moment the engagement command was issued. The hash would serve as a digital seal of the engagement record; if even a single bit of the record were later altered, the hash would not match. The hash and the time stamp, recorded together, could be referred to as a “use of force identifier” (UFI).

The UFIs would be periodically downloaded during on-site inspections by the TIO, which would also verify that the glass boxes were functional and properly installed. The UFIs would be held in a repository by the TIO. While the TIO would make every effort to ensure the security of the UFI database, its compromise would not reveal any useful intelligence, but only strings of gibberish.

To strengthen the evidence trail, glass boxes could also be installed at the receiving end, on armed uninhabited vehicles and other ROW. All ROW would need to be registered with the TIO, and the glass boxes would need to be periodically inspected and their data downloaded.

The glass box on the weapon would be capable of detecting the launch of a missile, firing of a gun, or other engagement action of the weapon, either independently, with a signal provided by the weapon, or both. It would record the time of the event, plus a hash of data generated by the weapon system, which the system would retain until downloaded to custody of the state party. The UFI would also be transmitted, through the weapon system’s communications links, from the glass box on the console to the glass box on the weapon, immediately following the engagement, and would be recorded by the glass box on the weapon. The time stamp of the UFI’s issuance at the console would have to be prior to the time stamp recorded for the weapon firing, in order for the firing to have been caused by a command from the safeguarded console. The presence of the UFI in the glass box on the weapon would also show that the particular weapon was in communication with the particular safeguarded console at the time of the engagement.
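
The two conditions in this paragraph can be checked mechanically after the fact. In the sketch below, the log structure and field names are assumptions for illustration: a firing is attributed to a safeguarded console only if the console's UFI appears in the weapon's glass-box log and its time stamp precedes the recorded firing time.

```python
# Hypothetical after-the-fact check linking a console-issued UFI to a
# weapon firing: the UFI must have been received by the weapon's glass
# box, and must have been issued before the firing occurred.
def firing_was_commanded(console_ufi, weapon_log):
    """console_ufi: (hash, issue_time).
    weapon_log: {'firing_time': float, 'received_ufis': [(hash, ts), ...]}."""
    ufi_hash, ufi_time = console_ufi
    received = dict(weapon_log["received_ufis"])
    return ufi_hash in received and ufi_time < weapon_log["firing_time"]

log = {"firing_time": 100.5, "received_ufis": [("ab12cd", 100.0)]}
assert firing_was_commanded(("ab12cd", 100.0), log)       # commanded firing
assert not firing_was_commanded(("ffff00", 100.0), log)   # UFI never received
```

A firing event with no matching, earlier UFI in the weapon's glass box would be prima facie evidence that the engagement was not commanded from a safeguarded console.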

This conceptual sketch is intended as representative of initial thinking about technical verification measures for an AWC, not the final word. The basic approach, though, seems plausible. Tactical information about engagements and technical details of weapon system hardware and software would not be disclosed, but the UFI hash codes would serve to prevent tampering with the records kept by the state party. In the event of a question about whether the weapon involved in a particular use of force was operating autonomously or under accountable human control, the state operating the weapon could be asked to produce the records it kept of that use of force, perhaps in an encrypted form but tamper-protected by the hash code held by the TIO. The state party could then selectively reveal verified details of the use-of-force event to an orderly process of inquiry conducted by the TIO.
