BILL READ FRAeS looks at how the prospect of autonomous weapons systems operating without human control is raising concerns over the ethical and legal implications of their development and use.

Northrop Grumman’s X-47B demonstrator (USN)

Ever since the first Predator remotely piloted UAV was used to launch an attack in Afghanistan in 2002, there has been a fierce debate over the morality of using weapons which are controlled from hundreds or even thousands of miles away. The argument is that using drones allows countries to kill people at long distance without risk to themselves and thus lowers the human cost of aggression - an argument that some commentators point out has been going on since the Iliad, when the ancient Greeks criticised the Trojans for using bows and arrows.

Up to now, unmanned military systems have always retained a ‘human in the loop’ who must follow certain rules and use their judgement and training to make the final decision over whether to launch weapons. However, military systems are constantly evolving. The current generation of military drones are vulnerable to air defence systems and are most effective in asymmetric war situations where there is a low threat of them being shot down. New military UAVs are being developed which are smarter, faster and have greater stealth capabilities.

However, making decisions takes time. Dispensing with a human operator and enabling an unmanned system to make its own decisions would enable a military force to be ‘quicker on the draw’. Such weapons are now possible, as artificial intelligence (AI) technologies have reached a point where it is feasible to create fully autonomous systems which can replicate the human ability to process different sources of data and use them to make their own decisions - a development that has been described as ‘the third revolution in warfare after gunpowder and nuclear arms’.

Killer drones?

MQ-9 Reaper. (USAF)

Questions are being raised over how such future autonomous systems would replicate the human decision process. What rules would they follow? Could they be ‘taught’ to make ethical decisions? Who would be responsible for their actions? The Center for a New American Security (CNAS) has launched a project to examine the legal, moral, ethical, policy and strategic stability dimensions of increased autonomy in future weapon systems (https://s3.amazonaws.com/files.cnas.org/documents/Ethical-Autonomy-Working-Paper_021015_v02.pdf). The issue of autonomous weapons has also been debated by the United Nations Convention on Certain Conventional Weapons (CCW). This year, the UN said that it is ‘closely following developments related to the prospect of weapons systems that can autonomously select and engage targets, with concern that technological developments may outpace normative deliberations.’

A more vociferous protest comes from a coalition of 54 non-governmental organisations which have joined together to promote the ‘Campaign to Stop Killer Robots’ (https://www.stopkillerrobots.org/), which says that the development of autonomous weapon systems would pose: ‘a fundamental challenge to the protection of civilians and to compliance with international human rights and humanitarian law’. The Campaign also argues that: ‘Allowing life or death decisions to be made by machines crosses a fundamental moral line. Autonomous robots would lack human judgment and the ability to understand context. These qualities are necessary to make complex ethical choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack. As a result, fully autonomous weapons would not meet the requirements of the laws of war… Agreement is needed now to establish controls on these weapons before investments, technological momentum, and new military doctrine make it difficult to change course.’


What is an autonomous system?

Some autonomous weapons are already in operation. Used as a last line of defence against anti-ship missiles, the Phalanx close-in weapon system has an automated fire-control system which can detect and destroy targets automatically. (USN)

There is currently no standard definition of exactly what constitutes an autonomous weapon. Autonomy can be used to define the command-and-control relationship between human and machine, how complex the machine is or the type of decision being automated. Focusing on the first definition, autonomous weapons can be divided into three basic types defined by the relationship between the machine and a human controller:

(a) Semi-autonomous (human in the loop) - weapons that perform certain functions and then stop and wait for human input;

(b) Human-supervised autonomous (human on the loop) - weapons that can perform a function on their own but are monitored by humans and can be overridden if the machine malfunctions; and

(c) Fully autonomous (human out of the loop) - weapons that can operate on their own with humans unable to intervene.

Human on the loop supervised systems are used to defend human-occupied targets against incoming threats that meet certain criteria, such as air and missile defence systems. Human controllers are aware of the targets being engaged but do not have to give permission to engage specific targets. Human controllers can halt the weapon system either electronically or through hardware-level overrides in the event of a software malfunction or cyber attack.
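As an illustration only, the short Python sketch below models these three command-and-control relationships as an explicit gate on any engagement decision. It represents no real weapon-control software; the class, function and parameter names are all hypothetical.

```python
# Hypothetical sketch of the three human/machine control relationships.
from enum import Enum, auto
from typing import Optional


class ControlMode(Enum):
    SEMI_AUTONOMOUS = auto()    # human in the loop: stop and wait for input
    HUMAN_SUPERVISED = auto()   # human on the loop: proceed unless overridden
    FULLY_AUTONOMOUS = auto()   # human out of the loop: no intervention possible


def may_engage(mode: ControlMode,
               human_approved: Optional[bool],
               human_override: bool) -> bool:
    """Return True if the system is permitted to engage a detected target."""
    if mode is ControlMode.SEMI_AUTONOMOUS:
        # Human in the loop: nothing happens without explicit approval.
        return human_approved is True
    if mode is ControlMode.HUMAN_SUPERVISED:
        # Human on the loop: engagement proceeds unless the operator intervenes.
        return not human_override
    # Fully autonomous: the machine decides entirely on its own.
    return True


print(may_engage(ControlMode.SEMI_AUTONOMOUS, human_approved=None, human_override=False))   # False
print(may_engage(ControlMode.HUMAN_SUPERVISED, human_approved=None, human_override=True))   # False
print(may_engage(ControlMode.FULLY_AUTONOMOUS, human_approved=None, human_override=False))  # True
```

The point of the gate is that only the first two modes leave a human any role at all; in the third, the decision collapses entirely into the machine’s own logic.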

A swarm of Perdix micro-UAVs released from US fighters in a test in January 2017. (USAF)

One example of a current ‘human out of the loop’ weapon is the loitering munition, which is launched into a general area where it looks for targets within a general class, such as radars, ships or tanks, and then engages them without human intervention. There have also been tests using swarms of drones designed to overcome enemy defences through sheer numbers; too numerous to be controlled individually, they have to operate autonomously following general rules.

Systems and decision making

Speaking at a recent RAeS lecture (Future Design Drivers for Autonomous Systems Technology, 19 July 2017), Keith Rigby, Principal Technologist – Weapons Systems Integration at BAE Systems, explained how the current generation of armed UAVs required two elements to operate - systems and decision making. In addition to the basic requirement of being able to fly over certain distances for certain lengths of time, the platform needs to be equipped with systems which enable it to speedily provide its operator with accurate information to enable them to make informed decisions. These include sensors, communication links and speed of processing. “All these system components are provided by different organisations and need to be integrated to make them work together,” said Rigby. “However, the overall system performance is dictated by the sum of its parts and a system is only as good as the weakest link in the chain. Sensors are limited by the laws of physics and may operate differently depending on different weather conditions and whether it is night or day. The speed of data processing is limited by current technology, communication links are limited by available and usable bandwidth and satellite communications can drop out.”
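Rigby’s ‘weakest link’ point can be illustrated with a deliberately simple, back-of-envelope Python sketch. Every stage, latency and availability figure below is invented for illustration; the sketch only shows how decision time accumulates along the chain while overall availability is dragged down by the least reliable component.

```python
# Invented figures: a toy model of a sensor-to-operator information chain.
from dataclasses import dataclass
from math import prod


@dataclass
class Stage:
    name: str
    latency_s: float      # time this stage adds before the operator can decide
    availability: float   # probability the stage is working (weather, dropouts, etc)


chain = [
    Stage("EO/IR sensor", 0.5, 0.90),          # degraded by weather and darkness
    Stage("On-board processing", 1.0, 0.99),   # limited by current technology
    Stage("Satellite link", 1.5, 0.95),        # bandwidth-limited, can drop out
    Stage("Ground station display", 0.5, 0.99),
]

total_latency = sum(s.latency_s for s in chain)                # delays add up
end_to_end_availability = prod(s.availability for s in chain)  # reliabilities multiply
weakest = min(chain, key=lambda s: s.availability)

print(f"Delay before the operator even sees the picture: {total_latency:.1f} s")
print(f"Chance the whole chain is working: {end_to_end_availability:.2f}")
print(f"Weakest link: {weakest.name} ({weakest.availability:.0%} available)")
```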

Following the rules

BAE Systems’ Taranis UCAV demonstrator was unarmed. (BAE Systems)

However, even once all the information has been integrated and sent back, it still needs to be assessed and interpreted by the UAV operator. This is not an easy task. “How good is a remote operator’s situational awareness?” asked Rigby. “Can they distinguish a military target from a civilian building, such as a school or a hospital?”

Human controllers of armed UAVs also have to ensure that the weapon complies with the laws of armed conflict (LOAC). A part of international law created under the 1949 Geneva Conventions, the LOAC regulates the conduct of armed hostilities, as well as protecting civilians, prisoners of war, the wounded, sick and shipwrecked (https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/27874/JSP3832004Edition.pdf). Signatory governments are required to design a programme that ensures that the laws are observed, that violations are prevented and reported if they do happen and that all forces are trained in LOAC. It also includes a requirement for a legal review of new weapons to ensure that military personnel do not use any that violate international law, such as poison weapons and expanding hollow-point bullets.

The LOAC has three principles governing armed conflict:

  1. Military necessity - Combat forces should only engage in those acts necessary to accomplish a legitimate military objective.
  2. Distinction - Combatants must only engage valid military targets and discriminate between lawful combatant targets and noncombatant targets, such as civilians, civilian property, POWs and wounded personnel out of combat.
  3. Proportionality - The force used must not exceed that needed to accomplish the military objective (“You can’t flatten a city to get one person”, said Rigby).

In addition, an autonomous weapons system without human control would have to follow the rules of engagement (ROE) which, in military doctrines, provide authorisation for and limits on the use of force, the positioning and posturing of forces and the deployment of certain specific capabilities.
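Purely as an illustration of what ‘following the rules’ would mean in software terms, the sketch below encodes the three LOAC principles and two hypothetical ROE constraints as explicit pre-engagement checks. Real targeting decisions rest on qualitative human judgement that simple fields like these cannot capture; every field, name and threshold here is an assumption made for the example.

```python
# Hypothetical encoding of LOAC principles and ROE constraints as boolean checks.
from dataclasses import dataclass


@dataclass
class Target:
    is_valid_military_objective: bool   # military necessity
    is_combatant: bool                  # distinction
    expected_civilian_harm: float       # proportionality input (invented 0-1 scale)
    military_advantage: float           # proportionality input (invented 0-1 scale)


@dataclass
class RulesOfEngagement:
    area_cleared_for_force: bool        # ROE can limit where force may be used
    weapon_type_authorised: bool        # ...and which capabilities may be employed


def loac_and_roe_check(target: Target, roe: RulesOfEngagement) -> bool:
    """Refuse engagement unless every principle and ROE constraint is satisfied."""
    necessity = target.is_valid_military_objective
    distinction = target.is_combatant
    proportionality = target.expected_civilian_harm <= target.military_advantage
    roe_ok = roe.area_cleared_for_force and roe.weapon_type_authorised
    return necessity and distinction and proportionality and roe_ok


# A target that fails distinction is refused, whatever the claimed advantage.
print(loac_and_roe_check(
    Target(True, False, expected_civilian_harm=0.1, military_advantage=0.9),
    RulesOfEngagement(area_cleared_for_force=True, weapon_type_authorised=True)))  # False
```

Even in this toy form, the hard part is obvious: someone, or something, still has to decide what values to put in those fields.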

Who is responsible?

Reaper operation station. Could an autonomous system take the same decisions on its own? (USAF)

Up to now, engineering teams working on a UAV or other weapon systems have only been responsible for creating the platforms and systems, leaving their actual operation to the military. However, with an autonomous system, engineers would also have to create the way that the machine thinks.

This development would have important implications regarding liability. Keith Rigby said that the ongoing development of ‘smart’ precision weapons had led to the expectation that, if you deployed a weapon, then it would hit a valid target. “People expect perfection whether or not you can achieve it.”

If an autonomous weapon did cause non-military casualties - who would be to blame? Would it be the machine itself, the government or the armed forces that operated it, the manufacturer or the engineers that designed and built it? Rigby explained how engineering teams would have to programme the autonomous system to follow the same ethical and legal requirements observed by humans. He considered that engineers working on autonomous systems would have to conform to the Statement of Ethical Principles for the Engineering Profession, which states that:

- Professional engineers and technicians should give due weight to all relevant law, facts, published guidance and the wider public interest

- They should ensure all work is lawful and justified

- They should minimise and justify any adverse effect on society or on the natural environment for their own and succeeding generations

- They should hold paramount the health and safety of others, act honourably, responsibly and lawfully and uphold the reputation, standing and dignity of the engineering profession.

Whose rules?

In 2012 RAF 13 Squadron became the first UK-based Reaper operator. (RAF)

But what rules should engineers follow? Currently, the only countries to have defined rules relating to the use of autonomous military systems are the UK and the USA. The US Department of Defense has a directive (Directive 3000.09) which sets out guidelines on policy and responsibility regarding the development of autonomous weapon systems and minimising the probability of ‘unintended engagements’ (http://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf). The directive states that: ‘Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.’ It also includes the requirement that semi-autonomous weapon systems must be designed to avoid the risk of engaging targets that have not been previously selected by an ‘authorised human operator’.
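A minimal sketch of how that last requirement might be enforced in software is given below. The register, identifiers and method names are hypothetical and are not taken from the directive; the sketch simply shows the principle that any target not explicitly selected by an authorised human is refused by design.

```python
# Hypothetical enforcement of 'engage only human-selected targets'.
from typing import Set


class AuthorisedTargetRegister:
    """Records targets explicitly selected by an authorised human operator."""

    def __init__(self) -> None:
        self._approved: Set[str] = set()

    def authorise(self, target_id: str, operator_is_authorised: bool) -> None:
        # Only an authorised operator can add a target to the register.
        if operator_is_authorised:
            self._approved.add(target_id)

    def may_engage(self, target_id: str) -> bool:
        # Anything not previously selected by a human is refused by default.
        return target_id in self._approved


register = AuthorisedTargetRegister()
register.authorise("TGT-042", operator_is_authorised=True)
print(register.may_engage("TGT-042"))  # True: previously selected by a human
print(register.may_engage("TGT-099"))  # False: never selected, so never engaged
```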

The policy of the UK is that the ‘autonomous release of weapons’ will not be permitted and that ‘…operation of weapon systems will always be under human control.’ In March 2014, the UK Government published a report on a House of Commons Defence Committee inquiry into ‘Remote Control: Remotely Piloted Air Systems—current and future UK use’ (https://publications.parliament.uk/pa/cm201415/cmselect/cmdfence/611/611.pdf). Research for the inquiry included a visit to RAF XIII Squadron, which has operated Reaper UCAVs in combat situations over Afghanistan. The report stated that Reaper aircrews ‘exhibited a strong sense of connection to the life and death decisions they are sometimes required to take’ and only operated remotely piloted air systems (RPAS) in accordance with UK rules of engagement.

However, other States have either not yet developed such policies or not discussed them openly.

Programming in ethics

IAI’s Harop loitering munition can operate with a man in the loop or fully autonomously. (IAI)

Rigby admitted that developing an autonomous system to follow these rules would not be an easy task. “Putting these drivers together to define an autonomy function is very complex,” he said. “The competence of a UAV operator is influenced by their training, experience and individual knowledge - all factors beyond the remit of the engineer who designed the original systems. How can you validate a weapon to cover every possible situation? How do you put rules of engagement into a UAV system when the ROE may not even stay the same during the course of a mission? How can a robot replicate such qualities as ethics, courage, self-discipline or harm limitation? We can’t yet do this today but we may need to in the future.”

This conclusion was shared by an expert meeting organised by the International Committee of the Red Cross in March 2014 (https://www.icrc.org/en/.../4221-002-autonomous-weapons-systems-full-report.pdf), which concluded that: ‘programming a machine to undertake the qualitative judgements required to apply the IHL (international humanitarian law) rules of distinction, proportionality and precautions in attack, particularly in complex and dynamic conflict environments, would be extremely challenging. The development of software capable of carrying out such qualitative judgements is not possible with current technology and is unlikely to be possible in the foreseeable future.’

However, there might be a way that such programming could be done. Rigby went on to say that developing ‘hard code’ to programme legal and ethical attributes into an autonomous weapons system was probably impossible and that the only way to achieve it would be to have a system which learned and developed as it went along, a future development that he described as ‘scary’.
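The architecture Rigby alludes to - a component that learns from experience, wrapped inside fixed, hand-written rules that can veto its output - might look something like the sketch below. This is a hedged illustration only: the ‘learning’ step is a trivial stand-in rather than a real algorithm, and all names and thresholds are invented.

```python
# Hypothetical pairing of a learned assessor with a fixed, hand-written safety gate.
from typing import Dict


class LearnedAssessor:
    """Toy 'learning' component: adjusts a confidence score from feedback."""

    def __init__(self) -> None:
        self.confidence: Dict[str, float] = {}

    def assess(self, track_id: str) -> float:
        # Unknown tracks start at a neutral 0.5 confidence.
        return self.confidence.get(track_id, 0.5)

    def feedback(self, track_id: str, was_correct: bool) -> None:
        # The system 'learns and develops as it goes along': nudge confidence up or down.
        current = self.assess(track_id)
        self.confidence[track_id] = (min(1.0, current + 0.1) if was_correct
                                     else max(0.0, current - 0.2))


def hard_coded_gate(assessed_confidence: float, human_authorised: bool) -> bool:
    """Fixed rules that no amount of learning is allowed to override."""
    return human_authorised and assessed_confidence >= 0.9


assessor = LearnedAssessor()
print(hard_coded_gate(assessor.assess("track-1"), human_authorised=True))  # False: not confident enough
```

Whether such a hard-coded gate could ever be comprehensive enough to satisfy LOAC is, of course, exactly the question the ICRC and Rigby raise.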

Are autonomous systems inevitable?

Boeing X-45A UCAV demonstrator. (NASA)

Rigby concluded that: “There are two schools of thought on the future development of autonomous weapon systems. You can try to ban them or you can continue to develop them. If such autonomous systems are created, then the engineers and manufacturers involved in their development must take into account not only the military requirements of the user but also the wider issues of the legal and ethical frameworks that relate to how they might be used.”

However, there are concerns that, even if the major military powers were persuaded not to deploy autonomous weapons, or they were developed carefully to create a perfect system that would only engage legitimate military targets, such weapons might still be used by rogue operators unconcerned with legal or ethical niceties. “While we try to make a system perfect, an aggressor may develop a system which beats us to the military objective,” one participant at the RAeS conference commented. “While engineers take the moral high ground, there may be other programmers out there with different ideas.”

Conclusion

The Israel Aerospace Industries (IAI) ROTEM L multi-rotor loitering munition for ground forces carries a 1kg double grenade warhead and can hover until deployed by a human operator. (IAI)

The final word comes from an open letter published in 2015 by a group of AI and robotics researchers (https://futureoflife.org/open-letter-autonomous-weapons/), whose signatories included Prof Stephen Hawking, Elon Musk and Steve Wozniak: ‘Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.’

Bill Read
25 July 2017
