


1 AFOSR – January 2012 Who Do You Trust? Inappropriate Trust and War Crimes: A Case for Ethical Autonomous Systems Ronald C. Arkin Mobile Robot Laboratory Georgia Institute of Technology

2 AFOSR – January 2012 The Question of Trust
Serious secondary consequences in the use of lethal force:
Infractions of International Humanitarian Law resulting in illegal deaths of non-combatants
War crime charges
Political fallout
Effect on troop morale
Hostility among the local population
Citizen reticence toward mission accomplishment
Can technology be used to reduce the likelihood of such events and to document criminal acts should they occur?
There is ongoing international discourse on the subject vis-à-vis lethal autonomous systems.

3 AFOSR – January 2012 Current Motivators for Military Robotics
Force multiplication: reduce the number of soldiers needed
Expand the battlespace: conduct combat over larger areas
Extend the warfighter’s reach: allow individual soldiers to strike farther
Reduce friendly casualties
The use of AI and robotics to reduce ethical infractions in the military does not yet appear anywhere.

4 AFOSR – January 2012 Robots for the Battlefield
A South Korean robot provides either an autonomous lethal or non-lethal response, with an automatic mode capable of making the decision on its own.
iRobot provides Packbots capable of tasering enemy combatants; some are also equipped with the highly lethal MetalStorm system.
The SWORDS platform is in Iraq and Afghanistan and can carry lethal weaponry (M240 or M249 machine guns, or a .50 caliber rifle). A new MAARS version is in development.
Israel has considered deploying stationary robotic gun-sensor platforms along the Gaza border in automated kill zones, with machine guns and armored folding shields.
The U.S. Air Force fields the hunter-killer UAV, the MQ-9 Reaper, successor to the Predator and widely used in Afghanistan.
China is developing the “Invisible Sword”, a deep-strike armed stealth UAV.
There are many other examples, both domestic and international.

5 AFOSR – January 2012 Force-bearing U.S. Unmanned Systems Source: FY Unmanned Systems Integrated Roadmap, Department of Defense

6 AFOSR – January 2012 “For a significant period into the future, the decision to pull the trigger or launch a missile …, but it will remain under the full control of a human operator. Many aspects of the firing sequence will be fully automated but the decision to fire will not likely be fully automated until legal, rules of engagement, and safety concerns have all been thoroughly examined and resolved.”

7 AFOSR – January 2012 “Authorizing a machine to make lethal combat decisions is contingent upon political and military leaders resolving legal and ethical questions. … Ethical discussions and policy decisions must take place in the near term … rather than allowing the development to take its own path apart from this critical guidance.”

8 AFOSR – January 2012

9 AFOSR – January 2012 Lethal Autonomy is Inevitable [D. Kenyon, DDRE 2010]
It is already deployed in the battlespace: cruise missiles, the Navy Phalanx (Aegis-class cruisers), the Patriot missile, fire-and-forget systems, even land mines by some definitions.
Will there always be a human in the loop? “Human on the loop” (Air Force); “Leader in the Loop” (Army).
The increasing tempo of warfare forces it upon us.
The fallibility of human decision-making.
The only possible prevention is an international treaty/prohibition.
Despite protestations to the contrary from many sides, autonomous lethality seems inevitable.

10 AFOSR – January 2012 Should soldiers be robots? Isn’t that largely what they are trained to be? Should robots be soldiers? Could they be more humane than humans?

11 AFOSR – January 2012 How can we avoid this? Kent State, Ohio, Anti-war protest My Lai, Vietnam Abu Ghraib, Iraq Haditha, Iraq Afghanistan

12 AFOSR – January 2012 HUMAN FAILINGS IN THE BATTLEFIELD
Surgeon General’s Office, Mental Health Advisory Team (MHAT) IV, Operation Iraqi Freedom 05-07, Final Report, Nov. 17, 2006:
Approximately 10% of Soldiers and Marines reported mistreating non-combatants (damaging/destroying Iraqi property when not necessary, or hitting/kicking a non-combatant when not necessary).
Only 47% of Soldiers and 38% of Marines agreed that non-combatants should be treated with dignity and respect.
Well over a third of Soldiers and Marines reported that torture should be allowed, whether to save the life of a fellow Soldier or Marine or to obtain important information about insurgents.
17% of Soldiers and Marines agreed or strongly agreed that all noncombatants should be treated as insurgents.
Just under 10% of Soldiers and Marines reported that their unit modifies the ROE to accomplish the mission.
45% of Soldiers and 60% of Marines did not agree that they would report a fellow Soldier or Marine who had injured or killed an innocent noncombatant.
Only 43% of Soldiers and 30% of Marines agreed they would report a unit member for unnecessarily damaging or destroying private property.
Less than half of Soldiers and Marines would report a team member for unethical behavior.
A third of Marines and over a quarter of Soldiers did not agree that their NCOs and Officers made it clear not to mistreat noncombatants.
Although they reported receiving ethics training, 28% of Soldiers and 31% of Marines reported facing ethical situations in which they did not know how to respond.
Soldiers and Marines were more likely to report mistreating Iraqi noncombatants when they were angry, and were twice as likely to engage in unethical battlefield behavior when angry as when they had low levels of anger.
Combat experience, particularly losing a team member, was related to an increase in ethical violations.

13 AFOSR – January 2012 Possible explanations for the persistence of war crimes by combat troops
High friendly losses leading to a tendency to seek revenge.
High turnover in the chain of command, leading to weakened leadership.
Dehumanization of the enemy through the use of derogatory names and epithets.
Poorly trained or inexperienced troops.
No clearly defined enemy.
Unclear orders, where the intent of the order may be interpreted incorrectly as unlawful.
Youth and immaturity of troops.
Pleasure from the power of killing, or an overwhelming sense of frustration.
There is clear room for improvement, and autonomous systems may help.

14 AFOSR – January 2012 What can robotics offer to make these situations less likely to occur? Is it not our responsibility as scientists to look for effective ways to reduce man’s inhumanity to man through technology? Research in ethical military robotics could and should be applied toward achieving this end. How can this happen?

15 AFOSR – January 2012 Underlying Research Thesis: Robots can ultimately be more humane than human beings in military situations
It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that it can perform more ethically than human soldiers are capable of.

16 AFOSR – January 2012 Objective: Robots that possess an ethical code
1. Provided with the right to refuse an unethical order
2. Monitor and report the behavior of others
3. Incorporate existing laws of war, battlefield and military protocols: the Geneva and Hague Conventions; the Rules of Engagement

17 AFOSR – January 2012 Limited Circumstances for Use
Specialized missions only (bounded morality applies): room clearing, countersniper operations, DMZ/perimeter protection.
Interstate warfare, not counterinsurgency.
Minimize the likelihood of civilian encounter (e.g., leaflets).
Alongside soldiers, not as a replacement: human presence in the battlefield should be maintained.

18 AFOSR – January 2012 Reasons for Ethical Autonomy
In the future, autonomous robots may be able to perform better than humans under battlefield conditions:
The ability to act conservatively: they do not need to protect themselves in cases of low certainty of target identification.
The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.
They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.
Avoidance of the human psychological problem of “scenario fulfillment” is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the USS Vincennes in 1988 [Sagan 91].
They can integrate more information from more sources, far faster than a human possibly could in real time, before responding with lethal force.
When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting any infractions observed.

19 AFOSR – January 2012 Reasons Against Autonomy
Responsibility: who is to blame?
Lower threshold of entry into war: violates jus ad bellum.
Risk-free warfare: unjust.
It can’t be done right: discrimination is too hard for machines.
Effect on squad cohesion.
Robots running amok (science fiction).
Refusing an order.
Overrides falling into the wrong hands.
Co-opting of the effort by the military as justification.
Winning hearts and minds.
Proliferation.
Mission creep.

20 AFOSR – January 2012 What to Represent
The underlying principles that guide modern military conflict are:
Military Necessity: one may target those things which are not prohibited by the LOW and whose targeting will produce a military advantage.
Military Objective: persons, places, or objects that make an effective contribution to military action.
Humanity or Unnecessary Suffering: one must minimize unnecessary suffering, incidental injury to people, and collateral damage to property.
Proportionality: the US Army prescribes the test of proportionality from a clearly utilitarian perspective: “The loss of life and damage to property incidental to attacks must not be excessive in relation to the concrete and direct military advantage expected to be gained.” [US Army 56, para. 41, change 1]
Discrimination or Distinction: one must discriminate or distinguish between combatants and non-combatants, and between military objectives and protected people/protected places.
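The Army's proportionality test quoted above is explicitly utilitarian, so it can be caricatured as a comparison of two estimated quantities. The sketch below (Python used purely for illustration; this document contains no code) is a toy rendering only: the function name, the scalar scale, and the assumption that harm and advantage are commensurable are all assumptions not made in the talk.

```python
def proportionality_ok(expected_military_advantage: float,
                       expected_collateral_harm: float) -> bool:
    """Toy rendering of the proportionality test: incidental loss must
    not be excessive in relation to the concrete and direct military
    advantage expected. Treating both as comparable scalar estimates
    is an illustrative assumption, not part of the doctrine."""
    return expected_collateral_harm <= expected_military_advantage
```

In practice neither quantity is a single number, which is exactly why "proportionality optimization" appears later in this talk as an open research question.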

21 AFOSR – January 2012 Action-based Machine Ethics
The logical relationships between these action classes:
1. If an action is permissible, then it is potentially obligatory but not forbidden.
2. If an action is obligatory, it is permissible and not forbidden.
3. If an action is forbidden, it is neither permissible nor obligatory.
Summarizing: the Laws of War and Rules of Engagement determine what are absolutely forbidden lethal actions. The Rules of Engagement and mission requirements determine what is obligatory lethal action, i.e., where and when the agent must exercise lethal force. Permissibility alone is inadequate.
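The three relationships above, and the closing point that permissibility alone is inadequate for lethal force, can be written down as a small consistency check. A minimal sketch (Python chosen for illustration; the enum and function names are assumptions, not from the talk):

```python
from enum import Enum

class Status(Enum):
    FORBIDDEN = "forbidden"
    PERMISSIBLE = "permissible"   # permitted but not required
    OBLIGATORY = "obligatory"     # required, hence also permitted

def is_permissible(s: Status) -> bool:
    # Relationship 2: obligatory actions are also permissible.
    return s in (Status.PERMISSIBLE, Status.OBLIGATORY)

def is_forbidden(s: Status) -> bool:
    # Relationship 3: forbidden is neither permissible nor obligatory.
    return s is Status.FORBIDDEN

def may_use_lethal_force(s: Status) -> bool:
    # "Permissibility alone is inadequate": lethal force requires obligation.
    return s is Status.OBLIGATORY
```

Encoding the classes as mutually exclusive statuses makes relationship 1 (permissible means not forbidden) hold by construction.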

22 AFOSR – January 2012 Behavioral Action Space
[Diagram of the behavioral action space showing the sets P_permissible, P_lethal, P_l-ethical, and P_l-unethical: unethical and permissible actions regarding the intentional use of lethality.]

23 AFOSR – January 2012 Permission to Fire
[ { ∀ c_forbidden : c_forbidden(S_i) } ∧ { ∃ c_obligate : c_obligate(S_i) } ] ⟺ PTF(S_i)
All forbidden constraints must be upheld, and at least one obligating constraint must hold, if and only if permission to fire is granted.

24 AFOSR – January 2012 Permission to Fire
( OVERRIDE(S_i) xor [ { ∀ c_forbidden : c_forbidden(S_i) } ∧ { ∃ c_obligate : c_obligate(S_i) } ] ) ⟺ PTF(S_i)
The operator can override. All forbidden constraints must be upheld, and at least one obligating constraint must hold, if and only if permission to fire is granted.
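The two permission-to-fire formulas above reduce to a short predicate over the two constraint sets, with the operator override acting as an exclusive-or on the result. The sketch below (Python for illustration) models constraints as boolean predicates over a situation record; the example constraints and field names are hypothetical, not from the talk.

```python
from typing import Callable, Iterable

# A constraint maps a situation S_i (here a plain dict) to True/False.
Constraint = Callable[[dict], bool]

def permission_to_fire(situation: dict,
                       forbidden: Iterable[Constraint],
                       obligating: Iterable[Constraint],
                       override: bool = False) -> bool:
    """PTF(S_i): every forbidden constraint must be upheld AND at least
    one obligating constraint must hold; an operator OVERRIDE is xor'd
    with that result, flipping the decision either way."""
    base = (all(c(situation) for c in forbidden)
            and any(c(situation) for c in obligating))
    return override != base  # boolean xor

# Hypothetical illustrative constraints:
no_noncombatants = lambda s: not s.get("noncombatants_present", True)
hostile_fire     = lambda s: s.get("hostile_fire_confirmed", False)
```

Note the asymmetry built into the formula: forbidding constraints are universally quantified (one violation denies fire) while obligating constraints are existentially quantified (one suffices to obligate).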

25 AFOSR – January 2012 Operator Overrides Negative Overrides: Denying Permission to Fire in the presence of obligating constraints Positive Overrides: Granting Permission to Fire in the presence of forbidding ethical constraints

26 AFOSR – January 2012 Principle of Double Intention
[Diagram relating the stages of engagement to the governing principles:]
Responsibility: a human grants the use of autonomous lethal force in a given situation (pre-mission).
Military Necessity: establishes the criteria for targeting.
Discrimination: target identified as a legitimate combatant.
Proportionality: weapon selection, firing pattern, tactics to engage the target, approach and stand-off distance.
Target Engaged.

27 AFOSR – January 2012 Architectural Desiderata
1. Permission to kill alone is inadequate; the mission must explicitly obligate the use of lethal force.
2. The Principle of Double Intention, which extends beyond the LOW requirement of the principle of Double Effect, is enforced.
3. In appropriate circumstances, tactics can be used to encourage surrender over lethal force, which is feasible due to the reduced requirement of self-preservation.
4. Strong evidence of hostility is required (being fired upon or clear hostile intent), not simply possession or display of a weapon. Tactics can be used to determine hostile intent without premature use of lethal force (e.g., close approach, inspection).
5. For POWs, the system has no lingering anger after surrender; reprisals are not possible.
6. There is never intent to target a noncombatant.
7. Proportionality may be more effectively determined given the absence of a strong requirement for self-preservation, reducing the need for overwhelming force.
8. A system request to invoke lethality triggers an ethical evaluation.
9. Adherence to the principle of “first, do no harm”: in the absence of certainty (as defined by τ), the system is forbidden from acting in a lethal manner.

28 AFOSR – January 2012 Architectural Requirements
Ethical Situation Requirement: ensure that only situations S_j that are governed (spanned) by C can result in ρ_lethal-ij (a lethal action for that situation). Lethality cannot result in any other situation.
Ethical Response Requirement: ensure that only permissible actions ρ_permissible result in the intended response in a given situation S_j (i.e., actions that either do not involve lethality or are ethical lethal actions constrained by C).
Unethical Response Prohibition: ensure that any unethical response ρ_l-unethical-ij is either:
1. mapped onto the null action ø (inhibited from occurring if generated by the original controller);
2. transformed into an ethically acceptable action by overwriting the generating unethical response ρ_l-unethical-ij, perhaps with a stereotypical non-lethal action or maneuver, or by simply eliminating the lethal component associated with it; or
3. precluded from ever being generated by the controller in the first place, by suitable design through the direct incorporation of C into the design of B.
Obligated Lethality Requirement: in order for a lethal response ρ_lethal-ij to result, there must exist at least one constraint c_k derived from the ROE that obligates the use of lethality in situation S_j.
Jus in Bello Compliance: in addition, the constraints C must be designed to result in adherence to the requirements of proportionality (incorporating the principle of double intention) and combatant/noncombatant discrimination of Jus in Bello.
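The Unethical Response Prohibition lists three handling options for an unethical response: inhibit it (map to the null action ø), transform it into an acceptable action, or preclude it at controller design time. The first two are runtime operations, and a minimal sketch can show their shape (Python for illustration; actions are represented as dicts with illustrative field names, an assumption not made in the talk). Option 3 is a design-time property of the controller and is not shown.

```python
from typing import Optional

def govern_response(action: dict, can_transform: bool = True) -> Optional[dict]:
    """Runtime handling of an unethical response per options 1 and 2:
    either overwrite it with a variant whose lethal component has been
    stripped (option 2), or inhibit it entirely by mapping it onto the
    null action, represented here as None (option 1)."""
    if not action.get("unethical", False):
        return action  # permissible response: pass through unchanged
    if can_transform:
        # Option 2: eliminate the lethal component of the generated response.
        return dict(action, lethal=False, unethical=False)
    # Option 1: map onto the null action ø.
    return None
```

The design choice between the options matters operationally: nulling a response halts the behavior entirely, while transforming it preserves a (non-lethal) action such as a maneuver.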

29 AFOSR – January 2012 Ethical Architectural Components
Ethical Governor: suppresses, restricts, or transforms any lethal behavior produced so that it falls within the range of ethical responses.
Ethical Behavioral Control: constrains all active behaviors so that only lethal ethical behavior is produced by each individual active behavior.
Ethical Adaptor: if an executed behavior is determined to have been unethical, adapts the system to prevent or reduce the likelihood of a recurrence.
Responsibility Advisor: advises the operator of responsibilities prior to mission deployment and monitors for constraint violations during the mission.

30 AFOSR – January 2012 Example Scenarios “Military declined to Bomb Group of Taliban at Funeral” AP article 9/14/2006 Korean DMZ Surveillance and Guard Robot “Apache Rules the Night” Urban Sniper

31 AFOSR – January 2012 Video Results Available on Website
Videos available at:
Ethical Governor (short: 2:06; long: 6:11)
Operator Overrides (6:00)
Moral Emotions (4:16)

32 AFOSR – January 2012 Open Research Questions Regarding Autonomy and Lethality
The use of proactive tactics or intelligence to enhance target discrimination.
Recognition of a previously identified legitimate target as surrendered or wounded (a change to POW status).
Fully automated combatant/noncombatant discrimination in battlefield conditions.
Proportionality optimization using the Principle of Double Intention over a given set of weapons systems and methods of employment.
In-the-field assessment of military necessity.
Practical planning in the presence of moral constraints and the need for responsibility attribution.
The establishment of benchmarks, metrics, and evaluation methods for ethical/moral agents.
Real-time situated ethical operator advisory systems embedded with warfighters to remind them of the consequences of their actions.

33 AFOSR – January 2012 Summary
1. There remain many challenging research questions regarding lethality and autonomy yet to be resolved.
2. Roboticists should not run from the difficult ethical issues surrounding the use of their intellectual property that is or will be applied to warfare, whether or not they directly participate.
3. Proactive management of these issues is necessary.
4. A prototype architecture has been implemented and successfully tested in simulated mission scenarios.
5. Generalization to other ethical domains is proposed.

34 AFOSR – January 2012 For further information...
Governing Lethal Behavior in Autonomous Robots, Chapman and Hall, May 2009.
Mobile Robot Laboratory website: multiple relevant papers available.
IEEE RAS Technical Committee on Robo-ethics.
IEEE Society on Social Implications of Technology.
CS 4002 – Robots and Society course (Georgia Tech).

35 AFOSR – January 2012

36 [Economist, 6/7/07]

37 AFOSR – January 2012 NBC Nightly News Report 9/13/06

38 AFOSR – January 2012 Apache Rules the Night

39 AFOSR – January 2012 Samsung Techwin Korean DMZ Surveillance and Guard Robot

40 AFOSR – January 2012 Urban Sniper Scenario From Full Metal Jacket (1987) Ft. Benning McKenna Mout Site (2005)

41 AFOSR – January 2012 Within the last year alone:
2/16/11: Watson outsmarts human champions on Jeopardy!
4/14/11: Brazil’s augmented-reality eyeglasses for identifying terrorists/criminals at the Olympics: claims 400 faces/sec with 46K biometric points per face at up to 50 yds.
6/23/11: Nevada gives the green light to self-driving cars; Google claims they will be safer than human drivers.

