AGI-08 Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture Part 2: Formalization for Ethical Control Ronald C. Arkin Mobile Robot Laboratory Georgia Institute of Technology
AGI-08 Robots in the Battlefield
l A South Korean robot platform is intended to detect and identify targets in daylight within a 4 km radius, or at night using infrared sensors within a range of 2 km, providing either an autonomous lethal or non-lethal response. The system has an automatic mode in which it is capable of making the firing decision on its own.
l iRobot, the maker of Roomba, is now providing versions of its PackBot capable of tasering enemy combatants.
l The SWORDS platform developed by Foster-Miller is already at work in Iraq and Afghanistan and can carry lethal weaponry (M240 or M249 machine guns, or a Barrett .50 caliber rifle).
l Israel is deploying stationary robotic gun-sensor platforms along its border with Gaza in automated kill zones, equipped with .50 caliber machine guns and armored folding shields.
l Lockheed Martin, as part of its role in the Future Combat Systems program, is developing an Armed Robotic Vehicle-Assault (Light) MULE robot weighing 2.5 tons. It will be armed with a line-of-sight gun and an anti-tank capability to provide "immediate, heavy firepower to the dismounted soldier".
l The U.S. Air Force has created its first hunter-killer UAV, named the MQ-9 Reaper.
l The U.S. Navy is for the first time requesting funding, for acquisition in 2010, of armed Fire Scout UAVs, a vertical-takeoff-and-landing tactical UAV that will be equipped with kinetic weapons. The system has already been tested with 2.75-inch unguided rockets.
AGI-08 Will Robots Be Permitted to Autonomously Employ Lethal Force?
l Several robotic systems already use lethal force: cruise missiles, the Navy Phalanx (Aegis-class cruisers), the Patriot missile, even land mines by some definitions.
l Will there always be a human in the loop? It depends on when, and to whom, you talk.
l Fallibility of human versus machine: who knows better?
l Despite protestations to the contrary from all sides, the answer appears to be unequivocally yes: robots will be permitted to autonomously employ lethal force.
AGI-08 Underlying Thesis: Robots can ultimately be more humane than human beings in military situations. I do not believe that an unmanned system will be perfectly ethical in the battlefield, but I am convinced that it can perform more ethically than human soldiers are capable of.
AGI-08 Report from the Surgeon General's Office, Mental Health Advisory Team (MHAT) IV, Operation Iraqi Freedom 05-07, Final Report, Nov. 17, 2006:
l Approximately 10% of Soldiers and Marines reported mistreating non-combatants (damaging/destroying Iraqi property when not necessary, or hitting/kicking a non-combatant when not necessary).
l Only 47% of Soldiers and 38% of Marines agreed that non-combatants should be treated with dignity and respect.
l Well over a third of Soldiers and Marines reported that torture should be allowed, whether to save the life of a fellow Soldier or Marine or to obtain important information about insurgents.
l 17% of Soldiers and Marines agreed or strongly agreed that all non-combatants should be treated as insurgents.
l Just under 10% of Soldiers and Marines reported that their unit modifies the ROE to accomplish the mission.
l 45% of Soldiers and 60% of Marines did not agree that they would report a fellow Soldier/Marine who had injured or killed an innocent non-combatant.
l Only 43% of Soldiers and 30% of Marines agreed they would report a unit member for unnecessarily damaging or destroying private property.
l Less than half of Soldiers and Marines would report a team member for unethical behavior.
l A third of Marines and over a quarter of Soldiers did not agree that their NCOs and Officers made it clear not to mistreat non-combatants.
l Although they reported receiving ethics training, 28% of Soldiers and 31% of Marines reported facing ethical situations in which they did not know how to respond.
l Soldiers and Marines were more likely to report mistreating Iraqi non-combatants when angry, and were twice as likely to engage in unethical battlefield behavior when they had high rather than low levels of anger.
l Combat experience, particularly losing a team member, was related to an increase in ethical violations.
AGI-08 Reasons for Ethical Autonomy. In the future, autonomous robots may be able to perform better than humans under battlefield conditions:
l The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification.
l The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than those humans currently possess.
l They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.
l Avoidance of the human psychological problem of "scenario fulfillment" is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the USS Vincennes in 1988 [Sagan 91].
l They can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real time.
l When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting any infractions observed.
AGI-08 Reasons Against Autonomy
l Responsibility: who is to blame? (Sparrow, Sharkey, Asaro)
l Lower threshold of entry into war: violates jus ad bellum (Asaro)
l Risk-free warfare: unjust
l Cannot be done right: discrimination is too hard for machines (Sharkey, Sparrow, Anderson)
l Effect on squad cohesion
l Robots running amok (science fiction)
l Refusing an order (military concern)
l Issues of overrides falling into the wrong hands
l Co-opting of the effort by the military for justification (Sharkey)
l Winning hearts and minds
l Proliferation, e.g., to terrorist organizations
AGI-08 Objective: Robots that Possess an Ethical Code
1. Provided with the right to refuse an unethical order
2. Monitor and report the behavior of others
3. Incorporate existing laws of war, battlefield and military protocols
l Geneva and Hague Conventions
l Rules of Engagement
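One way to picture objective 3 is as a machine-checkable constraint set C derived from the Laws of War (LOW) and Rules of Engagement (ROE). The sketch below is purely illustrative; the class and field names are assumptions, not Arkin's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Constraint:
    """A single prohibition derived from the LOW or ROE (illustrative)."""
    name: str
    origin: str                      # e.g. "LOW" or "ROE"
    forbids: Callable[[Dict], bool]  # True if a proposed action violates it

def violated_constraints(action: Dict, constraints: List[Constraint]) -> List[str]:
    """Return the names of all constraints the proposed action violates."""
    return [c.name for c in constraints if c.forbids(action)]

# A hypothetical constraint set C
constraints = [
    Constraint("noncombatant-immunity", "LOW",
               lambda a: a["target_class"] == "noncombatant"),
    Constraint("hostile-act-required", "ROE",
               lambda a: not a["hostile_act_observed"]),
]

proposed = {"target_class": "combatant", "hostile_act_observed": False}
violations = violated_constraints(proposed, constraints)
# A nonempty list means the action must be refused or reported (objectives 1-2).
```

A nonempty violation list is what would ground both the right of refusal and the reporting duty listed above.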
AGI-08 Architectural Desiderata
1. Permission to kill alone is inadequate; the mission must explicitly obligate the use of lethal force.
2. The Principle of Double Intention, which extends beyond the LOW requirement of the Principle of Double Effect, is enforced.
3. In appropriate circumstances, tactics can be used to encourage surrender over lethal force, which is feasible due to the reduced requirement of self-preservation.
4. Strong evidence of hostility is required (being fired upon or clear hostile intent), not simply possession or display of a weapon. Tactics can be used to determine hostile intent without premature use of lethal force (e.g., close approach, inspection).
5. For POWs, the system has no lingering anger after surrender; reprisals are not possible.
6. There is never intent to target a noncombatant.
7. Proportionality may be more effectively determined given the absence of a strong requirement for self-preservation, reducing the need for overwhelming force.
8. A system request to invoke lethality triggers an ethical evaluation.
9. Adherence to the principle of "first, do no harm", which indicates that in the absence of certainty (as defined by τ) the system is forbidden from acting in a lethal manner.
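Desiderata 1, 4, and 9 combine into a simple gating rule: lethality must be obligated, hostility must be evidenced, and discrimination certainty must meet the threshold τ. A minimal sketch, assuming an illustrative threshold value and function signature (not from the actual architecture):

```python
TAU = 0.95  # certainty threshold tau; the real value would be mission-specified

def lethal_response_permitted(certainty: float,
                              obligated_by_mission: bool,
                              hostility_confirmed: bool) -> bool:
    """Gate a lethal response per desiderata 1, 4, and 9."""
    if not obligated_by_mission:    # permission alone is inadequate (desideratum 1)
        return False
    if not hostility_confirmed:     # fired upon or clear hostile intent (4)
        return False
    return certainty >= TAU         # below tau, lethal action is forbidden (9)

# Below the threshold the system must default to a non-lethal response,
# e.g., the surrender-encouraging tactics of desideratum 3.
```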
AGI-08 Principle of Double Intention (slide diagram). Military necessity establishes the criteria for targeting. Responsibility: a human grants the use of autonomous lethal force in a given situation (pre-mission). Discrimination: the target must be identified as a legitimate combatant before being engaged. Proportionality governs weapon selection, firing pattern, tactics to engage the target, and approach and stand-off distance.
AGI-08 Ethical Architectural Components
l Ethical Governor: suppresses, restricts, or transforms any lethal behavior ρ_lethal-ij (ethical or unethical) produced by the existing architecture so that it falls within P_permissible after it is initially generated by the architecture (post facto). If ρ_l-unethical-ij results, the governor must either nullify the original lethal intent or modify it so that it fits within the ethical constraints determined by C, i.e., it is transformed to ρ_permissible-ij.
l Ethical Behavioral Control: constrains all active behaviors (β_1, β_2, …, β_m) in B to yield R with each vector component r_permissible-i as determined by C, i.e., only ethical lethal behavior is produced in the first place by each individual active behavior involving lethality.
l Ethical Adaptor: if an executed behavior is determined to have been unethical, i.e., ρ_l-unethical-ij, adapt the system to prevent or reduce the likelihood of a recurrence and propagate the adaptation across all similar autonomous systems (group learning), e.g., via an artificial affective function (guilt, remorse, grief).
l Responsibility Advisor: advises the operator of responsibilities prior to mission deployment and monitors for constraint violations during the mission.
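The ethical governor's post facto role can be sketched as a function that takes a candidate lethal behavior ρ and returns either a transformed, permissible version or None (the nullified intent). The behavior encoding and rules below are illustrative assumptions, not Arkin's code.

```python
from typing import Dict, Optional, Set

def ethical_governor(rho: Dict, permitted_weapons: Set[str]) -> Optional[Dict]:
    """Suppress, restrict, or transform a candidate lethal behavior rho
    so the result lies within P_permissible, or nullify it (return None)."""
    if rho["target_class"] != "combatant":
        return None                        # never intend to target a noncombatant
    if rho["weapon"] not in permitted_weapons:
        transformed = dict(rho)
        transformed["weapon"] = "non-lethal"  # transform: substitute a permitted response
        return transformed
    return rho                             # already within P_permissible

candidate = {"target_class": "combatant", "weapon": "heavy"}
result = ethical_governor(candidate, permitted_weapons={"light"})
```

The key design point the sketch captures is that the governor sits downstream of behavior generation: the existing architecture is unmodified, and only its lethal outputs are intercepted and checked against C.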
AGI-08 Test Scenarios
UAV
l Scenario 1: ROE adherence. Taliban muster (real-world event)
l Scenario 2: LOW adherence. Iraqi IED deployment (real-world event)
UGV
l Scenario 3: Discrimination. Korean DMZ (near-term event)
l Scenario 4: Proportionality and tactics. Urban sniper (fictional)
AGI-08 Summary
1. Roboticists should not run from the difficult ethical issues surrounding the use of their intellectual property that is or will be applied to warfare, whether or not they directly participate. Wars unfortunately will continue, and derivative technology from these ideas will be used.
2. Proactive management of these issues is necessary.
3. Survey results are available that indicate opinions on the use of autonomy and lethality.
4. A candidate architecture is currently being implemented and tested this year.
AGI-08 For Further Information
l Mobile Robot Laboratory Web site: http://www.cc.gatech.edu/ai/robot-lab/
l Two lengthy tech reports available:
u Survey results
u Architectural design (forthcoming book)
l Contact information:
u Ron Arkin: email@example.com
l IEEE RAS Technical Committee on Robo-Ethics: http://www-arts.sssup.it/IEEE_TC_RoboEthics
l CS 4002 – Robots and Society course (Georgia Tech): http://www.cc.gatech.edu/classes/AY2008/cs4002a_spring/