Trust & Explainability for Artificial Intelligence

1 Trust & Explainability for Artificial Intelligence
- Lt Col Ashutosh Verma Artificial Intelligence (AI) has regained centre stage in the last few years. Military applications utilising geospatial datasets and deep-learning techniques based on convolutional neural networks (CNNs) have also matured to a deployable state in developed countries. But these models are particularly opaque in their internal working, and it is difficult to understand why a particular suggestion or decision has been made. Advances in this field can only be adopted by the military if the algorithm can be trusted and can provide an explanation for its actions.

2 Scope
Trust deficit with AI applications. What is trust. What if we can trust. Way ahead. Today I am going to discuss, in these sections, how trust and explainability are important for AI applications in the geospatial space, especially those in the military domain. IoT Congress 2017

3 Our fear and mistrust is based on such failures.
Google autonomous car crash. Tesla Autopilot fatal car crash. Risk assessment of the likelihood of committing a crime racially discriminates against subjects – COMPAS. Predictive policing racially profiles areas. To appreciate the need for trust, let us look at some of the first AI-based applications where trust was broken and things went wrong: the Google autonomous car which is said to have caused an accident, the Tesla crashes in Autopilot mode which have had fatal results, and the machine-learning applications for law enforcement which have been found to be racially biased. These are some of the spectacular failures AI has seen, not to mention how funny online translations can be. There have been calls from the public to stop using these applications. Our fears and mistrust towards these applications are based on such failures.

4 Trust in Military Applications
Signatories: Stephen Hawking, Steve Wozniak, Stuart Russell, Nils Nilsson and 20,000+ others. Military applications of AI, in the geospatial space or otherwise, have huge potential, and here we can see the amount of mistrust that already exists, even before such big military AI applications have been born. An open letter was published online in July 2015 about the threat of an arms race in autonomous weapons. The signatories included the biggest names in AI, like Stuart Russell and Nils Nilsson, and eminent personalities like Stephen Hawking, Steve Wozniak and Elon Musk. It can be signed by anyone, and over twenty thousand people have signed it. Well, that does not include me! The press compares such weapons to science-fiction movies like Terminator. Thankfully, scientists, like those sitting here on the dais, are working towards 'autonomy' and not 'artificial self-awareness'. There is therefore a need to instil trust in such applications in the military domain.

5 Necessity of Explanation
[Slide diagram: Observation → Explanation (why, why not, when success, when failure) → Trust → Fast & optimal decision making.] Seeking explanation is a spontaneous and fundamental activity for human understanding. It is important to be able to explain the observed actions of an autonomous agent: why an action was taken, why certain actions were not taken or certain suggestions were made, when the algorithm will succeed and when it will fail. An explanation framework needs to give these answers. These answers can lead us to trust the application, which will help speed up and optimise the decision-making process.
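A minimal sketch of one way such "why" answers can be produced: attribute a black-box score to each input by perturbation. The model, weights and feature names below are purely illustrative stand-ins, not from any real system.

```python
# Minimal perturbation-based explanation for a black-box scorer.
# The "model" here is a hypothetical stand-in; in practice it would be
# an opaque CNN or similar whose internals the analyst cannot inspect.

def black_box_score(features):
    # Hidden weighting the analyst cannot see directly.
    weights = {"size": 0.7, "speed": 0.2, "heat": 0.1}
    return sum(weights[k] * v for k, v in features.items())

def explain_why(features):
    """Attribute the score to each input by zeroing it out in turn."""
    base = black_box_score(features)
    attributions = {}
    for name in features:
        perturbed = dict(features, **{name: 0.0})
        attributions[name] = base - black_box_score(perturbed)
    return attributions

target = {"size": 1.0, "speed": 0.5, "heat": 0.2}
# The input with the largest attribution answers "why this decision".
print(explain_why(target))
```

The same probe answers "why not": re-score with the disputed feature altered and compare.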

6 Trust in AI Algorithms
Trust – Explainability – Auditability – Accuracy – Responsibility – Fairness. The five principles considered core to instilling trust and accountability in an algorithm are: Accuracy, to minimise errors in predictions; Auditability, to be able to review past actions and their reasons; Responsibility, a team with ownership of the application; Fairness, to ensure outputs are free from bias and discrimination; and Explainability, the ability to answer 'why'. Now let us take a closer look at the military domain.
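The auditability principle can be sketched in a few lines: wrap a decision function so that every call, its inputs and its result are recorded for later review. The `classify` function and its threshold are hypothetical examples, not part of any real system.

```python
# Sketch of the "auditability" principle: every decision the function
# takes is appended to a log that can be reviewed after the fact.
import functools
import json
import time

audit_log = []

def audited(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        audit_log.append({
            "fn": fn.__name__,
            "args": json.dumps(args),   # inputs preserved for review
            "result": result,
            "ts": time.time(),
        })
        return result
    return wrapper

@audited
def classify(threat_score):
    # Hypothetical decision rule, for illustration only.
    return "engage" if threat_score > 0.8 else "monitor"

classify(0.9)
classify(0.3)
# Both decisions, with their inputs and timestamps, are now reviewable.
print(len(audit_log))
```

A real deployment would write to tamper-evident storage rather than an in-memory list, but the principle is the same.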

7 Military Decision Making The OODA Loop
[Slide diagram: the OODA loop – Observe (sensor network, target recognition, intelligence) → Orient (analytics, reasoning, knowledge base) → Decide (AI planning, simulation, DSS; person-in-the-loop) → Act (autonomous agents, weapon platforms); person-on-the-loop oversight.] The importance of trust and explanation can be best understood with reference to the OODA cycle, which reflects military decision making, whether it is a decision taken by a fighter pilot in a dogfight or a strategic manoeuvre being undertaken by a theatre commander. OODA stands for Observe – Orient – Decide – Act, which has become a basic tenet of manoeuvre warfare. Any autonomous agent or AI-based system will also have to fit into the OODA loop. It has always been the endeavour of a military commander to disrupt the enemy commander's OODA loop and shorten his own, which can be done by employing AI-based applications and agents at every facet of the loop. Observation can be hastened by advanced image-recognition algorithms that identify targets and natural-language engines that understand intercepted enemy communications. The Orient step can collate the obtained intelligence with the existing knowledge base, databases, previous observations and inputs from human subject-matter experts. A quick decision can then be taken, based on the observation and its assessment, using AI reasoning and planning algorithms that form the decision-support mechanism. Autonomous drones and other autonomous platforms can then hasten the Act step of the loop. In fact, with sufficient intelligence built into an autonomous platform, the entire loop can be performed within the platform without human intervention. Could this gamut of AI applications work in a large network-centric environment, or within a small drone? Only time will tell, but the increase in speed will make it impossible for a person to be in the loop; instead the OODA loop will work autonomously with the person 'on the loop'.
However, the biggest question will be whether military commanders accept the proliferation of such technologies in military decision making. A trustworthy algorithm whose actions can be explained will be accepted easily and quickly by military commanders. It can be seamlessly integrated into decision making, making one's own OODA loop faster than the enemy's. IoT Congress 2017
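The four steps above can be sketched as a simple pipeline with a person "on the loop": the cycle runs autonomously end to end, but a human veto can still halt the Act step. All data, names and rules here are illustrative toys, not a real targeting system.

```python
# Toy sketch of an autonomous OODA cycle with person-on-the-loop oversight.

def observe(sensor_feed):
    # Observe: filter out sensor dropouts from the raw feed.
    return [x for x in sensor_feed if x is not None]

def orient(observations, knowledge_base):
    # Orient: collate observations with the existing knowledge base.
    return [knowledge_base.get(o, "unknown") for o in observations]

def decide(assessments):
    # Decide: decision support keeps only assessments flagged hostile.
    return [a for a in assessments if a == "hostile"]

def act(decisions, human_veto=False):
    # Act: the person on the loop may interrupt, otherwise act autonomously.
    if human_veto:
        return "held"
    return f"engaged {len(decisions)} target(s)" if decisions else "no action"

knowledge_base = {"track_1": "hostile", "track_2": "friendly"}
sensor_feed = ["track_1", None, "track_2"]

# The full loop executes without a person *in* the loop.
result = act(decide(orient(observe(sensor_feed), knowledge_base)))
print(result)
```

Passing `human_veto=True` shows the difference between "on the loop" (can interrupt) and "in the loop" (must approve each step).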

8 China embracing AI: ImageNet Challenge 2016
All category winners were teams from China; 33 of the 82 teams were from China. Interest in explainable AI (XAI) will continue to grow as interest in military applications of AI grows. China has been aggressively investing in AI research over the past several years, as can be seen from some of these news snippets. Baidu, China's equivalent of Google, is also reportedly collaborating with the Chinese military in advancing AI. China might become the first country to truly weaponise AI, with AI-enabled missiles which can choose their own targets and effectively shorten the OODA loop by orders of magnitude. The ImageNet challenge, which has become a benchmark in deep-learning image recognition, is now dominated by Chinese teams.

9 ‘Strength’ of Chinese Academia
National University of Defence Technology. Naval University of Engineering. Armoured Engineering Academy. Harbin Institute of Technology. Chinese Association for Artificial Intelligence. 863 Programme – National High-Tech R&D Programme. Project 985, to promote and develop the Chinese higher education system. China has deliberately and progressively built its strength over a period of time, first strengthening its education system and academia, which in turn provided the intellectual capital needed for the development of high technology such as AI. It achieved this through programmes such as the 863 Programme (the National High-Tech R&D Programme) and Project 985, which promoted and developed the Chinese higher education system. As per the SCImago country ranking, China has been publishing more papers than the USA in the AI category of the Computer Science subject area. Some of the leading universities and institutions working in the field of AI are shown here.

10 USA “Autonomous software working synergistically with the innovation of empowered airmen.” “Authority and responsibility for warfare in the hands of airmen while creating tools that enhance their situation awareness and decision-making, speed effective actions, and bring needed extensions to their capabilities.” In 2015 the USAF released its vision for autonomous systems in the form of Autonomous Horizons, which depicts a path to the future for system autonomy in the USAF. It describes an evolutionary progression that obtains the best benefits of autonomous software working synergistically with the innovation of empowered airmen. They consider the vision to be both obtainable and sustainable: it leaves the authority and responsibility for warfare in the hands of airmen while creating tools that enhance their situation awareness and decision-making, speed effective actions, and bring needed extensions to their capabilities.

11 USA: DARPA
DARPA Grand Challenges: Autonomous Driving. Autonomous Cyber Defence. Autonomous Spectrum Collaboration. Fast Lightweight Autonomy. BOLT – Broad Operational Language Translator. XAI – Explainable AI. DARPA, which has been spearheading innovation and new technology, has been working on AI-based technologies for a long time, the latest addition being XAI – Explainable Artificial Intelligence.

12 Challenges
Complexity of algorithms. Input data quality. Human-introduced bias, e.g. racial profiling. Ref: DARPA-BAA-16-53, Aug 2016. There is tremendous interest in the field of XAI and a lot of work is being done; however, there are difficult challenges too. It is ironic that the problem of explainability in AI can be considered a result of the success of AI: the algorithms which provide the best accuracy are also the ones which intrinsically provide the least explanation. Poor quality of the underlying training data is also responsible for reduced trust in algorithms, and if there are human-introduced biases in the data, they will be reflected in the output of the algorithms too.
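One common way around the accuracy/explainability trade-off is to approximate the opaque model near a single input with a transparent linear one, reading off local feature sensitivities (the idea behind local-surrogate methods). The opaque model below is a hypothetical stand-in used only to demonstrate the technique.

```python
# Sketch of a local linear surrogate for an opaque model: the surrogate's
# slopes explain the model's behaviour around one specific input.

def opaque_model(x):
    # Hypothetical stand-in: accurate but hard to interpret directly
    # because the inputs interact nonlinearly.
    return x[0] * x[1] + x[0] ** 2

def local_linear_surrogate(model, x, eps=1e-4):
    """Finite-difference slopes form an interpretable local explanation."""
    slopes = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        slopes.append((model(bumped) - model(x)) / eps)
    return slopes

point = [2.0, 3.0]
# Around [2, 3] the model behaves like 7*x0 + 2*x1 (approximately),
# even though its global form is nonlinear.
print(local_linear_surrogate(opaque_model, point))
```

The surrogate is only valid near the probed input; a different input would yield different slopes, which is exactly why such explanations must be generated per decision.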

13 Road Ahead
Trust and explainability are necessary ab initio for military applications. XAI research – DARPA-BAA-16-53. Possible approaches: AI reasoning, planning and learning; IoT to improve data gathering and remove human-induced bias. More work is certainly required to incorporate explainability into algorithms, or algorithms need to be designed with explainability intrinsically built into them. This approach may require revisiting reasoning frameworks and incorporating them into learning mechanisms, as in case-based reasoning approaches. In cases where automated data collection is possible, the Internet of Things (IoT) can help: IoT sensors can collect more accurate and detailed data, which can be used to produce more accurate models. Deep neural networks and other AI applications utilising geospatial datasets and providing geo-intelligence are going to proliferate in almost every domain, be it military, medical, law enforcement, entertainment or scientific research. These applications can be best utilised only once users fully trust them. While research is still going on to enhance performance in various fields of AI, it will be prudent to address issues related to trust and explainability ab initio. IoT Congress 2017
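Case-based reasoning, mentioned above as one explainability-friendly learning mechanism, can be sketched very compactly: a suggestion is justified by retrieving the most similar past case, so the explanation is simply "this worked before in a similar situation". The case data below is illustrative, not real.

```python
# Minimal case-based reasoning sketch: the retrieved case doubles as the
# explanation for the suggestion, which is why CBR is naturally explainable.
import math

past_cases = [
    {"features": (0.9, 0.1), "outcome": "success"},
    {"features": (0.2, 0.8), "outcome": "failure"},
]

def nearest_case(query):
    # Retrieve the past case closest to the query (Euclidean distance).
    return min(past_cases, key=lambda c: math.dist(c["features"], query))

case = nearest_case((0.8, 0.2))
# The answer and its justification come from the same retrieved record.
print(case["outcome"])
```

A production system would add case adaptation and retention steps, but retrieval alone already yields a human-checkable rationale.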

14 “Trust is the glue that holds an organisation together & the lubricant that moves it forward.”
– Gen Colin Powell Thank You "Trust is the glue that holds an organisation together and the lubricant that moves it forward," said Gen Colin Powell. With AI set to become deeply embedded in our organisations, we will have to learn to make it trustworthy and explainable.

