
Intelligent Information Technology Research Lab, Acadia University, Canada
1 Getting a Machine to (Fly) Learn: Extending Our Reach Beyond Our Grasp. Daniel L. Silver, Acadia University, Wolfville, NS, Canada

2

3 Key Takeaway: A major challenge in artificial intelligence has been how to develop common background knowledge. Machine learning systems are beginning to make headway in this area, taking first steps to capture knowledge that can be used for future learning, reasoning, etc.

4 Outline: Learning – what is it? History of Machine Learning; Framework and Methods; ML Application Areas; Recent and Future Advances; Challenges and Open Questions.

5 What is Learning? Animals and humans: ① learn using new experiences and prior knowledge; ② retain new knowledge from what is learned; ③ repeat, starting at 1. Essential to our survival and thriving.

6 What is Learning? Four hours of learning in two minutes.

7 What is Learning? (A little more formally) Inductive inference/modeling: developing a general model/hypothesis from examples. The objective is to achieve good generalization for making estimates/predictions. It is like fitting a curve to data; this is also considered modeling the data (statistical modeling).

8 What is Learning? Generalization through learning is not possible without an inductive bias = a heuristic beyond the data.

9 Inductive Bias (figure: street map with Ash St, Elm St, Pine St, Oak St and First, Second, Third streets). Inductive bias depends upon: having prior knowledge; selection of the most related knowledge. Human learners use inductive bias.

10 What is Learning? Requires an inductive bias = a heuristic beyond the data. Do you know any inductive biases? How do you choose which to use?

11 Inductive Biases: Universal heuristics – Occam's Razor. Knowledge of intended use – medical diagnosis. Knowledge of the source – teacher. Knowledge of the task domain. Analogy with previously learned tasks. (Tom Mitchell, 1980)

12 What is Machine Learning? The study of how to build computer programs that: improve with experience; generalize from examples; self-program, to some extent.

13 History of Machine Flight: Early era (1783) – gliders, lighter-than-air craft, early attempts at powered heavier-than-air flight. Pioneer era (1900) – Zeppelin, Wright brothers, early military use. Golden age (1918) – post-WW1, Amelia Earhart, Charles Lindbergh. WW2 advances (1938) – jet engine, helicopters. Cold War (1945) – Boeing 707, sound barrier, space flight, moon landing, space shuttle. Present (2001) – stealth, commercial space travel, unmanned drones, autonomous flight.

14 History of Machine Learning (timeline): 1890 Origins – William James, neuronal learning. 1940 Promise – Donald Hebb, math models, the Perceptron (limited value). 1960 Hiatus – Minsky & Papert paper, research wanes. 1970 Exploration – genetic algorithms, version spaces, decision trees. Renaissance – PDP Group, multi-layer perceptrons, new applications. 1990 AI Success – data mining, web mining, user models, new algorithms, Google. Present Advances – Big Data, web analytics, parallel algorithms, cloud computing, deep learning.

15 History of ML. Creation: 1890: William James defined a neuronal process of learning. Promising Technology: 1943: McCulloch and Pitts – earliest mathematical neuron models. 1952: Arthur Samuel – checkers-playing program. 1954: Donald Hebb and IBM research group – earliest ANN simulations. 1958: Frank Rosenblatt – the Perceptron. Disenchantment: 1969: Minsky and Papert – perceptrons have severe limitations. Symbolic and other forms of machine learning: 1975: John Holland – genetic algorithms. 1978: Tom Mitchell – version spaces. 1979: Ross Quinlan – inductive decision trees. 1984: Leo Breiman – Classification and Regression Trees. Re-emergence: 1985: multi-layer nets that use back-propagation. 1986: PDP Research Group – multi-disciplined approach.

16 History of ML. Re-emergence: 1985: multi-layer neural nets that use back-propagation. 1986: resurgence of ML methods and applications. AI Success Story: 1993: data mining of customer data makes ML a big hit. 2000: web mining using ML algorithms commonplace. 2005: Google runs on and depends on machine learning. Present: 2010: organization/summary of web documents. 2012: Big Data, business analytics, parallel algorithms.

17 Of Interest to Several Disciplines: Computer science – theory of computation, new algorithms. Math – advances in statistics, information theory. Psychology – as models for human learning, knowledge acquisition and retention. Biology – how does a nervous system learn? Physics – analogy to physical systems. Philosophy – epistemology, knowledge acquisition. Application domains – new knowledge extracted from data, solutions to unsolved problems.

18 The Mathematics of ML: Probability – Bayesian methods: p(h|d) = p(d|h) p(h) / p(d). Statistics – regression, decision trees, clustering. Fuzzy logic – an extension of Boolean logic. Calculus – remember those wonderful differential equations? Thermodynamics – Boltzmann, Helmholtz. An interesting quote from Johann von Neumann follows.
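As a quick illustration of the Bayes rule on this slide, a minimal sketch; the prior and likelihood numbers are invented, purely for illustration:

```python
def posterior(p_d_given_h, p_h, p_d):
    """Bayes' rule: p(h|d) = p(d|h) * p(h) / p(d)."""
    return p_d_given_h * p_h / p_d

# Hypothetical screening-test numbers
p_h = 0.01                                   # prior p(h): 1% have the condition
p_d_given_h = 0.95                           # likelihood p(d|h): positive test given condition
p_d = p_d_given_h * p_h + 0.05 * (1 - p_h)   # evidence p(d): total probability of a positive test
print(round(posterior(p_d_given_h, p_h, p_d), 3))
```

Even with a 95% accurate test, the posterior here is only about 0.16, because the prior p(h) is so small.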

19 Johann von Neumann's Opinion: “All of this will lead to theories [of computation] which are much less rigidly of an all-or-none nature than past and present formal logic …. There are numerous indications to make us believe that this new system of formal logic will move closer to another discipline which has been little linked in the past with logic. This is thermodynamics, primarily in the form it was received from Boltzmann, and is that part of theoretical physics which comes nearest in some of its aspects to manipulating and measuring information.” John von Neumann, Collected Works, Vol. 5, p

20 Classes of ML Methods. Supervised – develops models that predict the value of one variable from one or more others: artificial neural networks, inductive decision trees, genetic algorithms, k-nearest neighbour, Bayesian networks, support vector machines. Unsupervised – generates groups or clusters of data that share similar features: k-means, self-organizing feature maps. Reinforcement learning – develops models from the results of a final outcome, e.g. the win/loss of a game: TD-learning, Q-learning (related to Markov decision processes). Hybrids – e.g. semi-supervised learning.

21 Focus: Supervised Learning. Function approximation (curve fitting); classification (concept learning, pattern recognition). (figures: f(x) versus x curve; A and B classes in x1–x2 space)

22 Linear and Non-Linear Problems. Linear problems: linear functions, linearly separable classifications. Non-linear problems: non-linear functions, classifications that are not linearly separable. (figures: linear and non-linear examples of f(x) versus x and of A/B classes in x1–x2 space)

23 Supervised Machine Learning Framework (diagram): an instance space X provides training examples and testing examples (x, f(x)); an inductive learning system develops a model or classifier h such that h(x) ~ f(x).

24 Supervised Machine Learning. Problem: we wish to learn to classify two people (A and B) based on their keyboard typing. Approach: acquire lots of typing examples from each person; extract relevant features (representation!), e.g. M = number of mistakes, T = typing time; transform the feature representation as needed; use an algorithm to fit a model to the data (search!); test the model on an independent set of typing examples from each person.

25 Classification: Logistic Regression. Y = f(M, T), with Y ranging between 0 and 1. (figure: scatter of A and B examples over mistakes M and typing speed T, with a logistic decision surface)
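A minimal sketch of fitting such a logistic model Y = f(M, T) by gradient descent; the typing data below is invented for illustration (A = class 0, B = class 1):

```python
import math

# Invented typing data: ((mistakes M, typing time T in minutes), class)
data = [((2, 3.0), 0), ((3, 2.8), 0), ((1, 2.5), 0), ((2, 2.7), 0),
        ((8, 5.5), 1), ((9, 6.0), 1), ((7, 5.2), 1), ((10, 5.8), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit Y = sigmoid(w0 + w1*M + w2*T) by stochastic gradient descent
w = [0.0, 0.0, 0.0]
for _ in range(2000):
    for (m, t), y in data:
        pred = sigmoid(w[0] + w[1] * m + w[2] * t)
        err = y - pred              # gradient of the log-likelihood
        w[0] += 0.05 * err
        w[1] += 0.05 * err * m
        w[2] += 0.05 * err * t

def classify(m, t):
    return 'B' if sigmoid(w[0] + w[1] * m + w[2] * t) > 0.5 else 'A'

print(classify(2, 2.9), classify(9, 5.7))
```

The learned weights define the smooth 0-to-1 decision surface sketched on the slide.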

26 Classification: Artificial Neural Network. (figure: scatter of A and B examples over mistakes M and typing speed T with a neural-network decision boundary; the network takes M and T as inputs and outputs Y)

27 Classification: Inductive Decision Tree. (figure: scatter of A and B examples over mistakes M and typing speed T partitioned by axis-aligned splits; tree with a root node testing M or T and leaves labelled A or B; blood pressure example)

28 Classification: k Nearest Neighbour. Classification is based on a majority vote of the nearest neighbours. (figure: scatter of A and B examples over mistakes M and typing speed T, with a query point and its nearest neighbours)
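A minimal k-nearest-neighbour classifier in the spirit of the slide; the labelled points are invented, and k = 3:

```python
from collections import Counter

# Invented labelled points: (mistakes, typing_time, label)
points = [(2, 3.0, 'A'), (3, 2.8, 'A'), (1, 2.5, 'A'),
          (8, 5.5, 'B'), (9, 6.0, 'B'), (7, 5.2, 'B')]

def knn(query, k=3):
    """Classify by majority vote of the k nearest neighbours
    (squared Euclidean distance)."""
    dist = lambda p: (p[0] - query[0]) ** 2 + (p[1] - query[1]) ** 2
    nearest = sorted(points, key=dist)[:k]
    return Counter(label for _, _, label in nearest).most_common(1)[0][0]

print(knn((2, 2.7)))   # query near the A cluster
print(knn((8, 5.6)))   # query near the B cluster
```

Note there is no training step at all: the examples themselves are the model, which is why kNN is called a lazy learner.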

29 Unsupervised Learning. No target class is provided for each example, just inputs. The objective is to cluster similar examples into categories, or to generate a more abstract representation for an example. Example: the k-means algorithm. K cluster heads are randomly positioned in the input space; examples are iteratively added and assigned to the nearest cluster; each cluster head is moved to the mean of the input values of the examples in its cluster.
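The k-means steps above can be sketched as follows (Lloyd's algorithm). The data points and initial heads are invented, and the heads are passed in explicitly rather than placed randomly, to keep the example deterministic:

```python
def kmeans(points, heads, iters=10):
    """Repeat: assign each point to its nearest cluster head, then move
    each head to the mean of the points assigned to it."""
    for _ in range(iters):
        clusters = [[] for _ in heads]
        for p in points:
            nearest = min(range(len(heads)),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, heads[i])))
            clusters[nearest].append(p)
        heads = [tuple(sum(vals) / len(vals) for vals in zip(*cl)) if cl else heads[i]
                 for i, cl in enumerate(clusters)]
    return heads

# Two well-separated groups of 2-D examples
data = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
print(kmeans(data, [(0.0, 0.0), (10.0, 10.0)]))
```

With well-separated groups the heads converge to the two cluster means after a single pass.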

30 Unsupervised Learning: K-Means Clustering. Two clusters chosen; each cluster head is iteratively repositioned after each new example is categorized and added. (figure: A and B examples over mistakes and typing speed, with two cluster heads)

31 Application Areas. Data mining: Science and medicine – prediction, diagnosis, pattern recognition, forecasting. Manufacturing – process modeling and analysis. Marketing and sales – targeted marketing, segmentation. Finance – portfolio trading, investment support. Banking & insurance – credit and policy approval. Security – bomb, iceberg, and fraud detection. Engineering – dynamic load shedding, pattern recognition.

32 Application Areas. Web mining – information filtering and classification, social media predictive modeling. User modeling – adaptive user interfaces, speech/gesture recognition. Intelligent personal agents – spam filtering, fashion consultant. Robotics – image recognition, adaptive control, autonomous vehicles (space, under-sea). Military/defense – target acquisition and classification, tactical recommendations, cyber attack detection.

33 Recent and Future Advances: Robotics; Neuroprosthetics; Lifelong Machine Learning; Deep Learning Architectures; ML and Growing Computing Power; NELL – Never-Ending Language Learner; Cloud-based Machine Learning.

34 OASIS: Onboard Autonomous Science Investigation System. Since the early 2000s. Goal: to evaluate, and autonomously act upon, science data gathered by spacecraft, including planetary landers and rovers.

35 DARPA Grand Challenge. Stanford's Sebastian Thrun holds a $2M check on top of Stanley, a robotic Volkswagen Touareg R5. In the 212 km autonomous vehicle race in Nevada, Stanley finished in 6h 54m; four other teams also finished. Source: Associated Press – Saturday, Oct 8, 2005.

36 The Competition.

37 Autonomous Underwater Vehicles. The Arctic Explorer AUV was designed and built by International Submarine Engineering Ltd. (ISE) of Port Coquitlam, B.C. It is used to map the sea floor underneath the Arctic ice shelf in support of Canadian land claims under the UN Convention on the Law of the Sea. It also has various military uses, e.g. mine detection and elimination. (Source: ISE, Mae Seto)

38 AUV Use in the Military: reduce dependence on an operator; adapt to an unstructured, dynamic ocean environment and evolving robot states; adaptive decision-making and mission re-planning; image recognition of objects; the robot control policy is learned rather than programmed.

39 Literally Extending Our Reach – Neuroprosthetic Decoders. Dec 2012: Andy Schwartz, Univ. of Pittsburgh; Jan Scheuermann, quadriplegic. Brain-machine interface with 96 electrodes; 13 weeks of training. High-performance neuroprosthetic control by an individual with tetraplegia, The Lancet, v381, Feb.

40 Lifelong Machine Learning (LML) considers methods of retaining and using learned knowledge to improve the effectiveness and efficiency of future learning. We investigate systems that must learn: from impoverished training sets; for diverse domains of tasks; where practice of the same task happens. Applications: intelligent agents, robotics, user modeling, data mining.

41 Supervised Machine Learning Framework (diagram, repeated): an instance space X provides training and testing examples (x, f(x)); an inductive learning system develops a model or classifier h with h(x) ~ f(x). After the model is developed and used, it is thrown away.

42 Lifelong Machine Learning Framework (diagram): an instance space X provides training and testing examples (x, f(x)); an inductive learning system (short-term memory) develops a model or classifier h with h(x) ~ f(x). Domain knowledge (long-term memory) supports retention & consolidation, inductive bias selection, and knowledge transfer.

44 Lifelong Machine Learning – One Implementation (diagram): training and testing examples (x, f(x)) from an instance space X feed a Multiple Task Learning (MTL) network that develops a model or classifier h with h(x) ~ f(x); a consolidated MTL network serves as domain knowledge (long-term memory), with retention & consolidation, inductive bias selection, and knowledge transfer between the two.

45 Multiple Task Learning (MTL). Multiple hypotheses develop in parallel within one back-propagation network [Caruana, Baxter 93-95]. Inputs x1..xn feed a common feature layer (common internal representation), which feeds task-specific representations for outputs f1(x)..fk(x). An inductive bias occurs through shared use of the common internal representation. Knowledge or inductive transfer to the primary task f1(x) depends on the choice of secondary tasks.
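In the spirit of the MTL network described above, a minimal forward-pass sketch: several task outputs share one hidden layer. All weights here are invented and no back-propagation training is shown:

```python
import math

def mtl_forward(x, shared_w, task_ws):
    """One forward pass of an MTL network: a shared hidden layer (the common
    internal representation) feeds one sigmoid output unit per task."""
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sig(sum(w * xi for w, xi in zip(row, x))) for row in shared_w]
    return [sig(sum(w * h for w, h in zip(tw, hidden))) for tw in task_ws]

# Two inputs, three shared hidden units, two tasks (all weights invented)
shared_w = [[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]]
task_ws = [[0.7, -0.5, 0.2],    # output weights for task f1(x)
           [-0.3, 0.6, 0.8]]    # output weights for task f2(x)
print(mtl_forward([1.0, 0.5], shared_w, task_ws))
```

During training, gradients from every task's output flow back into the same `shared_w`, which is what produces the inductive bias the slide describes.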

46 Lifelong Machine Learning via MTL & Task Rehearsal. A short-term learning network learns f1(x), with virtual examples from related prior tasks providing knowledge transfer; virtual examples of f1(x) are then passed to the long-term consolidated domain knowledge network, where rehearsal of virtual examples for prior tasks y2–y6 ensures knowledge retention. Requirements: 1. lots of internal representation; 2. a rich set of virtual training examples; 3. a small learning rate (slow learning); 4. a validation set to prevent growth of high-magnitude weights. [Poirier04]

47 Lifelong Learning with MTL. (figure: mean percent misclassification for four methods, A–D, on the Band domain, the Logic domain, and Coronary Artery Disease)

48 An Environmental Example: stream flow rate prediction [Lisa Gaudette, 2006]; x = weather data, f(x) = flow rate.

49 Context Sensitive MTL (csMTL). We have developed an alternative approach that is meant to overcome limitations of MTL networks. It uses a single-output neural network structure, which eliminates redundant outputs for the same task. Context inputs c associate an example with a task using environmental cues. All weights are shared; the focus shifts from learning separate tasks to learning a fluid domain of task knowledge indexed by the context inputs. It accommodates tasks that have multiple outputs. (figure: network with primary inputs x1..xn, context inputs c1..ck, and one output for all tasks, y' = f'(c, x))

50 Context Sensitive MTL (csMTL) overcomes limitations of standard MTL for long-term consolidation of tasks: eliminates redundant outputs for the same task; facilitates accumulation of knowledge through practice; examples can be associated with tasks directly by the environment; develops a fluid domain of task knowledge indexed by the context inputs; accommodates tasks that have multiple outputs. (figure: primary inputs x, context inputs c, one output for all tasks, y' = f'(c, x)) (Silver, Poirier and Currie, 2008)
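The csMTL input encoding described above can be sketched as follows: a one-hot context vector c identifies the task and is concatenated with the primary inputs x, so a single network output f'(c, x) serves all tasks. A minimal sketch; the dimensions are invented:

```python
def cs_mtl_input(task_index, x, num_tasks):
    """Build a csMTL input vector: a one-hot task context c followed by the
    primary inputs x; one network then computes y' = f'(c, x) for every task."""
    c = [1.0 if i == task_index else 0.0 for i in range(num_tasks)]
    return c + list(x)

# The same primary inputs, presented as task 0 and as task 1 of three tasks
print(cs_mtl_input(0, [0.3, 0.7], 3))
print(cs_mtl_input(1, [0.3, 0.7], 3))
```

Because the task identity lives on the input side rather than in separate output units, the same example can be "re-tasked" simply by changing the context bits.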

51 csMTL Empirical Studies: results from repeated studies.

52 Lifelong Machine Learning with csMTL. Example: learning to learn how to transform images. Requires methods of efficiently & effectively retaining transform model knowledge and using this knowledge to learn new transforms. (Silver and Tu, 2010)

53 csMTL and Tasks with Multiple Outputs. Liangliang Tu (2010), image morphing: inductive transfer between tasks that have multiple outputs; transforms 30x30 grey-scale images using transfer learning.

54 csMTL and Tasks with Multiple Outputs.

55 Lifelong Machine Learning with csMTL: Demo.

56 An LML based on csMTL (Fowler and Silver, 2010). A short-term learning network f'(c, x) works with a long-term Consolidated Domain Knowledge (CDK) network f1(c, x). Representational transfer from the CDK enables rapid learning; functional transfer (virtual examples) enables consolidation. Inputs are the task context c1..ck and standard inputs x1..xn, with one output for all tasks. Addresses the stability-plasticity problem.

57 Deep Learning Architectures. Hinton and Bengio (2007+): learning deep architectures of neural networks. Layered networks of unsupervised auto-encoders efficiently develop hierarchies of features that capture regularities in their respective inputs. Used to develop models for families of tasks.

58 Deep Learning Architectures. Consider the problem of trying to classify these hand-written digits.

59 Deep Learning Architectures (network stack, bottom to top): images of digits 0–9 (28 x 28 pixels) → 500 neurons (low-level features) → 500 neurons (higher-level features) → 2000 top-level artificial neurons. The neural network is trained on 40,000 examples and learns to label/recognize images and to generate images from labels; it is probabilistic in nature. Demo.

60 ML and Computing Power. Moore's Law: expected to accelerate as the power of computers grows with the use of multiple processing cores.

61 ML and Computing Power. IBM's Watson (Jeopardy, Feb 2011): a massively parallel data processing system capable of competing with humans in real-time question-answer problems. 90 IBM Power-7 servers, each with four 8-core processors; 15 TB (220M text pages) of RAM. Tasks are divided into thousands of stand-alone jobs distributed across the cluster (80 teraflops, where 1 teraflop = 1 trillion ops/sec). Uses a variety of AI approaches, including machine learning.

62 ML and Computing Power. Andrew Ng's work on deep learning networks (ICML 2012). Problem: learn to recognize human faces, cats, etc. from unlabeled data. Dataset of 10 million images, each 200x200 pixels. A 9-layered, locally connected neural network (1B connections); parallel algorithm run on 1,000 machines (16,000 cores) for three days. Building High-level Features Using Large Scale Unsupervised Learning, Quoc V. Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg S. Corrado, Jeffrey Dean, and Andrew Y. Ng. ICML 2012: 29th International Conference on Machine Learning, Edinburgh, Scotland, June 2012.

63 ML and Computing Power. Results: a face detector that is 81.7% accurate and robust to translation, scaling, and rotation. Further results: 15.8% accuracy in recognizing 20,000 object categories from ImageNet, a 70% relative improvement over the previous state of the art.

64 Never-Ending Language Learner. Carlson et al. (2010). Each day it: extracts information from the web to populate a growing knowledge base of language semantics; learns to perform this task better than on the previous day. Uses an MTL approach in which a large number of different semantic functions are trained together.

65 Challenges & Open Questions. The stability-plasticity problem: how do we integrate new knowledge with old? No loss of new knowledge; no loss of prior knowledge; efficient methods of storage and recall. ML methods that can retain learned knowledge will be approaches to “common knowledge” representation – a “Big AI” problem.

66 Challenges & Open Questions. Practice makes perfect! An LML system must be capable of learning from examples of tasks over a lifetime. Practice should increase model accuracy and overall domain knowledge. How can this be done? This research is important to AI, psychology, and education.

67 Challenges & Open Questions. Computational curricula: insight into curriculum and training sequences; best practices for rapid, accurate learning; best practices for knowledge consolidation. Of interest to AI and education.

68 Challenges & Open Questions. Scalability is often a difficult but important challenge. Methods must scale with increasing: number of inputs and outputs; number of training examples; number of tasks; complexity of tasks and size of hypothesis representation. Preferably, growth is linear.

69 Cloud-Based ML – Google: https://developers.google.com/prediction/

70 Challenges & Open Questions. Applications in software agents and robots: examples are encountered periodically and intermittently; practice is often necessary; consolidation of new knowledge with old is needed for continual learning; an opportunity to test theories on curricula.

71 Machine Flight vs. Machine Learning
Factor | Machine Flight | Machine Learning
Effectiveness | Travel higher, farther, to places not reachable | Learn more things, accurately; model complex phenomena
Efficiency | Travel faster | Learn faster, at lower cost
Satisfaction | Safe travel, beauty; reach the moon, and beyond | Confidence, elegance; reach new knowledge, solve new problems

72 Thank You!

73 Getting a Machine to (Fly) Learn: Extending Man's Reach Beyond His Grasp. Daniel L. Silver, Acadia University, Wolfville, NS, Canada

74

75 Machine Learning (ML) is the study of how to build systems that can automatically learn and improve with experience, similar to humans. Since the early 1980s there have been significant advances in ML that have affected things such as marketing, banking and stock trading, manufacturing, household appliances, national defense, automobiles, medicine and health care, and most recently the Internet, search engines and mobile devices. ML is poised to extend man's mental reach in the virtual world of the 21st century in the same way that flight extended his physical reach in the 20th century: it provides the means to filter massive amounts of data, recognize complex patterns, and rapidly make difficult decisions. This lecture will present the fundamentals of ML, beginning with human learning and its relationship to statistical modeling, inductive bias and the need to retain learned knowledge. The history of ML research is reviewed, emphasizing its multidisciplinary nature involving computing, mathematics, physics, psychology and neuroscience. The basic framework for machine learning is presented, various ML methods are outlined and demonstrated, and several current and surprising ML applications are discussed. The talk concludes with a look to the future of ML as it takes flight into the next 10 years. (About 50 minutes long, with additional Q/A.)

76 Outline. Machine learning overview: beginning with human learning and its relationship to statistical modeling, inductive bias and the need to retain learned knowledge. History of ML research: reviewed with emphasis on its multidisciplinary nature involving computing, mathematics, physics, psychology and neuroscience. The basic framework for machine learning, with various ML methods outlined and demonstrated. Current and surprising ML applications; application areas. The future of ML as it takes flight into the next 10 years: advances and futures, lifelong machine learning, deep learning architectures, NELL and cloud-based machine learning.

77 Lifelong Machine Learning (LML). It is now appropriate to seriously consider the nature of systems that learn over a lifetime. Motivation/rationale: a body of related work on which to build; the power and low cost of modern computers; the challenges and benefits of such research to the areas of AI, brain sciences, and human learning.

78 “Ah, but a man's reach should exceed his grasp, or what's a heaven for?” – Robert Browning

