
A Computational Introduction to the Brain-Mind. Juyang (John) Weng, Michigan State University, East Lansing, MI. 2011 INNS IESNN.



Presentation transcript:

1 2011 INNS IESNN, Michigan State University. A Computational Introduction to the Brain-Mind. Juyang (John) Weng, Michigan State University, East Lansing, MI 48824 USA, weng@cse.msu.edu

2 Human Physical and Mental Development
- Studies on the adult brain
- Studies on how the brain develops

3 Machine Mental Development

4 Totipotency
- Stem cells and somatic cells
- Genomic equivalence
- All cells are totipotent: each cell's genome is sufficient to guide development from a single cell to the entire adult body
- Consequence: the developmental program is cell-centered

5 Genomic Equivalence
- Each somatic cell carries the complete genome in its nucleus
- Evidence: cloning (e.g., the sheep Dolly)
- Consequences:
  - The genome is cell-centered, directing each individual cell to develop within that cell's environment
  - No genome is dedicated to more than one cell
  - Cell learning is "in place": a neuron has no extra-cellular learner; learning must be fully accomplished by each cell itself while it interacts with the cell's environment

6 How to Measure Problems in AI
- Time and space complexity?
- High or low "level"?
- Tasks that look intelligent when a machine does them?
- Rational or irrational?
- Handling uncertainty?
- ...

7 Task Muddiness
- Independent of problem domain
- Independent of technology level
- Independent of the performer: machine or animal
- Can be quantified
- Helps us understand why AI is difficult
- Helps us see the essence of intelligence
- Can be used to evaluate intelligent machines
- Helps us appreciate human intelligence

8 Task Muddiness
- Agent independent
- Categories only
- Each category can be extended
- Categories adopted to model task muddiness:
  - Environment
  - Input
  - Output
  - Internal state
  - Goal

9 Environmental Muddiness

10 Task Executor
- Human agent: the human is the sole executor
- Machine agent: dual task executors
  - A task is given to a human
  - The human programs a machine agent
  - The agent executes the task

11 A Partial List of Input Muddiness

12 A Partial List of Other Muddiness

13 2-D Muddiness Frame
(Chart: tasks such as language translation, computer chess, visual recognition, and sonar-based navigation placed along two axes, size of input and rawness of input.)

14 Composite Muddiness: m = m_1 m_2 m_3 ... m_n
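The multiplicative form above can be made concrete with a minimal sketch. The per-dimension ratings below are hypothetical; the slide defines only the product form, whose consequence is that a single very muddy dimension multiplies the whole composite rather than merely adding to it.

```python
from math import prod

def composite_muddiness(factors):
    """Composite muddiness m = m_1 * m_2 * ... * m_n.

    Multiplicative, so one very muddy dimension dominates the
    composite, unlike an additive combination.
    """
    return prod(factors)

# Hypothetical per-dimension muddiness ratings for two tasks:
chess = composite_muddiness([1, 2, 2])       # clean symbolic task
vision = composite_muddiness([10, 10, 50])   # raw, high-dimensional task
print(chess, vision)  # 4 5000
```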

15 Autonomous Mental Development (AMD)

16 Traditional Manual Development
A = H(E_c, T), where A is the agent, H the human, E_c the ecological condition, and T the task.

17 New Autonomous Development
A = H(E_c): autonomous inside the skull, where A is the agent, H the human, and E_c the ecological condition. The task T is no longer an argument of H.

18 Mode of Development: AA-Learning
AA-learning: automated, animal-like learning.
(Diagram: a closed brain interacting with the world through unbiased sensors, biased sensors, and effectors.)

19 Existing Machine Learning Types
- Supervised learning: class labels (or actions) are given in training
- Unsupervised learning: class labels (or actions) are not given in training
- Reinforcement learning: class labels (or actions) are not given in training, but a reinforcement signal (score) is given

20 New Classification for Machine Learning
- Need to consider whether the state can be imposed after the task is given
- 3-tuple (s, e, b): symbolic internal representation, effector, biased sensor
- State s: whether the state is imposable after the task is given
- Biased sensor b: whether a biased sensor is used
- Effector e: whether the effector is imposed

21 Eight Types of Machine Learning
Learning types 0-7 are based on the 3-tuple (s, e, b): symbolic internal representation (s = 1), effector imposed (e = 1), biased sensors used (b = 1).
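The eight types can be enumerated mechanically from the three binary flags. A minimal sketch; the particular bit order (s as the high bit, b as the low bit) is an assumption for illustration, since the slide does not specify how the type numbers are assigned:

```python
def learning_type(s, e, b):
    """Map the binary flags (s, e, b) to a type number 0-7.

    s: symbolic internal representation imposed (s = 1)
    e: effector imposed (e = 1)
    b: biased sensors used (b = 1)
    Bit order (s high, b low) is an illustrative assumption.
    """
    return 4 * s + 2 * e + b

# Enumerate all eight learning types:
for t in range(8):
    s, e, b = (t >> 2) & 1, (t >> 1) & 1, t & 1
    print(f"type {t}: s={s}, e={e}, b={b}")
```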

22 The Developmental Approach
- Enable a machine to perform autonomous mental development (AMD)
- Impractical to faithfully duplicate biological AMD
- Hardware: embodiment (a robot)
- Software: a developmental program
  - Task nonspecific
  - AA-learning mode, from the "birth" time through the "life" span

23 Comparison of Approaches

24 Developmental Program vs. Traditional Learning
[1] For tasks unknown at programming time.

25 Motives of Research on Development
- Developmental mechanisms are easier to program: lower level, more systematic, task-independent, clearly understandable
- Relieve humans from intractable programming tasks: vision, speech, language, complex behaviors, consciousness
- User-friendly machines and robots: humans issue high-level commands to machines
- Highly adaptive manufacturing systems (e.g., self-trainable, reconfigurable machining systems)
- Help to understand human intelligence

26 Task Nonspecificity
- That a program is not task specific means:
  - Open to muddy environments
  - Tasks are unknown at programming time
  - "The brain" is closed after birth
  - Learns an open number of muddy tasks after birth
- Avoid trivial cases:
  - A thermostat
  - A robot that does task A when the temperature is high and task B when it is low
  - A robot that does simple reinforcement learning

27 Eight Requirements for Practical AMD
- Eight necessary operational requirements:
  1. Environmental openness: muddy environments
  2. High-dimensional sensing
  3. Completeness of internal representation for each age group
  4. Online
  5. Real-time speed
  6. Incremental: an update for each fraction of a second (e.g., 10-30 Hz)
  7. Perform while learning
  8. Scale up to large memory
- Existing work (other than SAIL) aimed at some, but not all, of these
- SAIL addresses all eight requirements together

28 Definition of AA-Learning
A machine M conducts AA-learning if its operation mode is as follows: for t = t_0, t_1, t_2, ..., the brain program f recursively updates the brain B, the sensory input x, and the effector input/output z.
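The recursive update above can be sketched as a loop. The function names (f, sense, act) and the toy counting brain are placeholders for illustration, not part of the slide's formal definition:

```python
def aa_learning(f, brain, sense, act, steps):
    """Sketch of the AA-learning operation mode.

    For t = t0, t1, t2, ... the brain program f recursively updates
    the brain B and the effector pattern z from the sensory input x;
    only x and z cross the closed "skull" boundary.
    """
    x = sense()                      # initial sensory input from the world
    z = None                         # effector pattern (may be supervised)
    for _ in range(steps):
        brain, z = f(brain, x, z)    # recursive brain/effector update
        act(z)                       # effector acts on the world
        x = sense()                  # world returns the next sensory input
    return brain

# Toy placeholder brain: counts the time steps it has experienced.
final = aa_learning(lambda B, x, z: (B + 1, x),
                    brain=0, sense=lambda: 0, act=lambda z: None, steps=5)
print(final)  # 5
```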

29 The Central Nervous System
- The forebrain
- The midbrain and hindbrain
- The spinal cord
(Kandel, Schwartz & Jessell 2000)

30 Brodmann Areas (1909)
(Kandel, Schwartz & Jessell 2000)

31 Sensory and Motor Pathways
(Adapted from Kandel, Schwartz & Jessell 2000.)
My hypothesis: the brain has complex networks that emerge largely shaped by signal statistics (Weng IJCNN 2010).

32 Multimodal Integration

33 Brain's Vision System (Weng IJCNN 2010)
The brain has only two exposed ends with which to interact with the environment.

34 Triple Loops (Weng IJCNN 2010)

35 Solving the Feature Binding Problem (Weng IJCNN 2010)

36 Area as a Building Block (Weng IJCNN 2010)

37 Neurons as Feature Detectors: The Lobe Component Model
- Biologically motivated:
  - Hebbian learning
  - Lateral inhibition
- Partition the input space into c regions: X = R_1 ∪ R_2 ∪ ... ∪ R_c
- Lobe component i: the principal component of region R_i
(Weng et al. WCCI 2006)
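A minimal numerical sketch of the idea: lateral inhibition is approximated here by hard winner-take-all, so only the best-matching neuron fires and updates Hebbian-style toward the input, and the neurons implicitly partition X into c regions. The simple 1/n learning rate is an assumption for clarity; the actual CCI LCA algorithm uses an amnesic average instead.

```python
import numpy as np

def lca_step(vectors, counts, x):
    """One winner-take-all Hebbian step (sketch of the lobe-component idea).

    vectors: (c, d) array of synaptic weight vectors, one per region R_i.
    counts:  per-neuron win counts (sets the 1/n learning rate here).
    Only the winner updates, so the c neurons partition the input space.
    """
    responses = vectors @ x                       # inner-product responses
    w = int(np.argmax(responses))                 # winner after lateral inhibition
    counts[w] += 1
    lr = 1.0 / counts[w]
    vectors[w] = (1 - lr) * vectors[w] + lr * x   # Hebbian pull toward x
    return w

vectors = np.array([[1.0, 0.0], [0.0, 1.0]])
counts = [0, 0]
winner = lca_step(vectors, counts, np.array([0.9, 0.1]))
print(winner)  # 0: the first neuron claims this region of X
```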

38 Different Normalizations

39 Dual Optimality of CCI LCA
- Spatial optimality leads to the best target: given the number of neurons (a limited resource), the target of the synaptic weight vectors minimizes the representation error based on the observation x
- Temporal optimality leads to the best runner toward the target: given limited experience up to time t, find the best direction and step size for each t based on the observation u = r x
(Weng & Luciw, TAMD, vol. 1, no. 1, 2009)
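The temporally optimal recursion is commonly written with amnesic-average weights. A sketch under stated assumptions: the three-section amnesic function and its constants (t1 = 20, t2 = 200, c = 2, r = 10000) are typical published values for this family of algorithms, not taken from this slide, so treat the exact form as an assumption.

```python
def amnesic_mu(n, t1=20, t2=200, c=2.0, r=10000.0):
    """Three-section amnesic parameter mu(n) (assumed typical constants)."""
    if n <= t1:
        return 0.0
    if n <= t2:
        return c * (n - t1) / (t2 - t1)
    return c + (n - t2) / r

def cci_update(v, n, u):
    """Update a lobe component v after its n-th win, from observation u = r * x.

    The weight w2 stays above 1/n for large n (the amnesic effect), so
    recent observations keep influence instead of being averaged away.
    """
    w2 = (1.0 + amnesic_mu(n)) / n
    w1 = 1.0 - w2
    return [w1 * vi + w2 * ui for vi, ui in zip(v, u)]

print(cci_update([0.0, 0.0], 1, [3.0, 4.0]))  # first win: v becomes u exactly
```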

40 CCI LCA Algorithm (1)

41 CCI LCA Algorithm (2)

42 Plasticity Schedule
(Plot of the plasticity schedule as a function of t, with parameters t_1, t_2, and r = 10000.)

43 Natural Images

44 IC from Natural Images

45 Temporal Architectures

46 Based on FA Ideas

47 From FA to ED Network
- FA: s_n = f(s_l, a_m), where s is a state and a an input symbol
- ED network:
  - The internal area learns: y_i = f_y(s_l, a_m)
  - The motor area learns: s_n = f_z(y_i)
  - s: a numeric pattern of z, a sample of the Z space
  - a: a numeric pattern of x, a sample of the X space
  - y: a numeric pattern of y, a sample of the Y space
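A toy sketch of this mapping: the internal area allocates one Y unit per observed (state, input) context, and the motor area maps each Y unit to the FA's next state. The table-based encoding and the parity FA below are illustrative assumptions, standing in for the network's actual numeric patterns in the X, Y, and Z spaces.

```python
def train_ed(transitions):
    """Learn an FA s_n = f(s_l, a_m) as two table-based areas (sketch).

    transitions: dict mapping (state, symbol) -> next state.
    y_units plays the internal area's role, y_i = f_y(s_l, a_m);
    z_map plays the motor area's role, s_n = f_z(y_i).
    """
    y_units, z_map = {}, {}
    for (s, a), s_next in transitions.items():
        i = y_units.setdefault((s, a), len(y_units))  # allocate internal unit y_i
        z_map[i] = s_next                             # motor area output for y_i
    return y_units, z_map

def run_ed(y_units, z_map, state, symbols):
    """Run the learned two-area network on an input symbol sequence."""
    for a in symbols:
        state = z_map[y_units[(state, a)]]
    return state

# Toy FA: parity of 1s over the alphabet {0, 1}.
fa = {("even", 0): "even", ("even", 1): "odd",
      ("odd", 0): "odd", ("odd", 1): "even"}
y_units, z_map = train_ed(fa)
print(run_ed(y_units, z_map, "even", [1, 0, 1, 1]))  # odd
```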

48 Training and Tests (Luciw & Weng IJCNN 2010)

49 Performance

50 Three Types of Information Flow
- Different directions for different intents
- Mixed modes are possible
- There are no "if-then-else" switches

51 For Any FA There Is an ED Network
- ED: epigenetic developer
- FA: finite automaton
- Relation: an ED network can learn any FA
- Marvin Minsky at MIT criticized ANNs
(Weng IJCNN 2010)

52 Almost Perfect Disjoint Test Using Temporal Context
(Luciw, Weng & Zeng ICDL 2008)

53 More Views, Better Confidence
Externally sensed → internally generated context

54 For Any FA There Is an ED Network
- ED: epigenetic developer
- FA: finite automaton
- Relation: an ED network can learn any FA
- Marvin Minsky at MIT criticized ANNs
(Weng IJCNN 2010)

55 From FA to ED Network
- FA: s_n = f(s_l, a_m), where s is a state and a an input symbol
- ED network:
  - The internal area learns: y_i = f_y(s_l, a_m)
  - The motor area learns: s_n = f_z(y_i)
  - s: a numeric pattern of z, a sample of the Z space
  - a: a numeric pattern of x, a sample of the X space
  - y: a numeric pattern of y, a sample of the Y space

56 Complex Text Processing
- New sentence problem
  - Recognize new sentences from synonyms
- Word sense disambiguation problem
  - Temporal context
- Part-of-speech tagging problem
  - Label words according to part of speech
- Chunking problem
  - Group sequences of words and classify them by syntactic labels
(Weng, Zhang, Chi & Xue ICDL 2009)

57 Recent Events on AMD
- ICDL series: http://cogsci.ucsd.edu/~triesch/icdl/
  - Workshop on Development and Learning (WDL) 2000: MSU, MI, USA
  - 2nd International Conf. on Development and Learning (ICDL'02): MIT, MA, USA
  - 3rd ICDL (2004): San Diego, CA, USA
  - 4th ICDL (2005): Osaka, Japan
  - 5th ICDL (2006): Bloomington, IN, USA
  - 6th ICDL (2007): London, UK
  - 7th ICDL (2008): Monterey, CA, USA
  - 8th ICDL (2009): Shanghai, China
  - 9th ICDL (2010): Ann Arbor, MI, USA
  - 10th ICDL (2011): Frankfurt, Germany
- EpiRob workshop series, 2001-2010
- AMD Technical Committee of the IEEE Computational Intelligence Society: http://www.ieee-cis.org/AMD/
- AMD Newsletters: http://www.cse.msu.edu/amdtc/amdnl/
- IEEE Transactions on Autonomous Mental Development: http://www.ieee-cis.org/pubs/tamd/

58 Now and Future
- Now (not many people agree):
  - Humans are starting to know roughly how the brain-mind works
- Future (not too far):
  - Systematic breakthroughs in artificial intelligence along all fronts: vision, speech, natural language, robotics, creative intelligence
  - A new industry:
    - A new type of software industry
    - Cloud computing for brain-scale applications
    - Service robots and smart toys entering homes
    - Robots widely used in public environments

