
Slide 1: Artificial Intelligence and Software that Learns and Evolves. DIG 3563, Fall 2013. Dr. J. Michael Moshell, University of Central Florida. Adapted from a special presentation for Ajou University, Autumn 2013.

Slide 2: The Plan of the Lecture
0. What is a problem? What is intelligence?
1. The classical approach: logic and deduction
2. The knowledge-based approach: large databases
3. Cognitive science: models of human reasoning
4. Evolutionary computing

Slide 3: 0: What is a Problem?
"Something that is difficult to deal with." (Dictionary definition)

Slide 4: For a small child, this is a problem: Anna had $2.00. She spent $0.75 for candy. How much money does Anna have now?

Slide 5: For the President of the United States, this is a problem: Can we change the laws so that everyone has a job, and the economy grows in a safe, steady fashion?

Slide 6: Classifying Problems
Well-formulated problems: clear goals, a limited action space, clear rules.
Other problems: mixed goals, an infinite action space, changing rules.

Slide 7: Classifying Problems
Well-formulated problems divide further into easy, tractable, and intractable problems.

Slide 8: Tractable: definition
Easily handled or worked. Example: Wood is a tractable material for making furniture.
OPPOSITE: intractable. Titanium is an intractable material for making furniture.

Slide 9: The Traveling Salesman Problem
A man must visit 50 cities. He must visit each city ONE TIME. Find the shortest path for his travel.

Slide 10: More generally: a man must visit n cities, each ONE TIME. Find the shortest path for his travel. How long to compute?

Slide 11: How long to compute? time = k · c^n (for some constants k and c). As n gets large, the time gets VERY BIG VERY FAST.

Slide 12: For k = 1 microsecond and c = 2, 50 cities takes about 313,000 hours, or 35 years!
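The 313,000-hours figure can be checked directly. A minimal sketch (the function name is our own, not from the lecture):

```python
# How long does a search costing time = k * c**n microseconds take?
# Using the lecture's example values: k = 1 microsecond, c = 2, n = 50.
def search_time_hours(n, k_microseconds=1.0, c=2.0):
    """Hours needed when exploring c**n possibilities at k microseconds each."""
    microseconds = k_microseconds * (c ** n)
    return microseconds / 1_000_000 / 3600  # microseconds -> seconds -> hours

hours = search_time_hours(50)
years = hours / (24 * 365)
# hours is about 313,000 and years about 35, matching the slide.
```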

Slide 13: Classifying Problems, as of 1975
Well-formulated problems: clear goals, a limited action space, clear rules; divided into easy, tractable, and intractable problems.
Other problems: mixed goals, an infinite action space, changing rules.

Slide 14: IBM's Deep Blue Chess-Playing Computer
In 1997, IBM's Deep Blue computer and its programming team defeated Garry Kasparov, the world chess champion. It did not defeat the exponential time cost of chess. It simply made k and c small enough, and explored more futures than the human could.

Slide 15: Classifying Problems, as of 1990
Well-formulated problems: clear goals, a limited action space, clear rules; divided into easy, tractable, and intractable problems.
Other problems: mixed goals, an infinite action space, changing rules.

Slide 16: Decision Trees and Exponential Time Cost
Many problems are analyzed by building a decision tree and seeking a path to a winning node. Here, n = 9 (nine options).

Slide 17: If each decision leads to a growing tree of other decisions, the time required to explore all the branches is time = k · c^n, and that is too long for anything but very small n.

Slide 18: Heuristic: a plan to choose the options that are 'most likely to succeed'. Eliminate those branches that your heuristic function tells you are not likely to succeed. Then expand the promising ones.

Slide 19: A simple heuristic from chess: do not exchange pieces if you lose more pawn-units than your opponent loses.
Pawn = 1 unit
Knight, Bishop = 3 pawns
Rook = 5 pawns
Queen = 9 pawns

Slide 20: Example: do not exchange your queen (9 pawns) for two knights (6 pawns).
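The pawn-unit rule is simple enough to state as code. A sketch with our own function and table names; the piece values are the slide's:

```python
# The pawn-unit exchange heuristic: accept an exchange only if you lose
# no more material (in pawn units) than your opponent does.
PAWN_UNITS = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def good_exchange(pieces_lost, pieces_won):
    """True if the exchange gains at least as many pawn-units as it costs."""
    lost = sum(PAWN_UNITS[p] for p in pieces_lost)
    won = sum(PAWN_UNITS[p] for p in pieces_won)
    return lost <= won

good_exchange(["queen"], ["knight", "knight"])  # False: 9 lost vs. 6 won
good_exchange(["knight"], ["rook"])             # True: 3 lost vs. 5 won
```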

Slide 21: Intelligence = Problem-Solving Ability?
Most people agree that an intelligent agent must be able to solve some problems (not all problems). However, many people feel that if you have a well-formed problem, the hard work has already been done. The BIG challenge is transforming a real-world problem into a well-formed symbolic problem.

Slide 22: Natural Language: a great place to find ill-formed problems
Imagine a computer program that could answer questions: "Can a cat drive a car?"

Slide 23: The program answers: "No. A cat has no hands and cannot drive a car."

Slide 24: The Turing Test for Intelligence
Alan Turing was a British mathematician who played a key role in World War II code-breaking and helped to develop the digital computer. He thought about intelligence and proposed a test.

Slide 25: Is the "mystery system" intelligent? Ask it questions via a Teletype machine. Is the mystery system a human or a machine? If you cannot accurately decide (and it is a machine), then the machine is intelligent.

Slide 26: Has any system passed the Turing Test yet? Ask Siri... Most people quickly conclude that Siri does not yet pass the Turing Test. But it's getting better all the time...

Slide 27: 1. The Classical (Logical) Approach to Artificial Intelligence
Basic concepts:
1. LOGIC is powerful enough to solve AI problems.
2. KNOWLEDGE must be represented in a formal system.
3. INFERENCE is the key mechanism to answer questions.
All humans will die. John is a human. Therefore, John will die.
(Source: hci.stanford.edu/~winograd)

Slide 28: Knowledge representation as a "semantic net" of related concepts.
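A semantic net is just labeled edges between concepts. A toy sketch; the node and relation names below are illustrative, not taken from any particular system:

```python
# A tiny semantic net: (subject, relation, object) edges between concepts.
edges = [
    ("cat", "is-a", "mammal"),
    ("mammal", "is-a", "animal"),
    ("cat", "has", "fur"),
]

def related(concept, relation):
    """All nodes reachable from `concept` by one edge labeled `relation`."""
    return [obj for subj, rel, obj in edges if subj == concept and rel == relation]

related("cat", "is-a")  # ["mammal"]
related("cat", "has")   # ["fur"]
```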

Slide 29: Example: Terry Winograd's SHRDLU system, a "toy world" of colored blocks (simulated by computer). Questions and commands (in English) are handled by:
1) Translating them into formal propositions.
2) Trying to prove or disprove them from the known facts.
3) Changing the system state if possible.

Slide 30: Person: Pick up a big red block. Computer: OK. Person: Grasp the pyramid.

Slide 31: Computer: I don't understand which pyramid you mean. (Because there are two of them.)

Slide 32: Watch the SHRDLU movie (3 minutes 20 seconds of it).

Slide 33: Excitement! SHRDLU worked for Blocks World. Followed by disappointment: most domains are MUCH harder.

Slide 34: 2. The Knowledge-Based Approach
Doug Lenat's talk at Google: Brittle Software (Lenat video: first 14 minutes).

Slide 35: Key concept: today we have brittle (easily broken) software. Danger: power is in the hands of "smart idiots".
Examples of Cyc's successes:
Request: Find a picture of someone smiling. Cyc found a picture of a man helping his daughter take her first step.
Request: Find something that could harm an airplane. Cyc located a video about an SA-7 missile.

Slide 36: The knowledge-based approach builds LARGE databases of facts. If SHRDLU's world was too small, let's build a big world of knowledge. The Cyc Project was started in 1984 by Douglas Lenat. Estimated effort (1986): 250,000 rules and 350 man-years of effort. Up until now: more than 1 million rules, and no end in sight.

Slide 37: Cycorp distributes the OpenCyc 4.0 database (for free), with about 239,000 terms and about 2,093,000 "triples" (rules) that attempt to represent human common sense.

Slide 38: Cycorp also has a private database with many more assertions and rules, written in the CycL language. Example: (#$isa #$BillClinton #$UnitedStatesPresident)
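The CycL assertion above can be mirrored as a plain tuple in a toy triple store. This is our own sketch of the idea, not Cyc's actual API:

```python
# A minimal triple store holding assertions like
# (#$isa #$BillClinton #$UnitedStatesPresident).
facts = {
    ("isa", "BillClinton", "UnitedStatesPresident"),
    ("isa", "UnitedStatesPresident", "Person"),
}

def isa(individual, collection):
    """Is the assertion (isa individual collection) in the knowledge base?"""
    return ("isa", individual, collection) in facts

isa("BillClinton", "UnitedStatesPresident")  # True
isa("BillClinton", "Airplane")               # False
```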

Slide 39: Cyc: an example of the complexity. (Diagram from cycorp.org.)

Slide 40: Cyc's method for growing the database:
* Attempt to automatically read encyclopedia articles (enCYClopedia!)
* Analyze successes and failures.
* Apply human "knowledge engineering" to improve the rules.

Slide 41: Cyc example: the terrorism database. Analyze the literature on terrorism; predict future events.
Success: predicted the anthrax mailings, 6 months before 9/11.
Miss: predicted that 1000 dolphins from Al-Qaeda would attack Hoover Dam.

Slide 42: Cyc: Status and Hope for the Future
The hope: Cyc will eventually become smart enough to teach itself. The results thus far: the government sponsors basic research and the terrorism database, and some commercial applications are being tried.

Slide 43: Many people in the Artificial Intelligence community doubt that Cyc will play a key role in successful AI. Why? It's too logical. Humans are inconsistent, emotional, intuitive; they act on their FEELINGS.

Slide 44: 3. Cognitive Science: How Humans Think

Slide 45: Philosophy. Example: the Mind-Body Problem. Is the mind part of the body, or separate? Metaphors: "The brain is a telephone switchboard." "The brain is a computer." Then the mind is software (which can be changed) and the brain is hardware (which can be broken), which leads to new ideas about good and evil.

Slide 46: Philosophy. Example: deductive logic. If A, and A implies B, then B.
A: A Hyundai is a car.
B: Cars are made by humans.
So: Hyundais are made by humans.

Slide 47: Inductive logic: if the events in class C are probable, and A is in class C, then A is probable.
90% of humans are right-handed. Jack is a human. So Jack is probably right-handed.
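Both inference patterns fit in a few lines. A sketch using the slides' examples; the string representation, function names, and 0.5 threshold are our own choices:

```python
# Deduction: follow A -> B implication links (the Hyundai example).
# Induction: a member of a mostly-P class is probably P (the Jack example).
rules = {"hyundai": "car", "car": "made-by-humans"}  # A -> B chain

def deduce(start, goal):
    """Repeatedly apply modus ponens along the rule chain from `start`."""
    current = start
    while current in rules:
        current = rules[current]
        if current == goal:
            return True
    return False

def probably(p_in_class, is_member, threshold=0.5):
    """Inductive step: conclude 'probably P' when most of the class is P."""
    return is_member and p_in_class > threshold

deduce("hyundai", "made-by-humans")  # True: Hyundais are made by humans
probably(0.9, True)                  # True: Jack is probably right-handed
```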

Slide 48: Philosophy and Intelligence
If a thing is intelligent, we expect it to use deductive logic and inductive logic.

Slide 49: Psychology. Definition: the study of mental functions and behaviors. Example: memory.

Slide 50: Some types of long-term memory: procedural (how to do something).

Slide 51: ... topographic (where am I, where am I going).

Slide 52: ... episodic (what happened).

Slide 53: ... semantic (facts, definitions, abstract knowledge).

Slide 54: ... visual (I've seen this before).

Slide 55: ... emotional (things I loved or hated).

Slide 56: If a thing is intelligent, we expect it to need (and have) most of the types of memory that humans have. Why? (Back to philosophy!)

Slide 57: Why? Inductive logic: "Most of the intelligent creatures we have seen had these kinds of memory."

Slide 58: Linguistics: the Scientific Study of Language
Key insight: analogies carry meaning. Definition: an analogy is a comparison of two systems. If you understand system A, it can help you to understand system B. Two kinds of analogy: simile and metaphor.

Slide 59: Simile: "The motor of a car is like a horse pulling a wagon."

Slide 60: Metaphor: "His mother was a tiger!"

Slide 61: Science is based on analogies. Example: Bohr's "solar system" model of the atom.

Slide 62: If a thing is intelligent, we expect that it will understand and use a natural language (like English or Korean), and we expect that it will make and use analogies to extend and communicate its knowledge.

Slide 63: AI and Cognitive Science: Marvin Minsky Makes an Analogy
Minsky's theory of mind: the mind is like a complex software system. The pieces of this software system will interact in ways that are different from traditional software. They will interact like a "society".

Slide 64: A mind is a large collection of small agents. They compete for control of the 'front office' (consciousness). The 'all or none' theory: you can't half walk and half sit. Many of these agents are working at any time.

Slide 65: Interior Grounding, Reflection and Self-Consciousness
A woman named Joan is crossing the street. A car sounds its horn.

Slide 66: Reaction: Joan reacted quickly to that sound.
Identification: She recognized it as being a sound.
Characterization: She classified it as the sound of a car.
Attention: She noticed certain things rather than others.
Indecision: She wondered whether to cross or retreat.

Slide 67: Imagining: She envisioned some possible future conditions.
Selection: She selected a way to choose among options.
Decision: She chose one of several alternative actions.
Planning: She constructed a multi-step action-plan.
Reconsideration: Later she reconsidered this choice.

Slide 68: These processes can be classified something like this, which is similar to Freud's model:

Slide 69: (Diagram: Minsky's classification of these processes.)

Slide 70: Minsky doesn't like the 'bottom-up' idea that sensations (alone) could lead to higher thought. He believes in a rich set of built-in capabilities. The details (which language, what culture, what house and street) are learned by each individual.

Slide 71: Minsky's influence: "The Society of Mind" has not yet led to a working AI system, but Minsky's early work led to the study of neural nets (our next topic).

Slide 72: Neural Nets, Perception and Learning

Slide 73: 4. Artificial Evolution
Nature "learns" by creating new species. Can we model that process, to solve problems?

Slide 74: Evolutionary Computing: Reviewing Genetics
Sexual reproduction has a big payoff. What is it? (In other words: why are males worth having?) Observation: bacteria and viruses without sexual reproduction have evolved several mechanisms for swapping DNA. It's almost as if the fundamental underlying metaphor for life is a flea market.

Slide 75: Known BEFORE DNA was discovered: the genome is a (very) long sequence of genes. Each gene controls the production of one kind of protein. Proteins are catalysts for chemical reactions, as well as the 'structural steel' of living organisms. A gene represents a finite alphabet of choices. The various versions of a gene are called alleles. If there are 10 ways to make collagen, there would be 10 alleles for the collagen gene.

Slide 76: Genotype and Phenotype
Genotype: your collection of genes. Phenotype: your 'rendering', your actual body as built. Genes, encoded in DNA, are organized into chromosomes. Individual humans have 23 pairs of chromosomes. When reproducing, each parent randomly contributes one of the two chromosomes to the child.

Slides 77-79: Genotype and Phenotype. (Diagram: Mom's and Dad's 23 chromosome pairs, with one chromosome of each pair chosen at random for the child.)

Slide 80: ... and it's a GIRL! A given pair of parents can produce 2^23, roughly 8 million, different genetic combinations.

Slide 81: Why does this system pay such big dividends? The gene pool is a toolkit of variations. Consider melanin. Assume variations from black to brown in various versions of the melanin gene. Your tribe moves from Africa to Europe. Your random genome remix produces kids of various shades. The ones with lighter skin get more vitamin D and thrive. They have more kids. The light-skin gene increases in the gene pool. A feedback loop.

Slide 82: NOTE: You didn't have to INVENT the variation (mutation). You had it stored away in your toolkit (genome). Mutation (the creation of new alleles or genes) is MUCH slower than selection among existing alleles. You need BOTH mechanisms.

Slide 83: Mutation: the big Disaster/Opportunity
Mutations are rare and usually fatal. A copying error occurs in a chromosome: some DNA is duplicated, some DNA is deleted, or one codon (and its amino acid) replaces another. Some mutations are beneficial, but most are fatal or neutral (for now). A slightly different kind of hemoglobin might not kill you, and might turn out to be BETTER against some parasite that attacks your great-great-great... grandchildren.

Slide 84: Diversity yields robustness. The environment produces an infinite suite of challenges. A rich gene pool provides instant options to try. A narrow gene pool is a ticket to extinction (Florida panthers). Hybrid vigor is a concept that every farmer knows: cross Hereford and Angus cows, and the calves grow faster.

Slide 85: It's like NAFTA or the European Union. Win-win really is possible! Your kids will survive better if your partner's tool-set complements rather than replicates your own.

Slide 86: Unnatural Selection
To build a "learning system" we need three things:
- a genotype (a coded representation)
- a phenotype (a rendering into a 'real world' of competition)
- a fitness function (something to measure and kill the losers)

Slide 87: Is a self-replicating robot without a genome impossible? That is not proven. But all examples thus far are trivial (crystal growth).
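The genotype / phenotype / fitness-function recipe can be sketched as a toy genetic algorithm. The bit-counting problem, parameter values, and function names below are our own stand-ins, not from the lecture:

```python
# A toy "unnatural selection" loop with the three required pieces:
# a genotype (a list of bits), a phenotype reduced to a fitness score
# (the count of 1-bits), and selection that kills the losers.
import random

def fitness(genotype):
    return sum(genotype)  # phenotype collapsed to a single score

def evolve(pop_size=30, genes=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]              # keep the best fifth
        children = []
        while len(elite) + len(children) < pop_size:
            mom, dad = rng.sample(elite, 2)       # re-mix survivor genes
            cut = rng.randrange(1, genes)
            child = mom[:cut] + dad[cut:]         # one-point crossover
            i = rng.randrange(genes)
            child[i] = 1 - child[i]               # one-bit mutation
            children.append(child)
        pop = elite + children
    return max(fitness(g) for g in pop)
```

Because the elite survive unchanged each round, the best score never decreases from generation to generation.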

Slide 88: A Genotype for Mini-Robots
Karl Sims decided to use a graph-theory genome. It is applied twice: once for the body, once for the nervous system. A random pool of 300 genomes is built. They are pre-selected by removing creatures with more than N body parts and creatures whose body parts interpenetrate (share space). The rules of the universe are established, e.g. gravity and a floor. A goal (fitness function) is set, e.g. the radius crawled in 1 minute. Run the simulation. Keep the best 1/5 of the population (60 individuals). Re-mix genes to replace the 240 who died. Run the simulation again.

Slide 89: Some of the goals:
- radial distance traveled
- linear distance traveled
- distance swum (or flown) through a fluid medium
- speed of approach toward a moving target point
- competition to capture a shared object

Slides 90-91: Competitive events: how do you pair them up?
- n x n takes n^2 time, and is too slow (each simulation is slow!)
- pairwise often means playing against an idiot
- n vs. best-of-last-round seemed to work well
Also considered: one-species versus two-species (breeding populations).
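The cost difference between the pairing schemes is easy to make concrete. A small helper of our own, counting simulations per generation:

```python
# Fitness evaluations per generation for a population of n creatures,
# under two of the pairing schemes discussed above.
def evaluations(n, scheme):
    if scheme == "all-pairs":      # n x n round robin: quadratic, too slow
        return n * (n - 1)
    if scheme == "best-of-last":   # everyone plays last round's champion
        return n
    raise ValueError("unknown scheme: " + scheme)

evaluations(300, "all-pairs")     # 89700 simulations per generation
evaluations(300, "best-of-last")  # 300 simulations per generation
```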

Slide 92: NOW watch the movie at http://www.youtube.com/watch?v=JBgG_VSP7f8

Slide 93: So... how was this done? The genome has NODEs and LINKs. (The names are just to help us think.) Example 1: from a segment, link to two other segments. Repeat any number of times, recursively.

Slide 94: Example 2: from a body segment, link to one other body segment and two leg segments. From a leg segment, link once to another leg segment.

Slide 95: Example 3: from a body, link to a head and four limbs. From a limb, link to another limb.

Slide 96: Brains and Bodies
Each sensor is contained in a specific body part. Sensors measure joint angles, forces, and properties of the world. The brain is a network of neurons (but not like real ones). The neurons' functions include: sum, product, sum-threshold, greater-than, ..., sin, cos, log, integrate, differentiate, ..., smooth, memory, oscillate-wave, oscillate-sawtooth.
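A neuron in this scheme is just a node that applies one function from a fixed menu to its inputs. A sketch covering a small subset of the slide's menu; the dictionary encoding is our own:

```python
# Each neuron applies one function from a fixed menu to its input values.
import math

NEURON_FUNCTIONS = {
    "sum": lambda inputs: sum(inputs),
    "product": lambda inputs: math.prod(inputs),
    "greater-than": lambda inputs: 1.0 if inputs[0] > inputs[1] else 0.0,
    "sin": lambda inputs: math.sin(inputs[0]),
}

def fire(kind, inputs):
    """Evaluate one neuron of the given kind on its inputs."""
    return NEURON_FUNCTIONS[kind](inputs)

fire("sum", [0.5, 0.25])          # 0.75
fire("greater-than", [2.0, 1.0])  # 1.0
```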

Slide 97: Neurons in Segments
P0 and P1 are photosensors. C0 and Q0 are contact sensors. E0 and E1 are effectors (joint angle drivers). The connections are evolved, not reasoned out. (There is a graph genome for the neurons, too.)


Slide 99: A single shared neuron group is also provided. ("Where" is it? Unspecified.) This capability allows for coordinated control.

Slide 100: The saw and wave oscillators are key elements.

Slide 101: What Changes in Each Generation?
NOTE: The system mixes sexual reproduction with mutation in an un-biological way: mutation occurs in every generation.
MUTATION:
1. Internal parameters (weights, oscillation frequencies) are randomly altered. Small alterations are more likely than big ones.
2. A new random node is added to the graph. (It may not connect; it will be discarded if not.)
3. New random connections are added, and existing ones are removed.
4. Unconnected elements are garbage-collected.
The outside (morphology) graphs are altered first, then the inside (neural) ones.
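The four mutation steps can be sketched on a toy graph genome. The dict/set encoding, names, and magnitudes below are illustrative assumptions, not Sims' actual format:

```python
# Mutation on a toy graph genome: nodes carry one numeric parameter each,
# edges are directed (src, dst) pairs.
import random

def mutate(nodes, edges, rng):
    """nodes: {name: parameter}; edges: set of (src, dst) node pairs."""
    nodes = dict(nodes)
    edges = set(edges)
    # 1. Randomly alter internal parameters (small changes most likely).
    for name in nodes:
        nodes[name] += rng.gauss(0, 0.1)
    # 2. Add a new random node; it may end up unconnected.
    new_name = "n%d" % len(nodes)
    nodes[new_name] = rng.uniform(-1, 1)
    # 3. Add a new random connection between existing nodes.
    src, dst = rng.sample(sorted(nodes), 2)
    edges.add((src, dst))
    # 4. Garbage-collect nodes that no edge touches.
    used = {end for edge in edges for end in edge}
    nodes = {k: v for k, v in nodes.items() if k in used}
    return nodes, edges
```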

Slide 102: MATING the GRAPHS
a. Crossover operation: a subset of parent 2 is inserted to replace a subset of parent 1.


Slide 104: b. Grafting operation: two parents are joined together (each loses one node).
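The crossover idea (a subset of parent 2 replacing a subset of parent 1) is easiest to see on a list-encoded genome; Sims' real genomes are graphs, but lists keep the operation visible. A sketch with our own function name:

```python
# One-segment crossover: the slice [start:end) of parent 2 replaces the
# corresponding slice of parent 1.
def crossover(parent1, parent2, start, end):
    return parent1[:start] + parent2[start:end] + parent1[end:]

crossover([1, 1, 1, 1], [0, 0, 0, 0], 1, 3)  # [1, 0, 0, 1]
```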

Slide 105: Results
Interbreeding populations often converge to uniformity, but successive runs often produce totally different results. Swimming produced: paddles, tail wagglers, specialized scullers, lots of flippers, and water snakes.

Slide 106: Walking produced: corner-walkers, rocking blocks, inchworms, legs, and hoppers. Light-following worked in both the walking and swimming environments.

Slide 107: What happened next? Not much (at least, nothing so spectacular as Sims' creatures). Why? The leap from simple goal-seeking motor activity ("tropisms") to interesting perception and cognition is verrrrrry looooong. Folks like Brooks and Minsky's successors are trying to bridge the gap. Fundamental insights are still needed.

