History of Artificial Intelligence Aristotle (384-322 B.C.) set out to explain and codify certain styles of deductive reasoning that he called syllogisms. Leibniz (1646-1716) dreamed of a universal algebra by which all knowledge, including moral and metaphysical truths, could some day be brought within a single deductive system. George Boole, in his book The Laws of Thought, developed Boolean algebra, which stands as the foundation of propositional logic. Gottlob Frege proposed a notational system for mechanical reasoning and in doing so invented the predicate calculus. He called his language Begriffsschrift, which can be translated as "concept writing".
The predicate calculus and several of its variants constitute the foundation of knowledge representation in AI. Digital computers were first developed in the 1940s and 1950s. Warren McCulloch and Walter Pitts theorized about the relationships between simple computing elements and biological neurons. They showed that it was possible to compute any computable function with networks of logical gates. Alan Turing published "Computing Machinery and Intelligence" (1950), where he proposed the so-called Turing Test. In the 1950s, papers appeared describing the first computer programs that could play chess [Shannon, 1950], [Newell, Shaw & Simon, 1958], play checkers [Samuel, 1959 & 1967], and prove theorems in plane geometry [Gelernter, 1959].
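The McCulloch-Pitts idea of computing with networks of logical gates can be illustrated with a minimal sketch. The threshold units below are in the spirit of their model; the particular weights and thresholds are illustrative choices, not taken from the original paper:

```python
# A McCulloch-Pitts-style threshold unit: it fires (outputs 1) exactly
# when the weighted sum of its binary inputs reaches its threshold.
def mp_unit(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Basic logic gates realized as single units (weights/thresholds chosen by hand).
def AND(x, y):
    return mp_unit([x, y], [1, 1], 2)

def OR(x, y):
    return mp_unit([x, y], [1, 1], 1)

def NOT(x):
    return mp_unit([x], [-1], 0)

# Composing such gates into a network gives further Boolean functions,
# e.g. XOR, which no single threshold unit can express on its own.
def XOR(x, y):
    return AND(OR(x, y), NOT(AND(x, y)))
```

Since AND, OR, and NOT suffice to express any Boolean function, networks of these units can, in this sense, compute any such function.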
Dartmouth conference on Artificial Intelligence (1956): Allen Newell, Herbert Simon, John McCarthy and Marvin Minsky. The central conjecture: every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. This was the first time the name "artificial intelligence" was used. Frank Rosenblatt explored the use of networks of neuron-like elements, called perceptrons, for learning and for pattern recognition. Much of the early AI work (1960s and 1970s) explored a variety of problem representations, search techniques, and general heuristics. Researchers employed these techniques in programs that could solve simple puzzles, play games, and retrieve information, but the programs failed to tackle real-world problems.
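Rosenblatt's perceptron learning can be sketched as follows. This is a toy illustration, not Rosenblatt's original formulation: the training data (the OR function), the learning rate, and the epoch count are all assumptions made for the example:

```python
# Sketch of the perceptron learning rule on a toy problem:
# learn the logical OR function from labelled examples.
def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

def train_perceptron(samples, epochs=10, lr=1.0):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            # error is -1, 0, or +1; weights move toward correct output
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# OR is linearly separable, so the rule converges to a correct classifier.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

The rule is guaranteed to converge only on linearly separable problems, which is exactly the limitation that later criticism of perceptrons focused on.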
The late 1970s and early 1980s saw the development of the so-called expert systems: more powerful programs that required much more built-in knowledge about the domain of application. DENDRAL: a system for predicting the structure of organic molecules given their chemical formula and mass spectrogram analysis [Feigenbaum, Buchanan & Lederberg, 1971]. It was the first expert system, and it demonstrated the importance of large amounts of domain-specific knowledge. MYCIN: a system developed at Stanford to perform medical diagnosis [Shortliffe, 1976], [Miller, Pople & Myers, 1982]. Prospector: a system capable of evaluating potential ore deposits [Campbell et al., 1982].
Human intelligence encompasses many abilities, including the ability to perceive and analyse a visual scene, and the ability to understand and generate language. These specific topics have received much attention, for example in Terry Winograd's natural language processing program and in LUNAR [Woods, 1973]. Neural network research resumed in the 1980s. Networks of nonlinear elements with adjustable-strength interconnections are now recognized as an important class of nonlinear modelling tools. Game-playing programs have achieved important successes. On May 11, 1997, an IBM program called DEEP BLUE beat the reigning chess champion, Garry Kasparov, by 3.5 to 2.5 in a six-game match.
Looking at the future [Nilsson, 1998]: a new emphasis on integrated autonomous systems, including robots and softbots [Etzioni & Weld, 1994] and software agents that roam the internet finding information they assume will interest their users; and the need to improve the location of useful information on the Net through the use of semantic architectures.
DARPA Grand Challenge 2004 1 A mandate from the US Congress states that at least a third of all military vehicles must be autonomous by the year 2015. Unhappy with the rate of technological progress, DARPA (the US military funding agency) decided to hold a competition for a robot that could tackle a 150-mile desert course. A one-million-dollar prize was offered to the winner. The exact course was kept secret until about three hours before the start of the race. DARPA gave each team a CD with the latitude and longitude of about 2000 waypoints that could be located through the robots' onboard GPS equipment. The course included utility roads, elevation changes, and some drops. Once each robot left the starting line, it was on its own; no human intervention was allowed. ________________________________________ 1 This information was taken from Chris Thornton's Web page.
DARPA Grand Challenge 2004 Spectators and media were kept off the official course with the help of about 80 U.S. Marines and hundreds of volunteers. The Bureau of Land Management worked for more than a year evaluating possible environmental damage the robots might do. DARPA positioned 10 tow trucks along the race course in anticipation of some robot vehicles needing help.
DARPA Grand Challenge 2004 (Results) What actually happened: the robots had 10 hours to complete the 150-mile course, but none of them made it. In fact, only four of the entrants got more than 8 miles before crashing or suffering crippling technical problems. Many got no further than the starting area. The prize went unclaimed. DARPA's own summary of the performance of each of the teams includes the following statement: Vehicle 22 (Red Team) - At mile 7.4, the vehicle went off course, got caught on an obstacle, and rubber on the front wheels caught fire, which was quickly extinguished. The vehicle has been disabled.
DARPA Grand Challenge 2004 (Chris Thornton) The DARPA fiasco is a kind of re-run of the 1970s. The failure of the robots competing in the Grand Challenge is a powerful (and embarrassing) reminder of a lesson that AI researchers started to learn in the early 1970s: real-world tasks are much harder than we expect and, in general, totally beyond existing levels of competence.