email@example.com – (314) 232-5042

On Chasing the Ether of Intelligent Systems

Dr. James Guffey
25 Oct 2006
INCOSE Meeting – St. Louis

MSG06-081332-001.ppt
Name: Dr. James A. Guffey
Title: Senior Principal Engineer
Key Technical Fields: Technology Integration; Autonomous Technologies
Phone: (314) 232-5042
E-mail: firstname.lastname@example.org

Dr. Guffey has been at Boeing/McDonnell Douglas for 27 years and has been heavily involved in the definition, implementation, and management of a large number of avionics research and development programs for the Army, Air Force, Navy, and DARPA, covering a wide variety of technology areas. In the technology area of Autonomy and Intelligent Decision Aiding, Dr. Guffey initiated the first in-house investigations into the use and applicability of Artificial Intelligence technology at then-MCAIR in 1984 and led the proposal team that won the DARPA Pilot's Associate (PA) program in the mid-1980s. He was the Technical/Deputy Program Manager of the PA program, which built an AI-based system to assist a fighter pilot in performing future advanced combat missions. The software architecture for PA was the basis for the Rotorcraft Pilot's Associate system, which in turn was the genesis of the Unmanned Combat Air Vehicle and some elements of the Future Combat Systems program for unmanned vehicles. He was a member of the Integration team for the Unmanned Systems organization when it was formed in 2001, responsible for developing a definition of potential future markets that could capitalize on advanced automation and autonomous applications for platforms and systems. In 2004 Jim acted as the Boeing management/technical liaison to the Carnegie Mellon University (CMU) Red Team for the DARPA Grand Challenge autonomous vehicle race in the deserts of California and Nevada. Part of that role was to serve as the broker reaching back into Boeing for technical help requested by the on-site Boeing employees at CMU to make the vehicles successful.
It Doesn't Help That the Definition Keeps Changing

At Every Step of the Way, on the Quest for Hard AI Systems, Whatever We Make Work, We Immediately Declare That It Is Not Intelligent

[Diagram: along an axis of Increasing Capabilities, System A, System B, System C, and System D are each labeled in turn "Not Intelligent??"]

Why? Because Once We See What We Did to Make It Work, We Realize That It Is Only Doing What We Told It to Do!

The Creator Knows It Isn't Smart, Even If the Audience Thinks It Is! (So Maybe We Should Get Rid of the Creators After They Create?)