
1 Robotics CSPP 56553 Artificial Intelligence March 10, 2004

2 Roadmap
–Robotics is AI-complete: integration of many AI techniques
–Classic AI: search in configuration space
–(Ultra) modern AI: subsumption architecture; multi-level control
–Conclusion

3 Mobile Robots

4 Robotics is AI-complete
Robotics integrates many AI tasks:
–Perception: vision, sound, haptics
–Reasoning: search, route planning, action planning
–Learning: recognition of objects/locations; exploration

5 Sensors and Effectors
Robots interact with the real world.
Need direct sensing for:
–Distance to objects: range finding/sonar/GPS
–Recognizing objects: vision
–Self-sensing (proprioception): pose/position
Need effectors to:
–Move self in the world (locomotion): wheels, legs
–Move other things in the world: manipulators (joints, arms), complex with many degrees of freedom

6 Real World Complexity
The real world is the hardest environment: partially observable, multiagent, stochastic.
Problems:
–Localization and mapping: where things are, what routes are possible, where the robot is
–Sensors may be noisy; effectors are imperfect: the robot doesn't necessarily go where intended
–Solved in a probabilistic framework

7 Navigation

8 Application: Configuration Space
Problem: robot navigation
–Move the robot between two objects without changing orientation
–Possible? Complex search space: boundary tests, etc.

9 Configuration Space
Basic problem: infinite states! Convert to a finite state space.
–Cell decomposition: divide the space into simple cells, each of which can be traversed "easily" (e.g., convex)
–Skeletonization: identify a finite number of easily connected points/lines that form a graph such that any two points are connected by a path on the graph
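A minimal sketch of the cell-decomposition idea, assuming the workspace is discretized into a uniform grid; the grid size, obstacle set, and helper names are illustrative, not from the slides:

```python
# Cell-decomposition sketch on an assumed uniform grid: each free cell becomes
# a node, adjacent free cells are connected, and the result is a finite graph
# that an ordinary search algorithm can traverse.

def free_cells(width, height, obstacles):
    """Return the set of traversable grid cells (cells not covered by an obstacle)."""
    return {(x, y) for x in range(width) for y in range(height)
            if (x, y) not in obstacles}

def neighbors(cell, cells):
    """4-connected adjacent free cells."""
    x, y = cell
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c in cells]

if __name__ == "__main__":
    obstacles = {(1, 1), (1, 2), (2, 1)}       # toy obstacle cells
    cells = free_cells(4, 4, obstacles)
    print(neighbors((0, 1), cells))            # [(0, 2), (0, 0)]
```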

10 Skeletonization Example
First step: problem transformation
–Model the robot as a point
–Model each obstacle by combining its perimeter with the path of the robot around it
–"Configuration space": simpler search

11 Navigation

12

13 Navigation as Simple Search
Replace a funny-shaped robot in a field of funny-shaped obstacles with a point robot in a field of configuration-space shapes.
All movement is: start to vertex, vertex to vertex, or vertex to goal.
Search space: start, vertices, goal, and their connections.
A* search yields an efficient least-cost path.
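To make this concrete, here is a small A* sketch over such a graph; the vertex coordinates, edge list, and straight-line heuristic are invented toy data, not the slide's actual obstacle field:

```python
import heapq, math

def astar(graph, coords, start, goal):
    """A* over a graph of start, obstacle vertices, and goal.
    graph: node -> list of neighbor nodes; coords: node -> (x, y)."""
    def h(n):                                   # straight-line distance to goal (admissible)
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)
    frontier = [(h(start), 0.0, start, [start])]
    best = {start: 0.0}                         # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in graph[node]:
            step = math.hypot(coords[node][0] - coords[nxt][0],
                              coords[node][1] - coords[nxt][1])
            if g + step < best.get(nxt, float("inf")):
                best[nxt] = g + step
                heapq.heappush(frontier,
                               (g + step + h(nxt), g + step, nxt, path + [nxt]))
    return None, float("inf")

# Toy configuration-space graph: start and goal connected via two obstacle vertices.
coords = {"S": (0, 0), "A": (1, 2), "B": (3, 1), "G": (4, 3)}
graph = {"S": ["A", "B"], "A": ["S", "G"], "B": ["S", "G"], "G": ["A", "B"]}
print(astar(graph, coords, "S", "G"))
```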

14 Online Search
Offline search: think a lot, then act once.
Online search: think a little, act, look, think, ...
–Necessary for exploration and (semi)dynamic environments
–Components: actions, step cost, goal test
–Compare cost to the optimal cost if the environment were known: the competitive ratio (possibly infinite)

15 Online Search Agents
Exploration: perform an action in a state, record the result, and search locally.
Why locally? DFS/BFS backtracking requires reversible actions.
Strategy: hill-climbing with memory
–If stuck, move to the apparently best neighbor
–Unexplored states: assume they are closest to the goal, which encourages exploration
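A rough sketch of this "hill-climb with memory" idea (essentially the LRTA* update); the successors, step_cost, and heuristic callables are assumed interfaces, and the line-world example is invented:

```python
# Online hill-climbing with memory: keep a table H of cost-to-goal estimates,
# move toward the apparently cheapest neighbor, and update H from experience.
# Unexplored neighbors fall back to an optimistic heuristic, encouraging exploration.

def lrta_step(state, H, successors, step_cost, heuristic, goal):
    """Choose one action online; returns the next state and updates H in place."""
    if state == goal:
        return state
    H.setdefault(state, heuristic(state))
    def estimate(nxt):                        # cost of going via this neighbor
        return step_cost(state, nxt) + H.get(nxt, heuristic(nxt))
    best = min(successors(state), key=estimate)
    H[state] = estimate(best)                 # learned cost estimate for this state
    return best

if __name__ == "__main__":
    # Toy line world: states 0..4, goal at 4, unit step costs.
    succ = lambda s: [max(0, s - 1), min(4, s + 1)]
    cost = lambda s, t: 1
    h = lambda s: 4 - s
    H, s = {}, 0
    while s != 4:
        s = lrta_step(s, H, succ, cost, h, 4)
    print("reached goal, learned estimates:", H)
```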

16 Acting without Modeling
Goal: move through terrain.
Problem I: don't know what the terrain is like (e.g., a rover on Mars). No model!
Problem II: motion planning is complex, too hard to model.
Solution: reactive control

17 Reactive Control Example
Hexapod robot in rough terrain
–Sensors inadequate for full path planning
–2 DOF per leg × 6 legs: the kinematics make a full plan intractable

18 Model-free Direct Control
No environmental model. Control law:
–Each leg cycles between on the ground and in the air
–Coordinate so that 3 opposing legs are on the ground at once, to retain balance
Simple, and works on flat terrain
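A toy sketch of this alternating-tripod control law on flat ground; the leg numbering, command names, and phase clock are illustrative assumptions:

```python
# Alternating-tripod gait sketch: two groups of three opposing legs swap between
# stance (on the ground, pushing back) and swing (in the air, moving forward).

TRIPOD_A = (0, 3, 4)      # e.g. front-left, middle-right, rear-left
TRIPOD_B = (1, 2, 5)      # the opposing three legs

def leg_commands(phase):
    """Return a command per leg for the current half-cycle (phase 0 or 1)."""
    stance, swing = (TRIPOD_A, TRIPOD_B) if phase == 0 else (TRIPOD_B, TRIPOD_A)
    cmds = {}
    for leg in stance:
        cmds[leg] = "push_back"        # on the ground: propel the body forward
    for leg in swing:
        cmds[leg] = "lift_and_swing"   # in the air: move the leg forward for the next stance
    return cmds

if __name__ == "__main__":
    for tick in range(4):
        print(tick, leg_commands(tick % 2))
```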

19 Handling Rugged Terrain
Problem: obstacles block a leg's forward motion.
Solution: add a control rule: if blocked, lift higher and repeat.
Implementable as an FSM: a reflex agent with state

20 FSM Reflex Controller
State diagram (S1–S4): lift up → move forward → stuck? yes: retract, lift higher, and retry; no: set down → push back → repeat
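A minimal per-leg sketch of that four-state reflex cycle; the state names follow the diagram, while the stuck() sensor predicate and action strings are assumed interfaces:

```python
# Per-leg reflex FSM: lift up -> move forward -> stuck? (yes: retract, lift
# higher, retry) -> set down -> push back, then repeat.

def leg_fsm(state, stuck):
    """One transition of the reflex controller; returns (action, next_state)."""
    if state == "S1":
        return "lift_up", "S2"                      # raise the leg off the ground
    if state == "S2":
        return "move_forward", "S3"                 # swing the leg forward
    if state == "S3":
        if stuck():                                 # leg blocked by an obstacle?
            return "retract_and_lift_higher", "S2"  # reflex: retry the swing higher
        return "set_down", "S4"                     # place the leg back on the ground
    if state == "S4":
        return "push_back", "S1"                    # stance: push the body forward
    raise ValueError(f"unknown state {state}")

if __name__ == "__main__":
    state = "S1"
    blocked = iter([True, False, False])            # pretend the leg is blocked once
    for _ in range(8):
        action, state = leg_fsm(state, lambda: next(blocked))
        print(action, "->", state)
```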

21 Emergent Behavior
The reactive controller walks robustly
–Model-free; no search or planning
–Depends on feedback from the environment
Behavior emerges from interaction: simple software + complex environment
The controller can also be learned (reinforcement learning)

22 Subsumption Architecture
Assembles reactive controllers from finite state machines
–Tests and conditions on sensor variables
–Arcs are tagged with messages, sent when the arc is traversed; messages go to effectors or other FSMs
–Clocks control the time to traverse an arc: augmented FSM (AFSM), e.g. the previous example
Reacts to contingencies between robot and environment; synchronizes and merges outputs from multiple AFSMs

23 Subsumption Architecture
Controllers are built by composing AFSMs
–Bottom-up design: single leg, to multiple legs, to obstacle avoidance
–Avoids complexity and brittleness: no need to model drift, sensor error, effector error, or full motion
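A tiny illustration of the bottom-up composition idea: a higher layer subsumes (overrides) the layers below it only when its condition fires. The behavior names and sensor dictionary are invented for the example:

```python
# Subsumption-style arbitration sketch: behaviors are listed bottom-up, and a
# higher layer suppresses the layers below whenever it produces an output.

def walk(sensors):
    return "step_forward"                 # base layer: always proposes walking

def avoid_obstacle(sensors):
    if sensors.get("obstacle_ahead"):
        return "turn_away"                # higher layer: only fires near obstacles
    return None                           # no output -> defer to lower layers

LAYERS = [walk, avoid_obstacle]           # listed bottom-up

def act(sensors):
    """The highest layer with an output wins (subsumes the layers below)."""
    command = None
    for behavior in LAYERS:               # later (higher) layers overwrite earlier ones
        out = behavior(sensors)
        if out is not None:
            command = out
    return command

if __name__ == "__main__":
    print(act({"obstacle_ahead": False}))  # step_forward
    print(act({"obstacle_ahead": True}))   # turn_away
```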

24 Subsumption Problems
Relies on raw sensor data
–Sensitive to failure; limited integration
–Typically restricted to local tasks
Hard to change task
Hard to understand: behavior is emergent rather than a specified plan, and the interactions of multiple AFSMs are complex

25 Solution
Hybrid approach: integrates classic and modern AI
Three-layer architecture:
–Base reactive layer: low-level control, fast sensor-action loop
–Executive (glue) layer: sequences actions for the reactive layer
–Deliberative layer: generates global solutions to complex tasks with planning; model-based (pre-coded and/or learned); slower
Some variant of this appears in most modern robots
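A schematic sketch of how the three layers might be wired into one control loop; every class, method, and sensor name here is a placeholder, not a real robot API:

```python
# Three-layer architecture sketch: the deliberative layer plans slowly over a
# model, the executive layer sequences the plan into primitive behaviors, and
# the reactive layer closes the fast sensor-effector loop and can override.

class Robot:
    def deliberate(self, world_model, goal):
        """Slow, model-based planning (e.g. search over a map) -> list of waypoints."""
        return ["waypoint_1", "waypoint_2", goal]

    def executive(self, plan):
        """Sequence the plan into primitive behaviors for the reactive layer."""
        for waypoint in plan:
            yield ("go_to", waypoint)

    def reactive(self, behavior, sensors):
        """Fast loop: execute the requested behavior, but reflexes take priority."""
        if sensors.get("obstacle_ahead"):
            return "swerve"               # reflex overrides the requested behavior
        return behavior

if __name__ == "__main__":
    robot = Robot()
    plan = robot.deliberate(world_model=None, goal="charging_dock")
    for behavior in robot.executive(plan):
        print(robot.reactive(behavior, sensors={"obstacle_ahead": False}))
```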

26 Conclusion
Robotics as an AI microcosm
–Back to the PEAS model: performance measure, environment, actuators, sensors
–Robots are agents acting in the full, complex real world, relying on actuators and sensing of the environment
–Exploits perception, learning, and reasoning
–Integrates classic AI (search, representation) with modern AI (learning, robustness, real-world focus)

