Robot Control

Open Loop Control Sends commands to make a robot perform some movement without attempting to check whether it is doing so correctly. For example, a rover on Mars is told by a human operator to go forward 1 meter. If the wheels pick up dirt or hit a rock, the robot will not move straight.

Feedback or Closed Loop Control Feedback control is a means of getting a system (a robot) to achieve and maintain a desired state, usually called the set point, by continuously comparing its current state with its desired state. Feedback refers to the information that is sent back, literally “fed back,” into the system’s controller.
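
As a rough illustration, a feedback loop can be written as a sense-compare-act cycle. The sketch below is a minimal, hypothetical example in Python; read_sensor, send_command, and the gain in compute_command are placeholders, not part of any particular robot API.

```python
import time

SET_POINT = 1.0  # desired state (set point), e.g. distance to a wall in meters

def compute_command(error):
    # Placeholder controller: respond in proportion to the error.
    return 0.5 * error

def feedback_loop(read_sensor, send_command, dt=0.05):
    # Sense, compare with the set point, act, and repeat.
    while True:
        current = read_sensor()                # measure the current state
        error = SET_POINT - current            # how far from the desired state
        send_command(compute_command(error))   # the error is "fed back" as a command
        time.sleep(dt)                         # wait one sampling period
```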

Goals The desired state of the system, also called the goal state, is where the system wants to be. Some goals are maintenance goals: they require ongoing active work on the part of the system rather than being reached once and then forgotten. Keeping a biped robot balanced, for example, is a maintenance goal.

Error The difference between the current state and the goal state of a system is called the error. The control system is designed to minimize that error. Feedback control calculates the error in order to help the robot reach the goal. When the error is zero (or small enough), the goal state is reached.

Feedback System Feedback takes the sensor error and converts it into a command.

Feedback Example As a real world example of error and feedback, let’s consider the game that’s sometimes called “hot and cold,” in which you have to find or guess some hidden object, and your friends help you by saying things like “You’re getting warmer, hotter, colder, freezing” and so on.

Imagine an overly simplified version of the same game, in which your friends tell you only “You are there, you win!” or “Nope, you are not there.” In that case, they are telling you only whether the error is zero or nonzero (whether you are at the goal state or not). This is not very much information, since it does not help you figure out which way to go in order to get closer to the goal and minimize the error.

In the normal version of the game, when you are told “hot” or “cold,” you are being given the direction of the error, which allows for minimizing the error and getting closer to the goal. When the system knows how far off it is from the goal, it knows the magnitude of error, the distance to the goal state. In the “hot and cold” game, the gradations of freezing, chilled, cool, warm, and so on are used to indicate the distance from (or closeness to) the goal object.

An Example of a Feedback Control Robot How would you write a controller for a wall-following robot using feedback control?

The first step is to consider the goal of the task. In wall-following, the goal state is a particular distance, or range of distances, from a wall. This is a maintenance goal, since wall-following involves keeping that distance over time.

Given the goal, it is simple to work out the error. In the case of wall-following, the error is the difference between the desired distance from the wall and the actual distance at any point in time. Whenever the robot is at the desired distance (or range of distances), it is in the goal state. Otherwise, it is not.

Sensors What sensor(s) would you use for a wall-following robot and what information would they provide?

The rate at which a new distance-to-wall value is sensed and computed is called the sampling rate.

Whatever sensor is used, assume that it provides the information needed to compute distance-to-wall. Consider the following controller:
If distance-to-wall is in the desired range, keep moving forward.
If distance-to-wall is larger than desired, turn toward the wall.
Otherwise, turn away from the wall.
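
A direct translation of this controller into code might look like the sketch below. The range bounds and the robot method names (forward, turn_toward_wall, turn_away_from_wall) are illustrative assumptions, not a real API.

```python
DESIRED_MIN = 0.4   # assumed lower bound of the desired range, in meters
DESIRED_MAX = 0.6   # assumed upper bound

def wall_follow_step(distance_to_wall, robot):
    if DESIRED_MIN <= distance_to_wall <= DESIRED_MAX:
        robot.forward()              # in the desired range: keep moving forward
    elif distance_to_wall > DESIRED_MAX:
        robot.turn_toward_wall()     # too far: turn toward the wall
    else:
        robot.turn_away_from_wall()  # too close: turn away from the wall
```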

Given the previous controller algorithm, the robot will keep moving, wiggling back and forth as it goes. How much switching back and forth will it do? That depends on two parameters:
How often the error is computed.
How much of a correction (turn) is made each time.

Consider the following controller:
If distance-to-wall is exactly as desired, keep going.
If distance-to-wall is larger than desired, turn by 45 degrees toward the wall.
Otherwise, turn by 45 degrees away from the wall.

It oscillates a great deal and rarely if ever reaches the desired distance before getting too close to or too far from the wall. In general, the behavior of any simple feedback system oscillates around the desired state. Therefore, the robot oscillates around the desired distance from the wall; most of the time it is either too close or too far away.

How can we decrease this oscillation? There are a few things we can do: The first is to compute the error often, so the robot can turn often rather than rarely. Another is to adjust the turning angle so the robot turns by small rather than large angles. Still another is to find just the right range of distances that defines the robot’s goal.

Damping refers to the process of systematically decreasing oscillations. A system is properly damped if it does not oscillate out of control. How the motor responds to speed commands plays a key part in control, as does wear and tear on the gears. Actuator uncertainty makes it impossible for a robot (or a human, for that matter) to know the exact outcome of an action ahead of time, even for a simple action such as “Go forward three feet.”

The three most commonly used types of feedback control are:
Proportional control (P)
Proportional Derivative control (PD)
Proportional Integral Derivative control (PID)

Proportional Control The basic idea of proportional control is to have the system respond in proportion to the error, using both the direction and the magnitude of the error. In the wall-following example, it would use distance-to-wall to determine the angle and distance and/or speed with which the robot turns. In control theory, the parameters that determine the magnitude of the system’s response are called gains.
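
A proportional wall-following controller might be sketched as below. The gain and set-point values are arbitrary, and the sign convention (positive command turns toward the wall) is an assumption.

```python
KP = 2.0                 # proportional gain (chosen by tuning; arbitrary here)
DESIRED_DISTANCE = 0.5   # set point, in meters

def p_controller(distance_to_wall):
    error = distance_to_wall - DESIRED_DISTANCE
    # The response uses both the direction and the magnitude of the error:
    # a positive error (too far from the wall) produces a turn toward it.
    return KP * error    # turn-rate command
```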

Derivative Control Since momentum = mass ∗ velocity, you can control momentum by controlling the velocity of the system. A derivative controller responds to the rate of change of the error: it corrects for the momentum of the system as it approaches the desired state. In the wall-following example, it would slow the robot down and decrease its turning angle as the distance to the wall approaches the desired distance.
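
Adding a derivative term to the proportional controller gives a PD controller. The sketch below assumes a fixed time step and hand-picked gains; it estimates the error's rate of change from the previous error value.

```python
KP = 2.0   # proportional gain (illustrative)
KD = 0.5   # derivative gain (illustrative)

def pd_controller(error, previous_error, dt):
    # The derivative term estimates how fast the error is changing and
    # damps the response as the robot nears the desired distance.
    derivative = (error - previous_error) / dt
    return KP * error + KD * derivative
```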

Integral Control Some errors are too small to trigger a correction on their own but build up over time. The system integrates (sums up) these incremental errors, and once they reach some predetermined threshold (once the cumulative error gets large enough), the system does something to compensate or correct. An example is a robot lawn mower that drifts slightly each time it turns to cut the next section of grass; by tracking that error, it can compensate for it over time.
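
The lawn-mower example might be sketched as accumulating a small per-pass error and correcting once it crosses a threshold. The threshold value and the adjust_heading method are made up for illustration.

```python
CORRECTION_THRESHOLD = 0.2   # assumed cumulative-error limit

class DriftCorrector:
    def __init__(self):
        self.cumulative_error = 0.0

    def update(self, error_this_pass, robot):
        self.cumulative_error += error_this_pass          # sum up incremental errors
        if abs(self.cumulative_error) > CORRECTION_THRESHOLD:
            robot.adjust_heading(-self.cumulative_error)  # compensate once it is large enough
            self.cumulative_error = 0.0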

Proportional Derivative (PD) control is extremely useful and widely applied in industrial process control. Proportional Integral Derivative (PID) control combines the proportional (P), integral (I), and derivative (D) control terms.
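
Putting the three terms together gives the familiar PID form. The sketch below is a minimal textbook-style implementation, assuming a fixed time step and gains chosen by hand; it is not tied to any particular robot platform.

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = None

    def update(self, error, dt):
        self.integral += error * dt                          # I: accumulated error
        derivative = 0.0
        if self.previous_error is not None:
            derivative = (error - self.previous_error) / dt  # D: rate of change of error
        self.previous_error = error
        return (self.kp * error                              # P: proportional to error
                + self.ki * self.integral
                + self.kd * derivative)
```

In use, a controller such as PID(kp=2.0, ki=0.1, kd=0.5) would be updated once per control cycle with the current error (desired minus measured) and the elapsed time dt.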

What Can the Robot Represent? There are numerous aspects of the world that a robot can represent and model, and numerous ways in which it can do so. The robot can represent information about:
Self: battery life, physical limits
Environment: navigable spaces, structures
Objects, people, other robots: detectable things in the world
Actions: outcomes of specific actions in the environment
Task: what needs to be done
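
One simple way to hold such a model is a plain data structure. The fields below are purely illustrative groupings of the categories above, not a standard representation.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    battery_level: float = 1.0                            # self: remaining battery
    physical_limits: dict = field(default_factory=dict)   # self: joint/speed limits
    navigable_spaces: list = field(default_factory=list)  # environment: where it can go
    detected_objects: list = field(default_factory=list)  # objects, people, other robots
    action_outcomes: dict = field(default_factory=dict)   # predicted effects of actions
    task: str = ""                                        # what needs to be done
```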

Control Architectures The job of the controller is to provide the brains for the robot, so that it can be autonomous and achieve its goals. The robot can sense multiple things at once, and the controller helps decide what the robot should be attending to.

Regardless of which language is used to program a robot, what matters is the control architecture used to implement the controller, because not all architectures are the same.

Deliberative Control Planning is the process of looking ahead at the outcomes of the possible actions, and searching for the sequence of actions that will reach the desired goal. Search is an inherent part of planning. It involves looking through the available representation “in search of” the goal state.

Deliberative, planner-based architectures involve three steps that need to be performed in sequence:
1. Sensing (S)
2. Planning (P)
3. Acting (A), executing the plan
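
The three steps can be written as a sequential loop. In the sketch below, sense, plan, and execute are placeholders for a real perception system, planner, and executor.

```python
def deliberative_loop(sense, plan, execute):
    while True:
        world_model = sense()            # 1. Sensing (S): build or update the representation
        action_plan = plan(world_model)  # 2. Planning (P): search for actions that reach the goal
        for action in action_plan:       # 3. Acting (A): execute the plan step by step
            execute(action)
```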

Reactive Control You can think of reactive rules as being similar to reflexes, innate responses that do not involve any thinking. The best way to keep a reactive system simple and straightforward is to have each unique situation (state) that can be detected by the robot’s sensors trigger only one unique action of the robot. A reactive system must support parallelism so that it can check multiple sensors (and evaluate multiple rules) at the same time.
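
A reactive controller can be written as a set of condition-action rules. The sensor keys and robot methods below are assumptions chosen for illustration; a real system would typically evaluate such rules concurrently or by priority.

```python
def reactive_step(sensors, robot):
    # Each detectable situation triggers exactly one action.
    if sensors["bumper_pressed"]:
        robot.back_up()        # reflex: collision -> retreat
    elif sensors["obstacle_ahead"]:
        robot.turn_left()      # reflex: obstacle -> avoid
    else:
        robot.forward()        # default: keep moving
```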

Hybrid Control Hybrid control combines reactive and deliberative control within a single robot control system by having:
A planner (the deliberative layer)
A middle layer that links the other layers together (by issuing commands)
A reactive layer
A hybrid system uses both open- and closed-loop execution.
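
A minimal hybrid arrangement might look like the sketch below, with a slow planner feeding goals through a middle layer to a fast reactive layer. All object and method names are illustrative assumptions.

```python
def hybrid_step(planner, sequencer, reactive_layer, sensors, robot):
    goal = planner.current_goal()                    # deliberative layer: slow, plans ahead
    command = sequencer.next_command(goal, sensors)  # middle layer: turns the plan into commands
    reactive_layer.execute(command, sensors, robot)  # reactive layer: fast, handles immediate events
```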

Behavior-Based Control Behavior-based control (BBC) involves the use of "behaviors" as modules for control. Behaviors achieve and/or maintain complex goals. A homing behavior achieves the goal of getting the robot to the home location. A wall-following behavior maintains the goal of following a wall. Behaviors take time to execute and are not instantaneous, and they constantly monitor the sensors and the status of other behaviors.
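
Behaviors can be represented as modules that are repeatedly checked and run. The sketch below is a simplified sequential stand-in for what would normally be concurrent behaviors; the class layout, sensor key, and method names are made up for illustration.

```python
class Behavior:
    def applicable(self, sensors):
        raise NotImplementedError   # should this behavior be active right now?

    def step(self, sensors, robot):
        raise NotImplementedError   # do a small amount of work toward the goal

class WallFollowing(Behavior):
    def applicable(self, sensors):
        return sensors["wall_detected"]

    def step(self, sensors, robot):
        robot.follow_wall()         # maintenance goal: keep following the wall

def behavior_loop(behaviors, sensors, robot):
    # Constantly monitor the sensors and the status of each behavior.
    for behavior in behaviors:
        if behavior.applicable(sensors):
            behavior.step(sensors, robot)
```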