
1 VISION-AUGMENTED INERTIAL NAVIGATION BY SENSOR FUSION FOR AN AUTONOMOUS ROTORCRAFT VEHICLE. C.L. Bottasso, D. Leonello, Politecnico di Milano. AHS International Specialists' Meeting on Unmanned Rotorcraft, Phoenix, AZ, January 20-22, 2009.

2 Vision-Augmented Inertial Navigation, POLITECNICO di MILANO, DIA. Outline:
- Introduction and motivation
- Inertial navigation by measurement fusion
- Vision-augmented inertial navigation: stereo projection and vision-based position sensors; vision-based motion sensors; outlier rejection
- Results and applications
- Conclusions and outlook

3 Rotorcraft UAVs at PoliMI. Low-cost platform for development and testing of navigation and control strategies (including vision, flight envelope protection, etc.). Vehicles: off-the-shelf hobby helicopters. On-board control hardware based on the PC-104 standard. Bottom-up approach, everything is developed in-house:
- Inertial navigation system (this paper)
- Guidance and control algorithms (AHS UAV '07: C.L. Bottasso et al., path planning by motion primitives, adaptive flight control laws)
- Linux-based real-time OS
- Flight simulators
- System identification (estimation of inertia, estimation of aerodynamic model parameters from flight test data)
- Etc.

4 UAV Control Architecture. Hierarchical three-layer control architecture (Gat 1998):
- Strategic layer: assign mission objectives (typically relegated to a human operator)
- Tactical layer: generate vehicle guidance information, based on input from the strategic layer and ambient mapping information
- Reflexive layer: track the trajectory generated by the tactical layer; control, stabilize and regulate the vehicle
Two sensing tasks support this architecture: sensing the vehicle state of motion (to enable planning and tracking) and sensing the environment (to enable mapping).

5 Sensing Architecture. Sensing of the vehicle motion states: accelerometer, gyro, sonar altimeter, magnetometer, GPS, stereo cameras and other sensors feed a sensor fusion algorithm that produces the state estimates. Sensing of the environment for mapping: stereo cameras and laser scanner feed the ambient map and obstacle/target recognition. Advantages:
- Improved accuracy/better estimates, especially in proximity of obstacles
- Tolerance to sensor loss (e.g. because of faults, or GPS loss indoors, under vegetation or in urban canyons)
Proposed approach: recruit vision sensors for improved state estimation (this paper).

6 Classical Navigation System. Sensor fusion by Kalman-type filtering to account for measurement and process noise:
- States: $x := (v_B^{E\,T}, \omega^{B\,T}, r_{OB}^{E\,T}, q)^T$
- Inputs: $u := (a_{\mathrm{acc}}^T, \omega_{\mathrm{gyro}}^T)^T$
- Outputs: $y := (v_G^{E\,T}, r_{OG}^{E\,T}, h, m^{B\,T})^T$
- Measures: $z := (v_{\mathrm{gps}}^T, r_{\mathrm{gps}}^T, h_{\mathrm{sonar}}, m_{\mathrm{magn}}^T)^T$
Process and measurement models:
$\dot x(t) = f\big(x(t), u(t), \nu(t)\big)$, $\quad y(t_k) = h\big(x(t_k)\big)$, $\quad z(t_k) = y(t_k) + \mu(t_k)$
State correction:
$\hat x(t_{k+1}) = \bar x(t_{k+1}) + K(t_{k+1})\,\big(z(t_{k+1}) - \bar y(t_{k+1})\big)$
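The correction step on this slide can be sketched as a standard Kalman update in Python. This is a minimal sketch only: the gain computed from the innovation covariance is the textbook linear-Kalman form, not necessarily the exact filter variant used by the authors, and all names here are illustrative.

```python
import numpy as np

def kalman_correct(x_pred, P_pred, z, H, R):
    """One Kalman-type correction step: x_hat = x_pred + K (z - y_pred),
    with predicted output y_pred = H x_pred and gain
    K = P H^T (H P H^T + R)^-1 (textbook linear-Kalman form)."""
    y_pred = H @ x_pred                      # predicted measurement
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_hat = x_pred + K @ (z - y_pred)        # corrected state estimate
    P_hat = (np.eye(len(x_pred)) - K @ H) @ P_pred  # corrected covariance
    return x_hat, P_hat
```

With equal prior and measurement covariances, the corrected state lands halfway between prediction and measurement, as expected from the gain formula.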

7 Vision-Based Navigation System. Accelerometer, gyro, sonar altimeter, magnetometer, GPS, other sensors and the stereo cameras (through the KLT tracker and an outlier rejection stage) all feed the sensor fusion algorithm that produces the state estimates. Kanade-Lucas-Tomasi (KLT) tracker:
- Tracks feature points in the scene across the stereo cameras and across time steps
- Each tracked point becomes a vision-based motion sensor
- Has its own internal outlier rejection algorithm

8 Vision-Based Position Sensor. Feature point projection: $p = \pi(d^C)$. Stereo vision: the disparity $d_p = p_1 - p_0$, computed with the Kanade-Lucas-Tomasi (KLT) algorithm, recovers the point position $d^C$ in the camera frame from the baseline $b$ (standard stereo triangulation: depth scales as $b/d_p$). [Figure: stereo geometry with cameras $C$, $C'$, baseline $b$, focal length $f$, feature point $P$.] [Figure: effect of one pixel error on estimated distance (BumbleBee X3 camera).] Remark: stereo vision info from low-resolution cameras is noisy, need care.
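The sensitivity of stereo depth to pixel errors can be illustrated with the standard pinhole triangulation relation, depth = f·b/disparity. The baseline and focal length below are illustrative numbers, not the BumbleBee X3 specifications:

```python
import numpy as np

def stereo_depth(disparity_px, baseline_m, focal_px):
    """Depth from stereo disparity (pinhole model): Z = f * b / d."""
    return focal_px * baseline_m / disparity_px

def one_pixel_depth_error(depth_m, baseline_m, focal_px):
    """Depth change caused by a 1-pixel disparity error at a given depth.
    The error grows roughly quadratically with distance: dZ ~ Z^2 / (f b)."""
    d = focal_px * baseline_m / depth_m        # disparity at this depth
    return stereo_depth(d - 1.0, baseline_m, focal_px) - depth_m

# Illustrative numbers (NOT the BumbleBee X3 specs): 12 cm baseline, 800 px focal length
for Z in (2.0, 5.0, 10.0):
    print(Z, one_pixel_depth_error(Z, 0.12, 800.0))
```

This is why the slide's remark holds: the same one-pixel error that is negligible at 2 m becomes a large fraction of the estimated distance at 10 m.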

9 Feature Point Tracking. [Figure: left and right camera images at times k and k+1.] Tracking across cameras and tracking across time steps.

10 Vision-Based Motion Sensor. Differentiate the vector closure expression:
$\frac{d}{dt}\left( r^E + R\,c^B + R\,C\,d^C \right) = 0$
Apparent motion of the feature point on the image plane (motion sensor):
$\dot p^C = -M\,C^T \left( R^T v_B^E + \omega^B \times (c^B + C\,d^C) \right)$
which couples attitude, linear velocity and angular velocity. [Figure: stereo geometry with cameras $C$, $C'$, baseline $b$, focal length $f$, feature point $P$.]
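A minimal sketch of this motion-sensor model, evaluating the apparent feature velocity from the vehicle states. The projection Jacobian M and the rotation matrices are assumed given, and all names are illustrative:

```python
import numpy as np

def apparent_feature_velocity(M, C, R, v_E, omega_B, c_B, d_C):
    """Predicted apparent velocity of a tracked feature point on the image
    plane, from the differentiated vector closure:
        p_dot = -M C^T ( R^T v_E + omega_B x (c_B + C d_C) )
    M: projection Jacobian (2x3), C: camera-to-body rotation,
    R: body-to-earth attitude, v_E: vehicle velocity in the earth frame,
    omega_B: body angular velocity, c_B: camera offset in the body frame,
    d_C: feature point position in the camera frame."""
    return -M @ C.T @ (R.T @ v_E + np.cross(omega_B, c_B + C @ d_C))
```

With identity rotations and zero angular velocity, the apparent motion reduces to the (negated, projected) vehicle velocity, which is a quick sanity check on the signs.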

11 Vision-Based Motion Sensor.
1. For all tracked feature points, write the motion sensor equation. This defines a new output of the vehicle states:
$y := (v_G^{E\,T}, r_{OG}^{E\,T}, h, m^{B\,T}, \ldots, d(t_{k+1})^{C_{k+1}\,T}, d(t_{k+1})^{C'_{k+1}\,T}, \ldots)^T$
2. Measure the apparent motion of each feature point (the measured apparent velocity due to vehicle motion). This defines a new augmented measurement vector:
$z := (v_{\mathrm{gps}}^T, r_{\mathrm{gps}}^T, h_{\mathrm{sonar}}, m_{\mathrm{magn}}^T, \ldots, d_{\mathrm{vision}}^T, d'^{\,T}_{\mathrm{vision}}, \ldots)^T$
i.e. GPS, gyro, accelerometer, magnetometer and altimeter readings plus two vision sensors (left and right cameras) per tracked feature point.
3. Fuse in parallel with all other sensors using Kalman filtering.
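The augmentation amounts to concatenating the classical readings with one image measurement per camera per tracked point; a minimal sketch under that assumption (names illustrative):

```python
import numpy as np

def augment_measurements(z_classical, vision_points):
    """Stack the classical measurement vector (GPS, sonar, magnetometer, ...)
    with one 2-D image measurement per camera per tracked feature point.
    vision_points: list of (left_measurement, right_measurement) pairs."""
    parts = [np.asarray(z_classical, dtype=float)]
    for d_left, d_right in vision_points:      # one (left, right) pair per point
        parts.append(np.asarray(d_left, dtype=float))
        parts.append(np.asarray(d_right, dtype=float))
    return np.concatenate(parts)
```

Because points come and go between frames, the augmented vector (and the corresponding output Jacobian) changes size at every step, which the fusion filter must accommodate.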

12 Outlier Rejection. Outlier: a point which is not fixed with respect to the scene, or a false positive of the KLT tracking algorithm. Outliers give false information on the state of motion, so they must be discarded from the process. Compare the apparent point velocity predicted from the estimated vehicle motion with the measured apparent velocity: drop the tracked point if the two vectors differ too much in length and direction. Two-stage rejection:
1. KLT internal
2. Vehicle motion compatibility check
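The compatibility check can be sketched as a length-and-direction test between the predicted and measured apparent velocities. The thresholds below are illustrative, not the authors' values:

```python
import numpy as np

def motion_compatible(p_dot_predicted, p_dot_measured,
                      max_len_ratio=1.5, min_cos=0.9):
    """Keep a tracked point only if the measured and predicted apparent
    velocities agree in length and direction (thresholds illustrative)."""
    lp = np.linalg.norm(p_dot_predicted)
    lm = np.linalg.norm(p_dot_measured)
    if lp == 0.0 or lm == 0.0:
        return lp == lm                        # both still: compatible
    ratio = max(lp, lm) / min(lp, lm)          # length agreement
    cos = np.dot(p_dot_predicted, p_dot_measured) / (lp * lm)  # direction agreement
    return ratio <= max_len_ratio and cos >= min_cos
```

A point moving opposite to, or much faster than, the prediction fails the test and is dropped before the fusion step.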

13 Examples: Pirouette Around Point Cloud. Cloud of about 100 points; temporary loss of GPS signal (for 100 s < t < 200 s). To show conservative results: only <= 20 tracked points at each instant of time, and vision sensors operating at 1 Hz.

14 Examples: Pirouette Around Point Cloud. [Plot: estimation error vs. time, with filter warm-up and the beginning and end of the GPS signal loss marked; classical non-vision-based IMU vs. vision-based IMU.] Remark: no evident effect of GPS loss on state estimation for the vision-augmented IMU.

15 Examples: Flight in a Village. Scene environment and image acquisition based on the Gazebo simulator. Rectangular flight path in a village at 2 m altitude. Three loops:
1. With GPS
2. Without GPS
3. With GPS

16 Examples: Flight in a Village. [Plots: velocity and attitude estimates vs. time, with filter warm-up and the beginning and end of the GPS signal loss marked.] Some degradation of velocity estimates without GPS; no evident effect of GPS loss on attitude estimation.

17 Conclusions. Proposed a novel inertial navigation system using vision motion sensors; the basic concept was demonstrated in a high-fidelity virtual environment. Observed facts:
- Improved observability of the vehicle states
- No evident transients during loss and reacquisition of sensor signals
- Higher accuracy when close to objects and for an increasing number of tracked points
- Computational cost compatible with the on-board hardware (PC-104 Pentium III)
Outlook:
- Testing in the field
- Adaptive filtering: better robustness/tuning
- Recruitment of additional sensors (e.g. laser scanner)
[Photo: BumbleBee X3 camera.]

