Introduction of the Mobility Laboratory & Collaboration with Caltech. Noriko Shimomura, Nissan Mobility Laboratory.

Presentation transcript:

Introduction of the Mobility Laboratory & Collaboration with Caltech. Noriko Shimomura, Nissan Mobility Laboratory

Contents / Objective of this presentation
1. Mobility Laboratory & our aims
2. Examples of our research
3. Collaboration with Caltech by Sep.
Objective: introduce Nissan's research and needs, and build a good collaboration by Sep.

Mobility Laboratory & our aims
Mobility Laboratory (sensor, controller, alarm):
- Vehicle control
- Human-machine interface
- Object detection, road environment recognition
Our aims:
- Reducing traffic accidents
- Providing new driving assistance systems
- Improving autonomous vehicle technology

Examples of our research
1. Forward environment recognition using laser radar and camera
2. Nighttime driving support system using infra-red camera

Forward environment recognition: sensor configuration
Figure: camera and scanning laser radar mounted on the vehicle, with coordinate axes X, Y, Z, the axis of the lens, and the radar scan area.

Example of Observed Sensor Data

Flowchart
(1) Camera: lane marker recognition
(2) Scanning laser radar (SLR): object detection & distinction, via grouping and stationary/moving object distinction, yielding the preceding vehicle, other vehicles, and structures on the road (signs, delineators)

Outline of Lane Marker Recognition
Figure: camera geometry with image axes X_I, Y_I, focal length f, the axis of the lens along Z, a road point P(x, y, z), camera height Dy above the road surface, lateral offset Dx, lane width W, and edge positions on the left line (i = 0) and right line (i = 1), with an image example.
Road model:
X = (ρ/2)·Z² + φ·Z − Dx + i·W  (i = 0, 1)
Y = ψ·Z + Dy
Camera position: Dx, Dy, θ, φ, ψ (θ = 0).
ρ, φ, Dx, Dy, ψ are calculated from the edge positions by regression analysis.
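Since the road model is linear in its parameters, the regression step can be illustrated with ordinary least squares. This is a minimal sketch, not Nissan's implementation; it assumes the edge points have already been back-projected to road coordinates, and the function and variable names are hypothetical.

```python
import numpy as np

def fit_road_model(Z, X, Y, i):
    """Z: distance ahead, X: lateral position, Y: height of edge points;
    i: 0 for the left line, 1 for the right line (one entry per point)."""
    Z, X, Y, i = map(np.asarray, (Z, X, Y, i))

    # X = (rho/2)*Z^2 + phi*Z - Dx + i*W  is linear in [rho, phi, Dx, W]
    A = np.column_stack([Z**2 / 2.0, Z, -np.ones_like(Z), i])
    rho, phi, Dx, W = np.linalg.lstsq(A, X, rcond=None)[0]

    # Y = psi*Z + Dy  is linear in [psi, Dy]
    B = np.column_stack([Z, np.ones_like(Z)])
    psi, Dy = np.linalg.lstsq(B, Y, rcond=None)[0]

    return rho, phi, Dx, W, psi, Dy
```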

Flowchart & edge point detection
1. Image input
2. Detection region determination (using the parameters from the previous image)
3. Edge point detection (edge image by Sobel operator; edge points on the lane markers)
4. Lane marker detection
5. Parameter estimation (curvature, pitch angle, yaw angle, lateral position, bounce)
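A minimal sketch of the edge-point step, assuming OpenCV is available: a Sobel filter is applied inside a detection region predicted from the previous frame, and strong gradient pixels are kept as candidate edge points. The region format and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def edge_points_in_region(gray, region, thresh=60):
    """gray: uint8 grayscale frame; region: (x0, y0, x1, y1) from the previous frame."""
    x0, y0, x1, y1 = region
    roi = gray[y0:y1, x0:x1]
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient (vertical edges)
    ys, xs = np.nonzero(np.abs(gx) > thresh)         # keep strong edge responses
    return np.column_stack([xs + x0, ys + y0])       # back to full-image coordinates
```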

Recognition result

Recognition result (rainy day)

Flowchart (continued)
(1) Camera: lane marker recognition
(2) Scanning laser radar (SLR): object detection & distinction, via grouping and stationary/moving object distinction, yielding the preceding vehicle, other vehicles, and structures on the road (signs, delineators)

Object Detection by SLR
Figure: detected points in the X-Z plane around the SLR, grouped (Grouping 1, Grouping 2) into delineators, vehicles, and an overhead sign.
Grouping method: points are merged when they are
- located closely,
- at the same distance,
- in the same direction,
as sketched below.
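A minimal sketch of such a grouping pass over scan points ordered by scan angle; the gap thresholds are assumptions, not Nissan's values.

```python
import numpy as np

def group_scan_points(points, dx_max=1.0, dz_max=2.0):
    """points: (N, 2) array of (X, Z) laser-radar returns, ordered by scan angle."""
    if len(points) == 0:
        return []
    groups, current = [], [points[0]]
    for prev, p in zip(points[:-1], points[1:]):
        # merge consecutive returns that are close laterally and at a similar range
        if abs(p[0] - prev[0]) < dx_max and abs(p[1] - prev[1]) < dz_max:
            current.append(p)
        else:
            groups.append(np.array(current))
            current = [p]
    groups.append(np.array(current))
    return groups
```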

Solution to the Difficulty: Delineator Distinction
Tagging → tag check (figure: a ±Δx, ±Δz window in the X-Z plane around each tag).
Tagged objects are detected along the lane; their relative speed is not estimated correctly.
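One way to read the tag check is as a test of whether an object lies on the lane boundary predicted by the road model; the sketch below is that interpretation only, with an assumed lateral tolerance, not the method on the slide.

```python
def is_delineator_candidate(obj_x, obj_z, rho, phi, Dx, W, side=1, dx_tol=0.5):
    """obj_x, obj_z: object position in road coordinates; road-model parameters as above."""
    lane_x = (rho / 2.0) * obj_z**2 + phi * obj_z - Dx + side * W  # lane boundary at obj_z
    return abs(obj_x - lane_x) < dx_tol
```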

Object Distinction
Objects are classified into the preceding vehicle, other vehicles, and road structures, based on:
- stationary/moving distinction
- delineator recognition
- width of objects
- relative position to the lanes
(a rule-based sketch follows)
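A minimal rule-based sketch of how these cues could be combined; the width threshold and the in-lane test are illustrative assumptions rather than the rules used on the slide.

```python
def classify_object(is_moving, is_delineator, width_m, in_ego_lane):
    """Combine the cues listed on the slide into one of three coarse classes."""
    if is_delineator or (not is_moving and width_m < 0.5):
        return "road structure"        # delineators, signs, narrow static objects
    if is_moving and in_ego_lane:
        return "preceding vehicle"     # moving object in the ego lane
    if is_moving:
        return "vehicle"               # moving object in another lane
    return "road structure"
```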

Detection and discrimination with relative speed and grouping (before applying the proposed method) -- preceding vehicle, other vehicles, road structures --

Detection and distinction result with the proposed method -- Preceding vehicle, other vehicles, road structures --

Detection and distinction result with the proposed method -- Preceding vehicle, other vehicles, road structures --

Examples of our research
1. Forward environment recognition using laser radar and camera
2. Nighttime driving support system using infra-red camera

Nighttime driving support system: Adaptive Front-lighting System with Infra-Red camera (IR-AFS)
The IR (temperature) image is used to illuminate pedestrians with the Adaptive Front-lighting System, so the driver can easily find a pedestrian at night, including objects that may be pedestrians.

Effect of IR-AFS

Difficulty in IR-based pedestrian detection
Ordinary approach with an IR camera: threshold the IR image to the human temperature range (25-37 °C) to obtain a binary image.
On a summer night (27 °C), a large area of the scene has the same temperature as a human, so the binary image is not selective.
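A minimal sketch of that ordinary thresholding step, assuming per-pixel temperatures are already available in degrees Celsius (the conversion from raw IR values is camera specific and omitted here):

```python
import numpy as np

def human_temperature_mask(ir_celsius, low=25.0, high=37.0):
    """ir_celsius: 2-D array of per-pixel temperatures; returns a boolean binary image."""
    return (ir_celsius >= low) & (ir_celsius <= high)   # True = human-temperature pixel
```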

Our Aim
- Season-independent pedestrian detection algorithm (making use of information other than temperature), available in any season
- Effective nighttime driving support (it does not disturb the driver even if there are some false detections)

Features in detection
- There is no texture in the IR image.
- Clothes show many wrinkles and few straight lines.
- Artificial objects (cars, buildings) show few wrinkles.
→ Wrinkles and rough surfaces activate corner filters (strong response on pedestrians, weak on artificial objects); see the sketch below.
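A minimal sketch of the corner-response cue using the Harris detector, assuming OpenCV; the filter parameters and the density threshold are illustrative assumptions, not the specific corner filters described on the slide.

```python
import cv2
import numpy as np

def corner_density(ir_gray, window):
    """ir_gray: uint8 IR image; window: (x0, y0, x1, y1) candidate region."""
    x0, y0, x1, y1 = window
    patch = np.float32(ir_gray[y0:y1, x0:x1])
    response = cv2.cornerHarris(patch, blockSize=2, ksize=3, k=0.04)
    strong = response > 0.01 * response.max()   # keep only strong corner responses
    return strong.mean()                        # fraction of corner pixels in the window

# A window with many wrinkles (a pedestrian) yields a high density; a smooth
# artificial surface yields a low one, so a threshold can separate the two.
```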

Explanation of our Algorithm (video)
Legend: illumination target, detected pedestrian, feature point.

Collaboration with Caltech
1. Caltech's technologies
2. Nissan's needs: recognition methods that we have to improve
Collaboration with the Vision Lab, including the extension term: we want to make the collaboration better.

Caltech's technologies
Nissan is interested in and focuses on:
- Probabilistic models (constellation model, etc.)
- Learning methods
- Feature detection (SIFT, Harris, etc.)

Nissan's needs and requirements
- Pedestrian detection
- Road region recognition (without lane markers)
- Improved lane marker recognition (available for many types of lane markers)

pedestrian detection

Improved lane marker recognition (available for many types of lane markers, e.g. Botts' dots)

road region recognition (without lane marker)

Idea for collaboration w/ no-cost extension
- Caltech: pedestrian detection
- Nissan: road region detection
Requirements for pedestrian detection (see the evaluation sketch below):
- Accuracy: more than 75%
- False alarm: less than 5%
- Minimum target size: 10 × 20
- Processing time: up to 500 ms (e.g. 100 ms)
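A minimal sketch of how results could be checked against the stated targets; the counting conventions (per ground-truth pedestrian, per frame) are assumptions, since the slides do not define them.

```python
def meets_requirements(true_positives, ground_truth_total, false_positives, frames,
                       det_target=0.75, fa_target=0.05):
    """Return True if the detector satisfies the accuracy and false-alarm targets."""
    detection_rate = true_positives / max(ground_truth_total, 1)   # fraction of pedestrians found
    false_alarm_rate = false_positives / max(frames, 1)            # false detections per frame
    return detection_rate >= det_target and false_alarm_rate <= fa_target
```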

Schedule and Target (through Sep.)
Dataset (provided by Nissan; AVI, VGA):
- First dataset: by the end of Aug.
- Second dataset: in Jan. 2008, for validation
Deliverables in Sep.:
- Documents of the proposed method
- Results of the experiment, detection ratio
Mid-term report & information exchange (Jan. 2008):
- Mid-term report (minimum target size, processing time, etc.)
- Additional dataset provided for validation
Timeline: brainstorming and start of new-method development (Sep. 07) → develop & improve the method (Jan. 08) → validation using the dataset (Sep. 09); targets: 75% detection, minimum target size, ROC.

Deliverables
- End of Sep. 2007: report written by Seigo Watanabe, with the signature of Dr. Perona on the first page
- Jan.: mid-term report written by a postdoc at Caltech, with more concrete targets (minimum target size, etc.)
- End of Sep.: final report written by a postdoc at Caltech, with documents of the proposed method and validation results

Road Model
X = (ρ/2)·Z² + φ·Z − Dx + i·W  (i = 0, 1)
Y = ψ·Z + Dy
ρ: road curvature, φ: yaw angle, Dx: lateral position, W: lane width, ψ: pitch angle, Dy: camera height (bounce).
ρ, φ, Dx, Dy, ψ are calculated from the edge positions by regression analysis.