A Road Sign Recognition System Based on a Dynamic Visual Model
C. Y. Fang, Department of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan, R.O.C.
C. S. Fuh, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, R.O.C.
S. W. Chen, Department of Computer Science and Information Engineering, National Taiwan Normal University, Taipei, Taiwan, R.O.C.
P. S. Yen, Department of Information and Computer Education, National Taiwan Normal University, Taipei, Taiwan, R.O.C.
Outline
- Introduction
- Dynamic visual model (DVM)
- Neural modules
- Road sign recognition system
- Experimental results
- Conclusions
Introduction -- DAS
Driver assistance systems (DAS): a means to improve driving safety
- Passive methods: seat-belts, airbags, anti-lock braking systems, and so on
- Active methods: DAS
Driving is a sophisticated process: the better the environmental information a driver receives, the more appropriate his/her expectations will be.
Introduction -- VDAS
Vision-based driver assistance systems (VDAS)
Advantages: high resolution, rich information
Applications: road border or lane marking detection, road sign recognition
Difficulties of VDAS:
- Weather and illumination
- Daytime and nighttime
- Vehicle motion and camera vibration
Subsystems of VDAS
- Road sign recognition system
- System to detect changes in driving environments
- System to detect motion of nearby vehicles
- Lane marking detection
- Obstacle recognition
- Drowsy driver detection
- …
Introduction -- DVM
DVM: dynamic visual model
A computational model for visual analysis using video sequences as input data
Two ways to develop a visual model:
- Biological principles
- Engineering principles
Building blocks: artificial neural networks
Dynamic Visual Model (block diagram)
- Sensory component: data transduction and information acquisition from video images; the STA neural module produces focuses of attention
- Perceptual component: feature detection (CART neural module) and pattern extraction, yielding spatial-temporal information and categorical features
- Conceptual component: the CHAM neural module maps patterns to a category and an action, supported by episodic memory
Human Visual Process
Physical stimuli → Transducer (data compression) → Sensory analyzer (low-level feature extraction) → Perceptual analyzer (high-level feature extraction) → Conceptual analyzer (classification and recognition)
Each stage accepts its own class of input stimuli.
Neural Modules
- Spatial-temporal attention (STA) neural module
- Configurable adaptive resonance theory (CART) neural module
- Configurable heteroassociative memory (CHAM) neural module
STA Neural Network (1)
Architecture (figure): an input layer of neurons n_j receiving stimuli x_j, connected through weights w_ij to an output (attention) layer of neurons n_i, n_k with activations a_i, a_k; connections from the input layer are excitatory, while lateral connections within the attention layer are inhibitory.
STA Neural Network (2)
The input to attention neuron n_i due to input stimuli x (equation elided in the slide): the linking strengths w_kj between corresponding neurons of the input layer (n_j) and the attention layer (n_k) fall off with the distance r_k as a Gaussian function G.
STA Neural Network (3)
The input to attention neuron n_i due to lateral interaction (equation elided in the slide): a "Mexican-hat" function of lateral distance, i.e. excitatory interaction with nearby neurons and inhibitory interaction with more distant ones.
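The slide leaves the interaction profile to the figure; a common way to realize a "Mexican-hat" shape is a difference of two Gaussians. The specific kernel and its constants below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mexican_hat(distance, sigma_e=1.0, sigma_i=3.0, amp_e=2.0, amp_i=1.0):
    """Difference-of-Gaussians sketch of the Mexican-hat lateral interaction:
    a narrow excitatory Gaussian minus a wider inhibitory one, so nearby
    neurons are excited and more distant ones inhibited."""
    return (amp_e * np.exp(-distance**2 / (2 * sigma_e**2))
            - amp_i * np.exp(-distance**2 / (2 * sigma_i**2)))

kernel = mexican_hat(np.arange(10.0))  # positive near 0, negative farther out
```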
STA Neural Network (4)
The net input to attention neuron n_i (equation elided in the slide) combines the stimulus-driven input, the lateral interaction, and a threshold that limits the effects of noise, where d is a decay constant with -1 < d < 0.
STA Neural Network (5)
(Figure) The activation of an attention neuron in response to a stimulus: the activation rises while the stimulus is present and decays after it is removed (axis labels t, p, pd, and 1 in the original plot).
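The activation curve in the figure can be sketched as a simple leaky accumulator. The update rule, gain, and saturation level below are illustrative assumptions, chosen only to be consistent with the decay constant d (-1 < d < 0) from the previous slide:

```python
def attention_activation(stimulus, d=-0.2, gain=0.3, a_max=1.0):
    """Activation a rises toward the saturation level while the stimulus is
    present and decays geometrically (factor 1 + d) once it disappears."""
    a, trace = 0.0, []
    for s in stimulus:
        a = max(0.0, min(a_max, (1 + d) * a + gain * s))
        trace.append(a)
    return trace

trace = attention_activation([1] * 10 + [0] * 10)  # ramp up, then decay
```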
ART2 Neural Network (1)
CART architecture (figure): an attentional subsystem containing the input representation field F1 (sublayers w, x, u, v, p, q), the category representation field F2 (activity y), and gain-control units G; an orienting subsystem (node r) sends a reset signal to F2; i is the input vector and S a signal generator; "+" and "-" mark excitatory and inhibitory connections.
ART2 Neural Network (2)
The activities on each of the six sublayers of F1, where I is the input pattern and the Jth node on F2 is the winner. The slide's equations are elided; the standard ART2 sublayer equations are:
w_i = I_i + a*u_i
x_i = w_i / (e + ||w||)
v_i = f(x_i) + b*f(q_i)
u_i = v_i / (e + ||v||)
p_i = u_i + d*z_iJ (z_iJ: top-down weight from the winning F2 node)
q_i = p_i / (e + ||p||)
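Assuming the standard ART2 sublayer equations, one F1 settling loop can be sketched as follows; the parameter values and iteration count are illustrative defaults, not the paper's:

```python
import numpy as np

def f(x, theta=0.1):
    # ART2 signal function: suppress sub-threshold activity.
    return np.where(x >= theta, x, 0.0)

def f1_settle(I, zJ=None, a=10.0, b=10.0, d=0.9, e=1e-7, theta=0.1, iters=5):
    """Iterate the six F1 sublayers until they (approximately) settle.
    I: input pattern; zJ: top-down weights of the winning F2 node J
    (None before any F2 node is active)."""
    u = q = np.zeros_like(I, dtype=float)
    for _ in range(iters):
        w = I + a * u                          # w: input plus feedback from u
        x = w / (e + np.linalg.norm(w))        # x: normalized w
        v = f(x, theta) + b * f(q, theta)      # v: nonlinear combination
        u = v / (e + np.linalg.norm(v))        # u: normalized v
        p = u if zJ is None else u + d * zJ    # p: add top-down expectation
        q = p / (e + np.linalg.norm(p))        # q: normalized p
    return u, p
```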
ART2 Neural Network (3)
Initial weights (values elided in the slide): top-down weights initialized to zero; bottom-up weights initialized small (in standard ART2, at most 1/((1-d)*sqrt(M)) for an M-dimensional input) so an uncommitted F2 node cannot win over a committed one. Parameter values: elided.
HAM Neural Network (1)
CHAM architecture (figure): an input layer receiving stimuli x_j, connected through excitatory weights w_ij to an output (competitive) layer of neurons v_1, v_2, …, v_n.
HAM Neural Network (2)
The input to neuron n_i due to input stimuli x (equation elided in the slide); n_c denotes the winner after the competition.
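The competition on the output layer can be sketched as winner-take-all recall: each output neuron's net input is the inner product of its weight vector with the stimulus, and the neuron n_c with the largest net input wins. The stored weight matrix below is a toy assumption for illustration:

```python
import numpy as np

def cham_recall(W, x):
    """Winner-take-all recall: the net input of output neuron i is w_i . x;
    the index c of the largest net input identifies the winner n_c."""
    net = W @ x
    return int(np.argmax(net)), net

# Toy memory: two stored key patterns as weight rows.
W = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
c, _ = cham_recall(W, np.array([0.1, 0.9, 0.0]))  # noisy version of pattern 1
```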
Road Sign Recognition System
Objectives:
- Obtain information about the road
- Warn drivers
- Enhance traffic safety
- Support other subsystems
Problems
Examples of difficult conditions: backlighting ("contrary light"), signs side by side, camera shaking, and occlusion.
Information Acquisition
- Color information (example: red pixels)
- Shape information (example: red color edges)
Results of STA Neural Module — Adding Pre-attention
Locate Road Signs — Connected Components
Categorical Feature Extraction
- Normalization: 50×50 pixels
- Background pixels removed
- Features: horizontal color projections of 50 elements each for red, green, blue, orange, and white/black
- Total: 250 elements per feature vector
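The 250-element vector can be sketched as five horizontal projections over a color-label image; the label encoding below (0 = background, 1-5 = red, green, blue, orange, white/black) is an assumption for illustration:

```python
import numpy as np

def categorical_features(sign):
    """Build the 250-element feature vector from a normalized 50x50 sign.
    Each of the five colors contributes a horizontal projection: the count
    of that color's pixels in each of the 50 rows."""
    assert sign.shape == (50, 50)
    projections = [(sign == label).sum(axis=1) for label in (1, 2, 3, 4, 5)]
    return np.concatenate(projections).astype(float)

vec = categorical_features(np.zeros((50, 50), dtype=int))  # background only
```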
Conceptual Component — Classification Results of the CART (training set and test set)
Conceptual Component — Training and Test Patterns for the CHAM
Conceptual Component — Training and Test Patterns for the CHAM (cont.)
Conceptual Component — Additional Training Patterns for the CHAM
Experimental Results of the CHAM
Experimental Results
Other Examples
Discussion
Vehicle and camcorder vibration causes incorrect recognitions.
(Figure) Input patterns, recognition results, and the correct patterns.
Conclusions (1)
- Test data: 21 sequences
- Detection rate (CART): 99%; misdetection rate: 1% (11 frames)
- Recognition rate (CHAM): 85% of detected road signs
- Since the system outputs only one result per input sequence, this rate is sufficient to recognize road signs correctly.
Conclusions (2)
- A neural-based dynamic visual model with three major components: sensory, perceptual, and conceptual
- Future research: potential applications, improvement of the DVM structure, DVM implementation