




1 Student Gesture Recognition System in Classroom 2.0 Chiung-Yao Fang, Min-Han Kuo, Greg-C Lee, and Sei-Wang Chen Department of Computer Science and Information Engineering National Taiwan Normal University 1

2 Outline Introduction System Flow Chart ◦ Preprocessing Stage ◦ Improved Region Growing Method ◦ Object Segmentation ◦ Gesture Analysis Experimental Results Conclusions and Future Work 2

3 Introduction Smart Classroom ◦ A smart classroom usually refers to a room with an instructor station, including a computer and audiovisual equipment. ◦ This equipment allows the instructor to teach using a wide variety of media. Classroom 2.0 ◦ A new concept of a smart classroom ◦ It aims to create a future classroom environment with modern technologies (e.g., video processing and speech recognition) embedded in the process of teaching and learning. 3

4 Introduction Four systems are developed in Classroom 2.0: ◦ The Intelligent Roll-Call System  To recognize students who are present in the classroom ◦ The Intelligent Teaching Response System  To control the technologies used in the classroom, including the control of the software systems ◦ The Intelligent Content Retrieval System  To give feedback on the quality of an answer to a given question ◦ The Intelligent Classroom Exception Recognition System  To recognize and track the behaviors of students in a classroom and collect data for educational research 4

5 Introduction The student gesture recognition system ◦ A subsystem of the Intelligent Classroom Exception Recognition system ◦ Goal: To recognize the student gestures in a classroom ◦ The student gestures:  raising of hands to ask/answer questions  dozing off during the lecture  taking naps  standing up  sitting down 5

6 Introduction Assumptions: ◦ Preserve some flexibility in the focus and position of the camera ◦ Handle various lighting conditions ◦ Do not construct a background image 6 [Illustration: reflected light, fluorescent lights, windows]

7 System Flow Chart 7 Video sequence input Main line localization Motion detection Foreground pixel extraction Foreground pixel identification Improved region growing method Object segmentation Gesture analysis

8 Main Line Localization 8 Edge detection Output main line localization Main line identification Video sequence input Horizontal edge extraction Horizontal edge projection Main line candidate extraction
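A minimal Python sketch of this pipeline, assuming the "main lines" are the dominant horizontal edges in the frame (e.g., desk rows); the function name locate_main_lines, the edge threshold, and top_k are illustrative choices, not taken from the paper.

```python
import cv2
import numpy as np

def locate_main_lines(frame_bgr, edge_thresh=50, top_k=3):
    # Convert to grayscale before edge detection.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Horizontal edge extraction: the vertical gradient responds to horizontal lines.
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    horizontal_edges = np.abs(sobel_y) > edge_thresh

    # Horizontal edge projection: count edge pixels in every image row.
    row_projection = horizontal_edges.sum(axis=1)

    # Main line candidate extraction: keep the rows with the strongest projections.
    candidate_rows = np.argsort(row_projection)[::-1][:top_k]
    return sorted(candidate_rows.tolist())
```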


10 Motion Detection 10 I_{t-1}(p), I_t(p): the intensity values of a pixel p at time t-1 and t, respectively. M(p): the motion magnitude of pixel p.
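A minimal sketch of this step, assuming M(p) is the absolute frame difference |I_t(p) - I_{t-1}(p)|, a common choice; the exact formula shown on the original slide is not reproduced in this transcript.

```python
import numpy as np

def motion_magnitude(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    """Return M(p) = |I_t(p) - I_{t-1}(p)| for every pixel p of two grayscale frames."""
    diff = curr_gray.astype(np.int16) - prev_gray.astype(np.int16)
    return np.abs(diff).astype(np.uint8)
```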

11 Foreground Pixel Extraction 11 Accumulate the pixel counts from the input frame to form a histogram of the Hue and Cr values. Assume the background occupies larger regions than the foreground. The system normalizes and sorts the pixel counts in the histogram: the top 40% of pixels are classified as background pixels, and the bottom 5% are classified as candidate foreground pixels.
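A possible Python sketch of this step, assuming 8-bit Hue (from HSV) and Cr (from YCrCb) channels and the 40% / 5% cut-offs stated above; the bin count and the helper name classify_by_hue_cr are illustrative.

```python
import cv2
import numpy as np

def classify_by_hue_cr(frame_bgr, bg_fraction=0.40, fg_fraction=0.05, bins=32):
    # Hue from HSV and Cr from YCrCb, quantized into a 2-D Hue-Cr histogram.
    hue = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 0]      # Hue in [0, 180)
    cr = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 1]     # Cr in [0, 256)
    hue_bin = np.clip(hue.astype(np.int32) * bins // 180, 0, bins - 1)
    cr_bin = np.clip(cr.astype(np.int32) * bins // 256, 0, bins - 1)

    # Accumulate pixel counts per (Hue, Cr) bin.
    flat_bin = hue_bin.ravel() * bins + cr_bin.ravel()
    hist = np.zeros(bins * bins, dtype=np.int64)
    np.add.at(hist, flat_bin, 1)

    # Sort bins by pixel count: the most populated bins cover the background,
    # the least populated ones are candidates for the foreground.
    order = np.argsort(hist)[::-1]
    cumulative = np.cumsum(hist[order]) / hist.sum()
    bg_bins = order[cumulative <= bg_fraction]
    fg_bins = order[cumulative >= 1.0 - fg_fraction]

    pixel_bins = hue_bin * bins + cr_bin
    background_mask = np.isin(pixel_bins, bg_bins)
    foreground_mask = np.isin(pixel_bins, fg_bins)
    return background_mask, foreground_mask
```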

12 Foreground Pixel Identification 12 α, β, and γ: constants. F_t(p): the foreground-pixel probability of pixel p at time t. M(p): the normalized motion magnitude of pixel p. C(p): the normalized location value of pixel p in the sorted Hue-Cr histogram. F_{t-1}(p): the foreground-pixel probability of pixel p at time t-1.
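A minimal sketch of the update, assuming F_t(p) is a weighted combination of the three listed terms with weights α, β, and γ; the original formula is not shown in this transcript, so this functional form is an assumption.

```python
import numpy as np

def update_foreground_probability(M, C, F_prev, alpha=0.4, beta=0.4, gamma=0.2):
    """M, C, F_prev are arrays normalized to [0, 1]; returns F_t clipped to [0, 1].

    Assumed form: F_t(p) = alpha * M(p) + beta * C(p) + gamma * F_{t-1}(p).
    """
    F_t = alpha * M + beta * C + gamma * F_prev
    return np.clip(F_t, 0.0, 1.0)
```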

13 System Flow Chart 13 Sequential image input Main line localization Motion detection Foreground pixel extraction Foreground pixel identification Improved region growing method Object segmentation Gesture analysis

14 Improved Region Growing Method Main function ◦ select the foreground pixels between the maximum and minimum main lines ◦ set the selected pixels as the seeds to grow the foreground regions (call the RegionGrowing function)
Function Main() {
  Let r = 0;
  For each pixel x in the set of foreground pixels {
    If (the y-axis location of x is between the maximum and minimum main lines) {
      If (x does not belong to any labelled region) {
        Label the region number of x as r;
        RegionGrowing(x);
        r = r + 1;
      }
    }
  }
}

15 Improved Region Growing The RegionGrowing function ◦ Select a pixel from the foreground regions  All neighbouring pixels whose properties are similar to the selected pixel are classified into the same region.  If the neighbouring pixels of the selected pixel belong to the background regions, the selected pixel is set as a boundary pixel of the foreground regions, which stops the iteration. ◦ Repeat until all pixels in the foreground regions have been processed. 15
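A minimal Python sketch of the two functions above, assuming 4-connectivity and a simple grey-level similarity test; the similarity measure, the threshold, and the function name grow_regions are illustrative, not taken from the paper.

```python
from collections import deque
import numpy as np

def grow_regions(foreground_mask, gray, y_min, y_max, sim_thresh=15):
    """Grow labelled regions from foreground seeds located between the main lines."""
    h, w = foreground_mask.shape
    labels = np.full((h, w), -1, dtype=np.int32)   # -1 means "not yet labelled"
    next_label = 0

    for y in range(max(0, y_min), min(h, y_max + 1)):
        for x in range(w):
            if not foreground_mask[y, x] or labels[y, x] != -1:
                continue
            # Use this unlabelled foreground pixel as the seed of a new region.
            labels[y, x] = next_label
            queue = deque([(y, x)])
            while queue:
                cy, cx = queue.popleft()
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                        # Background neighbours mark the region boundary; similar
                        # foreground neighbours are absorbed into the region.
                        if foreground_mask[ny, nx] and \
                           abs(int(gray[ny, nx]) - int(gray[cy, cx])) < sim_thresh:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
            next_label += 1
    return labels, next_label
```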

16 Object Segmentation Weight initialization ◦ The weight of a link between nodes n and n' depends on the region areas of n and n' and on the distance between their means. ◦ The weight of the link will be large if the areas of nodes n and n' are large and the distance between the two nodes is short. ◦ The weight indicates the possibility that the two nodes belong to the same object. 16
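A minimal sketch consistent with the stated property (the weight grows with the two region areas and shrinks with the distance between their means); the exact functional form below is an assumption, not the paper's formula.

```python
import numpy as np

def init_link_weight(area_n, area_n2, mean_n, mean_n2, eps=1e-6):
    """Assumed form: weight proportional to the product of the areas divided by the distance."""
    distance = float(np.linalg.norm(np.asarray(mean_n) - np.asarray(mean_n2)))
    return (area_n * area_n2) / (distance + eps)
```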

17 Object Segmentation 17 Weight adaptation ◦ Increase the weight if the link connects two nodes that share the same neighbours: the possibility that these two nodes belong to the same object is high. ◦ Decrease the weight if the link connects two nodes that have no common neighbours: the possibility that these two nodes belong to the same object is low.
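A minimal sketch of this adaptation rule; the boost and damping factors are illustrative assumptions.

```python
def adapt_weight(weight, neighbours_n, neighbours_n2, boost=1.5, damp=0.5):
    """Strengthen links between nodes with common neighbours, weaken links without any."""
    common = set(neighbours_n) & set(neighbours_n2)
    return weight * boost if common else weight * damp
```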

18 Object Segmentation Select objects which contain motion pixels 18

19 Gesture Analysis The system divides the student gestures into six classes. ◦ These six classes can be regarded as states and constructed into a finite state machine.  Assume that the initial state of the finite automaton is the normal gesture.  Define 14 rules (corresponding to transitions a to n) to transition between the states.  Consider the following attributes:  The area, the center position, and the motion of the object  The areas, the center positions, and the colors of the regions belonging to the object 19

20 Gesture Analysis 20 [State diagram: states Normal, Raise the left hand, Raise the right hand, Raise two hands, Stand up, Lie prone, connected by transitions a to n] Rule a: If the center positions of some left regions of object i move upwards, and the area of object i increases and its center position rises, then student i may be raising his/her left hand.
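A minimal sketch of the finite state machine, using the six states from the diagram and a paraphrase of Rule a as the only encoded transition; the feature names in the dictionary are hypothetical placeholders, and the remaining rules (b to n) would be encoded in the same way.

```python
STATES = {"normal", "raise_left", "raise_right", "raise_two", "stand_up", "lie_prone"}

def next_state(state, features):
    """features: dict of per-object attributes (area change, region motion, etc.)."""
    if state == "normal":
        # Rule a (paraphrased): left regions move upwards and the object area grows.
        if features.get("left_regions_moving_up") and features.get("area_increasing"):
            return "raise_left"
    return state  # no rule fired: remain in the current state

# Usage sketch: start in the initial (normal) state and update per frame.
state = "normal"
state = next_state(state, {"left_regions_moving_up": True, "area_increasing": True})
```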

21 Experimental Results 21 An example: a student raises her right hand (input images, region growing, and object segmentation results)

22 Experimental Results 22 Raises her left hand Raises two hands

23 Experimental Results 23 Lies prone Stands up

24 Experimental Results 24

Gesture                 Total gestures   Correct recognitions   False positives   Accuracy rate
Raise the left hand     31               22                     8                 71%
Raise the right hand    33               24                     10                73%
Raise two hands         32               23                     5                 72%
Lie prone               32               25                     2                 78%
Stand up                31               21                     18                68%
Total                   159              115                    43                72%
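For reference, the accuracy rate in each row is the number of correct recognitions divided by the total number of gestures; for the overall total, 115 / 159 ≈ 0.723, consistent with the 72% reported above.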

25 Experimental Results 25 [Result frames labelled with the recognized gestures: Normal, Raise the right hand, Raise the left hand, Stand up, Raise two hands, Lie prone]

26 Conclusions and Future Work
Conclusions: We propose a student gesture analysis system for the lecture theatre, comprising:
- Main line localization
- Foreground pixel identification
- Region growing
- Object segmentation
- Gesture analysis
Future work:
- Combine prior knowledge
- Improve the rules of the finite automaton
- Use multiple PTZ cameras

27 Thank you for your attention! 27

