1 Team Members: Ming-Chun Chang, Lungisa Matshoba, Steven Preston. Supervisors: Dr James Gain, Dr Patrick Marais

2 PROJECT OBJECTIVES Create an application in which 3D objects can be manipulated using hand gestures. The interface must be simple and intuitive to use. Translation, rotation and selection of objects must be possible.

3 The 3D objects to be manipulated will be molecules. Hand gestures will be captured using a web camera. A set of hand gestures will be specified, where each gesture maps to a specific task.
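
To make the gesture-to-task idea concrete, here is a minimal sketch of such a mapping in Python. The gesture names and the actions they map to are hypothetical placeholders, not the project's final gesture set.

```python
# Minimal sketch of a gesture-to-task mapping; the gesture names and the
# actions they map to are hypothetical placeholders, not the final set.
from enum import Enum, auto

class Gesture(Enum):
    FIST = auto()
    OPEN_HAND = auto()
    POINT = auto()
    TWIST = auto()

# Each recognised gesture corresponds to one manipulation task on the molecule.
GESTURE_TO_TASK = {
    Gesture.FIST: "select",
    Gesture.OPEN_HAND: "deselect",
    Gesture.POINT: "translate",
    Gesture.TWIST: "rotate",
}
```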

4 Project decomposed into three phases: 2D Image Processing (M. Chang), Data Analysis (S. Preston) and Front-end Visualisation (L. Matshoba)

5 System Architecture

6 AIMS –Feature extraction from a sequence of hand images. –Elimination of noise. –Adequate performance in real time.

7 INPUT –A Logitech webcam is used as the capture device. –A sequence of hand images is captured by the webcam. –It is capable of capturing 30 frames per second.
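
As an illustration of this capture step (the slides do not show the project's actual capture code), a short OpenCV sketch in Python might look as follows; the camera index and the 24-frame window are assumptions.

```python
# Illustrative frame-capture sketch using OpenCV; camera index 0 and the
# 24-frame window are assumptions, not taken from the project code.
import cv2

cap = cv2.VideoCapture(0)        # first attached camera, e.g. the Logitech webcam
cap.set(cv2.CAP_PROP_FPS, 30)    # request up to 30 frames per second

frames = []
while len(frames) < 24:          # gather a short sequence of hand images
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)

cap.release()
```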

8 IMPLEMENTATION –Segmentation of the image to isolate the hand. –Image smoothing and filtering to eliminate noise. –Thresholding of the image to isolate the desired features.
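
A minimal per-frame sketch of these steps, assuming an OpenCV-style pipeline with a simple HSV skin-colour segmentation; the colour range and kernel size are illustrative, not the project's tuned values.

```python
# Sketch of the smoothing, segmentation and thresholding steps described
# above; the HSV skin-colour range and kernel size are illustrative only.
import cv2
import numpy as np

def extract_hand_mask(frame_bgr):
    # Smooth the frame to suppress sensor noise before segmentation.
    blurred = cv2.GaussianBlur(frame_bgr, (5, 5), 0)
    # Segment the hand with a simple skin-colour threshold in HSV space.
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
    # Morphological opening removes small speckles of background noise.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```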

9 OUTPUT –Sets of features extracted from the sequence of hand images. –Basic representation of the hand structure.

10 CHALLENGES –Efficient algorithm implementation capable of real-time processing. –Removal of background noise. –Precise and accurate identification of hand features.

11 SUCCESS FACTORS –Processing of 24 frames of hand images per second. –95% accuracy of feature extraction. –Elimination of noise.

12 AIMS – To analyse the data provided by the image processing phase. – Determine which hand gesture the user has carried out. Two training methods will be used: – Neural Network – Principal Component Analysis (PCA)
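
For illustration only, the two approaches could be prototyped on extracted feature vectors as below, using scikit-learn as a stand-in; the data shapes, component count and network size are assumptions rather than the project's design.

```python
# Illustrative prototypes of the two training methods on extracted features,
# using scikit-learn as a stand-in; all sizes here are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# X: one row per gesture sample (features from 24 frames), y: gesture labels.
X = np.random.rand(200, 24 * 8)
y = np.random.randint(0, 4, size=200)

# Approach 1: reduce dimensionality with PCA before classifying.
pca = PCA(n_components=10).fit(X)
X_reduced = pca.transform(X)

# Approach 2: a small feed-forward neural network trained on the features.
nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
```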

13 A pattern classification problem. – A common application of training techniques such as neural networks and PCA. Many similar examples suggest that it is feasible: – Face recognition using neural networks.

14 Input is the data extracted from the 2D images. The Logitech webcam captures at most 30 frames per second. The input will consist of a representation of 24 frames.
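
One plausible way to assemble such a 24-frame representation is to concatenate the per-frame features into a single input vector, as in the sketch below; FEATURES_PER_FRAME is a hypothetical size.

```python
# Sketch of stacking per-frame features into one 24-frame input vector;
# FEATURES_PER_FRAME is a hypothetical size, not the project's value.
import numpy as np

FRAMES_PER_GESTURE = 24
FEATURES_PER_FRAME = 8   # e.g. fingertip positions, centroid, hand area

def build_input(frame_features):
    """frame_features: list of 24 arrays, each of length FEATURES_PER_FRAME."""
    assert len(frame_features) == FRAMES_PER_GESTURE
    return np.concatenate(frame_features)   # shape: (24 * FEATURES_PER_FRAME,)
```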

15 PCA and the neural network will require a training data set. Hundreds of inputs will be required. This is not likely to pose a problem, as data collection requires no expense and few resources.

16 Output will be provided to the front-end visualisation phase. Simple output: one variable indicating the gesture that has been performed, and possibly a speed variable as well.
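
The output described above could be represented as simply as the following sketch; the field names are illustrative, not the project's actual interface.

```python
# Sketch of the simple output passed to the front-end visualisation phase:
# one value naming the gesture, plus an optional speed. Names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RecognitionResult:
    gesture: str                    # e.g. "rotate", "translate", "select"
    speed: Optional[float] = None   # optional speed of the motion, if estimated
```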

17 PROBLEM Input data is captured from a single, fixed camera – thus the input data is in 2D form. But the user performs gestures in the 3D world. Tilting and rotating of the hand could make it difficult to detect the correct gesture.

18 SOLUTION An appropriate set of gestures and a well-designed neural network are needed. OTHER PROBLEMS? Speed and efficiency are not a concern.

19 Whether the neural network implementation recognises at least 95% of hand gestures correctly. Whether the PCA implementation recognises at least 95% of hand gestures correctly. Whether the hand gestures recognised agree with at least 95% of those recognised by the Polhemus tracker.
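
The third criterion amounts to an agreement rate between the gestures recognised from the 2D pipeline and those reported by the Polhemus tracker; a minimal sketch of that check follows.

```python
# Sketch of the agreement check: compare gestures recognised by an
# implementation against those reported by the Polhemus tracker.
def agreement_rate(predicted, reference):
    """Fraction of gestures on which the two sequences agree."""
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Success criterion (illustrative): agreement_rate(recognised, polhemus) >= 0.95
```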

20 AIMS -To produce a usable application for the gesture recognition interface. -To test the usability of the interface for a real-world application. -To create a testing system to compare the 2D and 3D gesture-driven interfaces. -To test the usability of the gesture system.

21 ■The front-end visualisation will deal with two main kinds of input. ■Input from the 2D hand gestures as extracted by the Data Analysis Phase. ■Input from the 3D hand gestures – assumed to be more accurate. ■A metric will be generated to measure the gesture recognition capabilities of the 2D hand gesture extraction.

22 ■Visual feedback of the system. ■An accuracy metric measuring the difference between 2D and 3D gesture recognition.

23 ■Interface for viewing 3D molecule structures. ■Different molecule levels of detail offered – selected regions shown in more detail. ■Rotation of the visible section. ■Seamless changes between ‘Ribbon’ and ‘Ball & Stick’ representations.

24 Whether the system can run in real time. Accuracy of the data extracted from the 2D images. Whether 95% of hand gestures are recognised correctly. Whether the motion-capture and learning-technique implementations agree on 95% of gestures.

25 Not an entirely new concept. At the very least, an application will be built in which basic transformations can be performed. The effectiveness of the learning-techniques approach will be compared against the motion-capture approach.

