1 Advanced Topics in Virtual Reality. Tae Soo Yun, Dept. of Digital Contents, Dongseo University, Fall 2002. Based on notes from Soon Ki Jung (KNU) and Wohn (KAIST).

2 Table of Contents
 1. Introduction: What is VR?
 2. Psychological and Cognitive Issues
 3. VR System Anatomy
 4. Virtual Perception
 5. Interaction
 6. Virtual Worlds: Representation, Creation and Simulation
 7. Virtual Worlds: Rendering
 8. Networked VR Systems and Shared Virtual Environments
 9. Image-based Virtual Reality
 10. Augmented Reality

3 Basic model of perception & cognition (chap. 2) [block diagram: the natural environment is sensed by human sensors (H-sensor); sensing feeds perception, perception feeds cognition (which draws on knowledge), cognition drives motion control, and motion control drives human effectors (H-effector) to act on the environment]

4 Conceptual model of VR [block diagram: the same human loop of sensors, perception, cognition, motion control, and effectors, now coupled to a virtual environment through physical devices (P-sensor, P-effector) and logical devices (L-sensor, L-effector) for displacements, angles, and events; inside the virtual environment, an avatar and a virtual agent interact with virtual objects through V-sensors and V-effectors]

5 Functional diagram [block diagram linking the course topics: VW DB (Sec. 6-1), VW authoring (Sec. 6-2), simulation (Sec. 6-3,4), rendering (chap. 7), displaying (Sec. 3-4,5,6,7), sensing (Sec. 3-3), virtual perception (chap. 4), and interaction (chap. 5)]

6 Sensing and virtual perception [block diagram: sensing devices — video cameras, a microphone, and a glove — feed image processors, a signal processor, and a controller, which in turn drive the virtual-perception modules: face/facial expression recognition, body gesture recognition, hand gesture recognition, and speech recognition]

7 Chapter 4: Virtual Perception
 4-1. Hand Gesture Recognition
 4-2. Body Posture and Gesture Recognition
 4-3. Face and Facial Expression Recognition
 4-4. Gaze Tracking
 4-5. Speech Recognition

8 4-1. Hand Gesture Recognition
 1. Classification of Hand Movements
 2. Issues in Gestural Communication
 3. Recognition Technology

9 Examples
 • Pointing to real and abstract objects and concepts
 • Waving, saluting, praying (two flat hands up together)
 • Live-or-die decisions in the Roman amphitheater (thumb up/down)
 • Counting (fingers and/or hand)
 • Rejection (index finger up, moving left and right)
 • Appreciation (hand clapping)
 • Traffic control signs
 • Conducting an orchestra
 • Sign languages

10 1. Classification of Hand Movements
 • Semiotic: hand gestures that communicate meaningful information; they result from shared cultural experience
 • Ergotic (manipulative): movements that manipulate the physical world
 • Epistemic: tactile experience or haptic exploration
 ==> Hand gestures: movements that we make with our hands to express emotion or information, either instead of speaking or while we are speaking.

11 Hand Gestures
 • Kendon (1988), ordered by level of linguisticity:
 - spontaneous gesture (gesticulation)
 - language-like gestures
 - pantomimes
 - emblems
 - sign languages
 • McNeill (1992), further classification of gesticulation:
 - iconics
 - metaphorics
 - deictics
 - beats

12 Nespoulous and Lecours (1986)
 • By arbitrariness:
 - Arbitrary gestures: need to be learned
 - Mimetic gestures: common within a culture
 - Deictic gestures (specific / generic / function indication)
 • By function:
 - Quasi-linguistic
 - Coverbal expression: illustrative / expressive / paraverbal
 - Social interaction: phatic / regulatory
 - Meta-communication
 - Extra-communication

13 Summary of classification [taxonomy tree]: hand motion divides into unintentional and intentional movement; intentional movement into communicative (hand gesture) and direct manipulation; communicative gestures [Kendon (1988)] into spontaneous (natural), language-like gestures, pantomimes, emblems, and sign language; spontaneous gestures [McNeill (1991)] into iconic, metaphoric, deictics, and beats; iconic gestures into spatiographic, kinetographic, and pictographic.

14 Example gestures [figure]: pictographic gestures (rectangular shape, triangle shape, round shape); kinetographic gestures (move-up, move-down, move-left, move-right, move-far, come-closer, rotate, delete-or-cancel); deictics (fixation); metaphoric gestures.

15 2. Issues in Gestural Communication
 • Context detection
 • Identifying hand gestural attributes
 • Tracking of hand gesture
 • Low-level (visual pattern) interpretations
 • Integration of the low-level interpretations
 • Gesture segmentation
 • Correlating the hand motion and the virtual world
 • Integration (or fusion) with other modalities

16 (1) Context detection
 • Global context detection: when to look at the user's hand movement?
 - Existence of speech (in the case of gesticulation)
 - Use of speech semantic functions, e.g., "Move the ball from the table to there", "Move the brush like this"
 - Hands' movement into the gesture region

17 • Context within a gesture: gestural phases [Kendon 1980][MIT 1996]
 - preparation
 - pre-stroke hold
 - stroke
 - post-stroke hold
 - retraction

18 (2) Attributes of hand gestures
 • Static attributes
 - Hand configuration (posture)
 - Hand orientation
 - Hand position relative to the body
 • Non-static attributes (see the sketch after this list)
 - Movement direction
 - Movement speed / acceleration
 - Magnitude of movement
 - Shape (arm movement trajectory): linear / nonlinear
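
To make the non-static attributes concrete, here is a minimal numpy sketch (not from the course; the trajectory layout, sample rate, and function name are illustrative assumptions) that derives direction, speed, acceleration, magnitude, and linearity from a sampled 3-D hand trajectory:

```python
import numpy as np

def motion_attributes(traj, dt=1/60):
    """Non-static attributes of a sampled 3-D hand trajectory.

    traj: (N, 3) array of hand positions, one sample per frame.
    dt:   sampling interval in seconds (60 Hz assumed here).
    """
    vel = np.gradient(traj, dt, axis=0)           # per-frame velocity
    acc = np.gradient(vel, dt, axis=0)            # per-frame acceleration
    speed = np.linalg.norm(vel, axis=1)           # movement speed
    seg = np.diff(traj, axis=0)
    path_len = np.linalg.norm(seg, axis=1).sum()  # magnitude of movement
    chord = np.linalg.norm(traj[-1] - traj[0])
    straightness = chord / max(path_len, 1e-9)    # 1.0 = perfectly linear
    direction = (traj[-1] - traj[0]) / max(chord, 1e-9)
    return speed, acc, path_len, straightness, direction
```

The straightness ratio is one simple way to separate the slide's linear from nonlinear trajectory shapes.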

19 Pattern characteristics of hand gesture
 • Multiple concurrent attributes
 • Spatio-temporal dynamicity
 - Spatial properties: 3-D shape variation, 3-D orientational variance, 3-D positional variance
 - Temporal property: temporal variance, both inter-person and intra-person

20 (3) Tracking of hand gesture
 • Sensor-based technology: gloves (e.g., 10 flex angles) and position trackers (3-D position and orientation)
 • Vision-based technology

21 (4) Low-level interpretation
 • A classical pattern recognition problem, or not?
 • Similar to low-level vision.

22 (5) Integration of low-level interpretations

23 (6) Segmentation
 • Similar to the intermediate-level vision problem.

24 (7) Context-sensitive interpretation
 • A gesture may have different meanings in different contexts.

25 (8) Multi-modal integration
 • Integration of hand gesture, gaze, speech, and body movement.

26 3. Recognition Technology
 • Isolated vs. continuous
 • Action-completion-based interpretation vs. action-following (e.g., conducting)
 • One-hand vs. two-hand

27 Static attributes (posture and orientation)
 • Feature-based approach [Roy94]
 • Statistical approach [Newby94][Darrell95]
 • Neural network [Vaananen93][Kim95]
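
As a toy illustration of the posture-classification task (not the method of any system cited above), a nearest-neighbour classifier over hypothetical glove flex-angle vectors:

```python
import numpy as np

def classify_posture(flex, templates, labels):
    """Nearest-neighbour posture classification.

    flex:      (10,) vector of glove flex angles for the current frame.
    templates: (K, 10) stored example vectors, one or more per posture.
    labels:    length-K list of posture names.
    """
    d = np.linalg.norm(templates - flex, axis=1)  # distance to each template
    return labels[int(np.argmin(d))], float(d.min())

# Hypothetical usage: 'fist' and 'point' templates, flex angles in degrees.
templates = np.array([[80] * 10,
                      [10, 10, 80, 80, 80, 80, 80, 80, 80, 80]], float)
print(classify_posture(np.array([75] * 10, float), templates, ["fist", "point"]))
```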

28 Non-static attributes
 • Feature-based approach [Rubine91]
 • Neural network [Murakami91][Waldron95]
 • Statistical approach [Wilson95][Starner95]

29 • Gesture posture transition
 - Statistical correlation
 - Neural-network-based recognition at every moment, followed by temporal integration

30 • Gesture arm movement
 - Fuzzy configuration states [Wilson95]
 - Hidden Markov Models [Starner95][Nam96][Wilson96]
[figure: HMM state diagram with transition probabilities and per-state observation probabilities over symbols a, b, c]
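
A minimal sketch of the HMM route using the hmmlearn package, rather than the cited systems' own implementations; the feature sequences, state count, and function names are assumptions:

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_hmms(examples, n_states=4):
    """Train one Gaussian HMM per gesture class.

    examples: dict mapping gesture name -> list of (T_i, D) feature
              sequences (e.g., hand position/velocity per frame).
    """
    models = {}
    for name, seqs in examples.items():
        X = np.vstack(seqs)               # concatenate all training sequences
        lengths = [len(s) for s in seqs]  # hmmlearn needs per-sequence lengths
        m = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag", n_iter=25)
        m.fit(X, lengths)
        models[name] = m
    return models

def recognize(models, seq):
    """Isolated recognition: pick the model with maximum log-likelihood."""
    return max(models, key=lambda name: models[name].score(seq))
```

One HMM per gesture with a max-likelihood decision handles the isolated case; continuous recognition (slide 33) instead connects the per-gesture models into a network.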

31 • Posture + movement at the same time
 - Recurrent neural network [Taguchi91]
 - Two-level back-propagation network [Waldron95]

32 Difficulties: recognition difficulty increases, from an engineering point of view, along several axes:
 - linear movements only → rotary or shaping movements
 - isolated (single stroke) → connected (multiple strokes) → continuous gesture
 - repetition of gesture
 - temporal dynamicity (with significant velocity and acceleration)

33 • Continuous arm movement: HMM network [Nam96]
[figure: per-gesture HMMs and a pause model connected between a global initial node and a global final node by null transitions]

34 Temporal integration for continuous recognition [Wexelblat 95]

35 Recognition & integration [architecture diagram]: a posture classifier and an orientation classifier feed an attribute tracker (one per attribute pattern); an HMM network recognizes whole gesture patterns for a gesture-level tracker; a situation tracker performs environment perception by finding the current marking of the GPN [GUARD], passing tokens up to high-level meaning in the application.

36 Further issues
 • Automatic segmentation of fully continuous gesture
 • Two-hand gestures
 • Multi-modal interaction

37 Body gesture recognition
 • The TPs and lecture notes of Section 4-2 were prepared by Chang-Whan Sul.
 • The human body as an input device in VR:
 - physical control over the avatar
 - task / command level interface

38 Approaches (tracking)
 • Electromagnetic / acoustic / mechanical trackers
 - simple; tracks accurately in real time
 - invasive; restricted working area; distorted signals
 • Vision-based human body tracking
 - non-invasive; virtually no limit on the working area
 - computationally heavy: non-real-time when precision is required, rough recognition when real-time performance is required

39 Research on body gesture
 • Psychology
 - MLD (Moving Light Display) interpretation: 3-D structure from motion; gait, gender, and person identification
 • Computer Vision
 - Restricted classes of motion: gait analysis, sports (tennis actions)
 - Simple general motions: HIA (Humans In Action), IVE (Interactive Video Environment), Mandala™

40 Hurdles in vision-based body gesture
 • Human body motion
 - articulated motion
 - too many DoFs
 - non-linear, non-rigid motion
 • Limitations of vision technology
 - a well-controlled environment is needed, e.g., uniform/stationary background, controlled illumination

41 Classification of existing works

42 Procedures
 1. Feature extraction
 2. Tracking
 3. Feature matching (inter-frame / inter-view)
 4. 3-D model reconstruction
 5. Recognition

43 • Methods
 - Template matching
 - Feature vector classification
 - Connectionist approach: neural networks
 - Probabilistic approach: Hidden Markov Models
 • Proper use of domain-specific knowledge about the human body is the key.

44 Motion Analysis of Articulated Objects for Optical Motion Capture (Soon Ki Jung, VR Group, KAIST, 1997)
 • Model-based, 3-D model, point tracking
[figure: articulated 3-D human model — pelvis, chest, head, left/right thigh, upper arm, forearm, and calf — with a world transform (T_w, R_w) and per-joint rotations R_p]

45 • Tracks point features from multiple cameras by HIEKF (hierarchical iterative extended Kalman filtering)
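
HIEKF itself is hierarchical and nonlinear; as a hedged simplification of the filtering idea only, here is a plain constant-velocity linear Kalman filter for one tracked 3-D point (all parameter values are illustrative, not the paper's):

```python
import numpy as np

class PointTracker:
    """Constant-velocity Kalman filter for one tracked feature point.

    A linear simplification; the paper's HIEKF instead iterates an
    extended Kalman filter over the articulated-body hierarchy.
    """
    def __init__(self, p0, dt=1/30, q=1e-3, r=1e-2):
        self.x = np.hstack([p0, np.zeros(3)])         # state: [position, velocity]
        self.P = np.eye(6)                            # state covariance
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3)   # motion model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])     # measure position
        self.Q = q * np.eye(6); self.R = r * np.eye(3)        # noise models

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured 3-D point z.
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]                             # filtered position
```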

46 • Captured motion is used to control avatars
 - motion transition / blending / variation issues
 - adjusting to various avatar models

47 Spfinder (stereo person finder) — Ali Azarbayejani, C. Wren, A. Pentland, MIT Media Lab
 • Used in ALIVE and its descendants

48 • 2-D blob tracking (Pfinder)
 - 2-D blobs of uniform color
 - domain knowledge about the human body
 - tracks head, hands, and feet
 - Kalman filter
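
A very rough sketch in the spirit of Pfinder's color-blob tracking, using OpenCV; the fixed YCrCb skin bounds below are an illustrative assumption, not Pfinder's learned per-blob Gaussian models:

```python
import cv2
import numpy as np

def track_skin_blob(frame_bgr, lo=(0, 133, 77), hi=(255, 173, 127)):
    """Segment skin-like pixels and return the largest blob's centroid.

    Returns (x, y) in image coordinates, or None if no blob is found.
    """
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, np.array(lo, np.uint8), np.array(hi, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)     # largest connected blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # blob centroid
```

The resulting per-frame centroid is exactly the kind of noisy measurement that the slide's Kalman filter would then smooth.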

49 • 3-D blob tracking
 - based on 2-D blob tracking
 - self-calibrating
 - 3-D blob correspondences

50 HIA (Humans In Action) — D. M. Gavrila & L. S. Davis (Univ. of Maryland)
 • 3-D model-based: skeleton + tapered superquadrics
 • Specially designed suit
 • Multi-view generate-and-test strategy
 - similarity measure: chamfer distance on edge-based features
 - hierarchical search over a sampled space around the current model
 • Used to build a motion DB
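
The chamfer similarity measure can be sketched with a distance transform; this is a generic implementation under simple assumptions (binary edge maps of equal size), not Gavrila & Davis's code:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(model_edges, view_edges):
    """Chamfer distance between two binary edge maps of the same size.

    Distance-transform the view edges, then average the distances read
    off at the projected model-edge pixels; lower means more similar.
    """
    # Distance from every pixel to the nearest view edge pixel.
    dist_to_view = distance_transform_edt(~view_edges.astype(bool))
    ys, xs = np.nonzero(model_edges)
    if len(xs) == 0:
        return np.inf
    return dist_to_view[ys, xs].mean()
```

In the generate-and-test loop, each candidate pose projects the 3-D model to an edge map and keeps the pose with the lowest score.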

51 [figure: model edges 1…n are projected and compared against view edges 1…n, producing per-edge similarities that combine into an overall model similarity]
Movie file URL: ftp://vr.kaist.ac.kr/pub/cs778/{d_circlwalk,d_twoelbrot, e_twoelbrot}.mpeg

52 First Sight — M. K. Leung & Y. H. Yang
 • 2-D model: stick figure + ribbon-represented body
 • Ribbons extracted from moving edges
 • Body labeling constraints
 - structural I: joint deviation
 - structural II: combination
 - shape constraints
 - balance

53 • Tracks gymnastics motions
 • No recognition

54 Mandala [Warme94]
 • User = avatar (video image)
 • 2-D, model-less
 • Physical interaction by 2-D intersection
 • Global features in avatar control: position / velocity of the center of mass

55 Appearance-based motion recognition of human action — James W. Davis, MIT, 1996
 • Motivation: blurred image sequences; patterns of where & how the motion occurs
Movie file URL: ftp://vr.kaist.ac.kr/pub/cs778/blur4.avi

56 MEI
 • Motion Energy Image (MEI): where the motion occurs
 - indexes into an action library by Mahalanobis distance between Hu moments
 - translation-, rotation-, and scale-invariant
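
A sketch of the MEI matching step using OpenCV's Hu moments; the library layout and the inverse-covariance input are assumptions for illustration:

```python
import cv2
import numpy as np

def mei(silhouettes):
    """Motion Energy Image: the union of motion over the sequence."""
    return (np.sum(silhouettes, axis=0) > 0).astype(np.uint8)

def hu_features(image):
    """The 7 Hu moments (translation/rotation/scale invariant)."""
    return cv2.HuMoments(cv2.moments(image, binaryImage=True)).ravel()

def match_action(query_mei, library, inv_cov):
    """Nearest action by Mahalanobis distance between Hu-moment vectors.

    library: dict mapping action name -> stored Hu-moment vector.
    inv_cov: inverse covariance of Hu moments over the training set.
    """
    q = hu_features(query_mei)
    def dist(v):
        diff = q - v
        return float(diff @ inv_cov @ diff)   # squared Mahalanobis distance
    return min(library, key=lambda name: dist(library[name]))
```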

57 MHI
 • Motion History Image (MHI): how the motion occurs
 - collapses the temporal motion information into a single image
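
The MHI update itself is nearly a one-liner in numpy; this is a generic sketch of the decaying-history idea, with an assumed duration parameter tau:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=30):
    """One timestep of a Motion History Image.

    Pixels that moved this frame are set to tau; everywhere else the
    image decays by one, so intensity encodes how recently motion
    occurred at each pixel.
    """
    return np.where(motion_mask, tau, np.maximum(mhi - 1, 0))

# Hypothetical loop, with motion_masks from frame differencing:
# mhi = np.zeros(frame_shape, np.int32)
# for mask in motion_masks:
#     mhi = update_mhi(mhi, mask)
```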

58 Action recognition
 • Motion recognition
 - Mahalanobis distance between Hu moments
 - max/min duration of motion
 • Multiple-camera extension: verification
Movie file URL: ftp://vr.kaist.ac.kr/pub/cs778/demoSeq.avi

59 Spatio-temporal method
 • Niyogi & Adelson, "Analyzing and Recognizing Walking Figures in XYT"
 - gait detection/recognition of frontoparallel walking
 - models the bounding contour with a snake: contour → stick figure → angle signals
 - classification of the walker: nearest-neighbor technique

60 Hidden Markov Model
 • Yamato, Ohya and Ishii, "Recognizing Human Action in Time-sequential Images using Hidden Markov Model"
 - tennis actions
 - observation symbols: vector-quantized mesh features

61 Summary
 • An emerging technology, still at the research level
 - not as hot as face or hands
 - will serve as one of the input modalities in future VR
 • Existing systems: very rough recognition of a limited set of motions
 • Other application areas: ubiquitous computing, AR, wearable computers

62 4-3. Face and Facial Expression Recognition
 1. Face detection
 2. Face identification
 3. Face tracking
 4. Lip-reading
 5. Facial expression recognition
 6. Others (gender recognition, etc.)

63 1. Face Detection
 • Where is the face?
 • Simple background
 - edge extraction
 - deformable templates
 - (color) histogramming
 • Complex background
 - color information
 - motion information
 - feature-based

64 2. Face Identification
 • Whose face is it?
 • Feature-based vs. template matching
 • Matching methods
 - direct matching
 - neural nets

65 Feature-based: an example
 • Brunelli (1991): uses 35 features
 - eyebrow thickness
 - vertical position relative to the eye centers
 - eyebrow arch
 - vertical position and width of the nose
 - vertical position, width, and height of the mouth
 - cheek shape
 - face width at the nose position
 - distance between the nose and the eyes

66 Template matching
 • A very simple technique
 - match using grey-level templates of the whole image
 • Comparison criterion: image correlation
 • Recognition across different viewpoints
 - several templates are needed per face
 • Deformable templates
 - model how the face deforms as the viewpoint changes
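
A minimal correlation-based template match using OpenCV (a generic sketch, not the Baron system on the next slide):

```python
import cv2

def best_match(image_gray, template_gray):
    """Locate a grey-level face template by normalized correlation.

    Uses OpenCV's matchTemplate; normalization reduces (but does not
    remove) the brightness sensitivity noted on the following slides.
    """
    scores = cv2.matchTemplate(image_gray, template_gray,
                               cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc, max_val   # top-left corner and correlation score
```

Running this once per stored template and comparing scores yields the matching-score vector described in the Baron example below.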

67 Template matching: an example
 • Baron (1981)
 - builds a database of images: each face is one database entry, stored as a set of four masks (the frontal view of the face plus the eyes, nose, and mouth), normalized to the eye positions
 - recognition: compare against every database image with a suitable metric (Euclidean distance) and output a vector of matching scores

68 • Feature-based vs. template matching
 • Feature-based
 - fast recognition
 - memory-efficient: with one byte per feature and 35 features, 35 bytes suffice
 • Template matching
 - simple, with good performance
 - sensitive to the brightness of the image: correlation values fluctuate strongly
 - memory-inefficient: stores an image of the whole face

69 Eigenfaces
 • Grounded in information theory
 • Eigenvectors of the high-dimensional face feature vectors
 • Eigenface: the image obtained by reshaping an eigenvector back into an image
 - the original image can be represented as a linear combination of eigenfaces
 • Face space: a fixed number of eigenfaces that preserve high image quality
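
A compact eigenface sketch via SVD in numpy; the matrix layout and the choice of k are illustrative assumptions:

```python
import numpy as np

def build_eigenfaces(faces, k=20):
    """PCA eigenfaces from training images.

    faces: (N, H*W) matrix, one flattened grey-level face per row.
    Returns the mean face and the top-k eigenfaces (as rows).
    """
    mean = faces.mean(axis=0)
    A = faces - mean                       # center the data
    # SVD of the centered data: rows of Vt are the eigenfaces.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, eigenfaces):
    """Coefficients of a face in face space (its linear combination)."""
    return eigenfaces @ (face - mean)
```

Identification then compares these k coefficients instead of whole images, which is what makes the representation memory-efficient.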

70 [image-only slide]

71 3. Face Tracking
 • Given a sequence of images, where does the head move to?
 • Tracking methods
 - optical flow
 - feature tracking
 - pattern matching and tracking
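
Of the three options above, feature tracking is easy to sketch with OpenCV's pyramidal Lucas-Kanade tracker; the face box input and all parameter values are assumptions:

```python
import cv2
import numpy as np

def track_face_features(prev_gray, next_gray, face_box):
    """Track corner features inside a detected face box across frames.

    face_box = (x, y, w, h) is assumed to come from a face detector.
    Returns the mean position of the successfully tracked features.
    """
    x, y, w, h = face_box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255                 # only look inside the face
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5,
                                  mask=mask)
    if pts is None:
        return None
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                  pts, None)
    good = new_pts[status.ravel() == 1]          # keep tracked points only
    if len(good) == 0:
        return None
    return good.reshape(-1, 2).mean(axis=0)      # new face-feature centroid
```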

72 4. Lip Reading
 • Benoit's experiment
 • Lip-reading methods
 - oral cavity shadow
 - optical information of the oral cavity
 - motion-based analysis
 - synchronized optical/acoustic database

73 5. Facial Expression Recognition
 • Closely related to facial expression synthesis
 • The problem: categorize facial movements into a predetermined set
 - classify facial movements with reference to FACS
 • Methods
 - Mechanical (the "Mario" system): the most direct and easiest, but uncomfortable for the user and restrictive of movement
 - Vision-based
   - static: uses deformable templates; often used in the training phase of experiments
   - dynamic: tracks changes in facial motion, e.g., with optical flow

74 Template matching (Vanger 1995): templates for the eyes and mouth only.

75 • Template matching (Darrell 1993)
 - tracks facial expressions
 - the problem of expression classification: whole-face templates vs. component templates
 - tracks expressions using differences in image correlation

76 Optical flow and motion-field matching (Essa 1995)

77 • Builds its own model of facial motion
 • Motion-field matching using the facial motion model

78 • Lip contours (Moses 1995)
 - uses valley contours
 - less sensitive to lighting brightness
 - little affected by face position, expression, etc.
 - can be built in real time

