mUltimo3-D: a Testbed for a Multimodal 3-D PC. Presenter: Yi Shi & Saul Rodriguez. March 14, 2008.
Outline ► Motivations and concepts ► Operating System ► 3-D Display ► Input Devices ► Applications ► Discussion
Motivations and concepts of the 3-D PC ► 3-D display ► Contact-free interaction (nothing attached to the body) ► Higher input bandwidth: hands, head, eyes, etc. ► Intelligent interpretation of user intention ► Less strain on the user
Operating System of mUltimo3-D ► Development tool Based on a C++ class library that builds on and extends the virtual reality development environment dVS/DIVISION from PTC Ltd.
Operating System of mUltimo3-D (cont.) ► Interface agency Collector agent ► Collects and interprets data on the users and makes this information available to the other agents. ► All collected data is stored in a user model for later use. Placer agent ► Arranges the media objects in the interaction space. ► Avoids overlapping and collision of objects. ► Gradually moves obsolete objects away and then removes them.
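The placer agent's overlap avoidance can be sketched as follows. This is a minimal, hypothetical layout routine: the function names, the 2-D axis-aligned boxes, and the shift-right search strategy are all illustrative assumptions, not the actual mUltimo3-D implementation.

```python
def overlaps(a, b):
    """True if two axis-aligned boxes, each given as (x, y, w, h), intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place(new, existing, step=1.0, max_tries=100):
    """Shift the new media object until it no longer collides with any
    existing object (the real agent works in a 3-D interaction space)."""
    x, y, w, h = new
    for _ in range(max_tries):
        box = (x, y, w, h)
        if not any(overlaps(box, e) for e in existing):
            return box
        x += step  # gradually move the object away from the occupied region
    return None  # no free spot found within the search budget
```

A usage example: placing a 2x2 box next to an identical occupied box yields a position shifted just past it.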
Operating System of mUltimo3-D (cont.) ► Interface agency (cont.) Starter agent ► Automatically starts applications. Visualization agent ► Confirmation of critical actions, undo/redo. ► Status display: planning, working, satisfied, surprised, confused. Figure 1. Communication states of the avatar: idle state, confirmation request and inquiry.
3-D Display of mUltimo3-D ► Autostereoscopic displays Lenticular lens based 3D display. Figure 2. 3D display using an array of lenticular lenses to channel the left and right eye images to the respective eye (L and R refer to image stripes of the left and right images of a stereo pair).
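The column interleaving behind a lenticular sheet (Figure 2) can be illustrated with a short sketch. The even/odd column assignment below is an assumption for illustration only; a real panel's stripe layout depends on the lens pitch and viewer position.

```python
def interleave(left_row, right_row):
    """Column-interleave one scanline of a stereo pair: even pixel columns
    carry the left-eye image, odd columns the right-eye image, so each
    lenticule can channel adjacent stripes to the respective eye."""
    assert len(left_row) == len(right_row)
    return [left_row[i] if i % 2 == 0 else right_row[i]
            for i in range(len(left_row))]
```

For a 4-pixel scanline this produces the alternating L/R stripe pattern the lenses then separate.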
3-D Display of mUltimo3-D (cont.) ► Face to Face The display screen sits on a platform that turns to keep the screen continuously oriented toward the user's eyes. The left/right image pair is updated as the screen moves, which avoids spatial and optical distortions and widens the effective viewing angle.
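The orientation step above amounts to computing a yaw angle from the tracked eye position. A minimal sketch, assuming the screen pivot sits at the origin with +z pointing at the default viewing spot (the function name and coordinate convention are illustrative, not from the paper):

```python
import math

def platform_yaw(eye_x, eye_z):
    """Yaw angle in degrees that turns the screen normal toward the
    viewer's eyes at lateral offset eye_x and distance eye_z."""
    return math.degrees(math.atan2(eye_x, eye_z))
```

A viewer straight ahead needs no rotation; a viewer as far to the side as away from the screen needs a 45-degree turn.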
3-D Display of mUltimo3-D (cont.) ► Full-focus problem The physical and virtual positions of objects differ, which is undesirable when supporting direct manipulation of virtual objects with, for example, the hands. Figure 3. With conventional 3D displays, interaction between virtual and real objects in the grasp area is affected by blur: either the virtual object is sharp (left) or the real object is sharp (middle), depending on which object the user is looking at. On the accommodation display both objects are seen in full focus (right).
3-D Display of mUltimo3-D (cont.) ► Possible solution Accommodation 3-D display Figure 4. The user accommodates on and perceives objects at the aerial image plane. The Fresnel lens projects the exit pupils of the stereo projector to the left and right eye, respectively (field-lens principle) and hence completely separates the constituent stereo images (in the prototype system there is virtually no crosstalk).
Input Devices of mUltimo3-D ► Multiple input signals Figure 5. The overall system diagram of the mUltimo3D testbed
Input Devices of mUltimo3-D ► Multiple input signals Figure 6.1. The input devices of mUltimo3-D
Input Devices of mUltimo3-D ► The mUltimo3-D system Figure 6.2. The mUltimo3-D system
Input Devices of mUltimo3-D (cont.) ► Voice input IBM ViaVoice is integrated into mUltimo3-D. The Logox3 speech synthesis software from G-DATA is used for speech output.
Input Devices of mUltimo3-D (cont.) ► Video head tracker Skin color is used as the tracking feature. Eye blinking is used to locate the positions of the eyes on the head. Able to cope with changes in size, orientation and illumination. (demo)
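A per-pixel skin-color test can be sketched as below. This is only an illustrative stand-in for the tracker's classifier: the normalized rg-chromaticity space is one common choice (dividing out overall brightness is what gives some robustness to illumination changes), and the threshold values here are assumptions picked for the sketch, not values from the paper.

```python
def is_skin(r, g, b):
    """Crude skin classifier in normalized rg-chromaticity space.
    Normalizing by total intensity discards brightness, so the test
    depends mainly on hue (illustrative thresholds)."""
    total = r + g + b
    if total == 0:
        return False
    rn, gn = r / total, g / total
    return 0.35 < rn < 0.5 and 0.25 < gn < 0.37
```

A skin-toned pixel passes the test while a saturated blue pixel does not.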
Input Devices of mUltimo3-D (cont.) ► Video hand tracker Two cameras and infrared illumination are used to capture and segment the hands. Figure 7. Video hand tracker (the device attached to the keyboard) at the mUltimo3D user interface and the segmented hand.
Input Devices of mUltimo3-D (cont.) ► Video hand tracker The hand center is determined and radial lines are derived from it. Features along these lines are used to recognize the hand-shape pattern as well as its movement. (demo)
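One simple feature derivable from such radial lines is a distance signature: the farthest contour point from the hand center in each angular sector. The function below is an illustrative stand-in for the tracker's actual line features (its name, the sector binning, and the max-distance choice are assumptions for this sketch):

```python
import math

def radial_signature(center, contour, n=8):
    """For each of n angular sectors around the hand center, record the
    distance to the farthest contour point; the resulting vector is a
    simple shape descriptor for matching hand postures."""
    cx, cy = center
    sig = [0.0] * n
    sector = 2 * math.pi / n
    for x, y in contour:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        k = int(ang / sector) % n
        sig[k] = max(sig[k], math.hypot(x - cx, y - cy))
    return sig
```

For a unit-radius contour the signature is flat; an extended finger would raise one sector's value.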
Input Devices of mUltimo3-D (cont.) ► Video gaze tracker Uses the cornea-reflex method with low-intensity infrared light. The center of the pupil and the reflection of the light on the cornea are located to determine the gaze direction. (demo) Figure 8. The eye image captured by the gaze tracker (left) and an enlarged part of the eye showing the pupil and light reflections on the cornea (right).
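The geometry behind the cornea-reflex method: the corneal glint stays roughly fixed as the eye rotates, while the pupil center moves, so the pupil-glint difference vector indexes the gaze direction once a per-user calibration maps it to screen coordinates. A minimal sketch, with the linear gain/origin calibration and all names being illustrative assumptions:

```python
def gaze_offset(pupil, glint):
    """Pupil-minus-glint vector in image pixels (the cornea-reflex cue)."""
    return (pupil[0] - glint[0], pupil[1] - glint[1])

def to_screen(offset, gain=(40.0, 40.0), origin=(512.0, 384.0)):
    """Illustrative linear calibration mapping the pupil-glint offset to a
    screen position; a real system fits gain/origin from calibration points."""
    return (origin[0] + gain[0] * offset[0], origin[1] + gain[1] * offset[1])
```

A zero offset maps to the calibrated screen center; nonzero offsets scale linearly away from it.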
Applications in mUltimo3-D Figure 9. The placer agent arranges the display of the media objects in the interaction space. The tool objects (cubes) show icons for gaze-controlled selection of applications. The cubes turn overproportionally when the head moves, so that all sides become visible with slight head movements.
Applications in mUltimo3-D (cont.) Figure 10. Screenshots of the Internet browser (left) and the multimodal 3D CAD application (right). By changing the viewing perspective (head movement), hidden objects can be viewed and addressed by eye gaze. The 3D CAD object shelf covering parts of the video windows has currently been moved to the front by gaze interaction.
Discussion ► Combining and weighting multimodal input information in the best way ► Real-time database updating ► More user testing and feedback