Immersive environments and projection mapping, July and August 2012. Presented by Olivier DELAHOUSSE. Poonpong BROONBRAHM, Frédéric VIGNAT, Jean-Marc Robert, Julien Abril
Introduction: Immersion. Most commonly used senses: sight and hearing. Perception of depth through stereoscopic vision and head-coupled perspective when moving. Spatialisation through binaural hearing. Concepts
Introduction: Virtual Reality. A computer system creates an artificial immersive world in order to navigate through the virtual world and interact with its objects.
Introduction: Types of VR. Immersive: head-mounted display. Projected: CAVE. Semi-immersive: simulator.
Introduction: Applications. Entertainment (immersive gaming), Medical (phobia treatment), Training (simulators), Art (interactive installations), Education (e-learning)
Introduction: examples of VR systems. Walk simulator with Virtusphere, HMD with haptic feedback, dome for military training, parachute simulator with HMD, Vuzix VR glasses
What’s a CAVE? A Cave Automatic Virtual Environment is an immersive virtual reality environment where projectors are directed at three, four, five or six of the walls of a room-sized cube. The name is also a reference to the allegory of the cave in Plato's Republic, in which a philosopher contemplates perception, reality and illusion. Geometric advantage: simplicity of orthogonal views, no warping, no edge blending.
Walailack CAVE. Hardware: presentation of the different elements. Software: choice of the most relevant one and development of applications in the CAVE
Hardware: 1) Presentation of the different elements. Already existing: the CAVE display structure. 3 walls of 3 m × 2.8 m → field of view of 270° horizontal and 100° vertical. Grey rear-projection material, steel frame. Note: human field of view ≈ 200° horizontal / 140° vertical.
Hardware: 1) Presentation of the different elements. Already existing: computer, graphics card and projectors. Intel Core 2 Duo 3 GHz, NVIDIA Quadro FX 1700, Matrox TripleHead2Go, Acer DLP projectors (SD ready)
Hardware: 1) Presentation of the different elements. Human/machine interaction device: the Xbox Kinect. Sound system: 3.1 Dolby Surround sound system
Software: 1) Choice of the most relevant one. The requirements: display 3 orthogonal views; enable development of applications.
Software: 1) Choice of the most relevant one. Aszgard/Syzygy: example with Aszgard. Quest3D: Quest3D interface
Software: 1) Choice of the most relevant one. Virtools: Virtools interface. Unity3D: Unity3D interface
Software: 1) Choice of the most relevant one.
Aszgard/Syzygy — Advantages: open source. Drawbacks: development of the environment by coding; uses a cluster of VMs.
Virtools — Advantages: application development by graphical coding; very powerful channels. Drawbacks: not open source; needs a VERY expensive licence to publish the work.
Quest3D — Advantages: application development by graphical coding; broad user community. Drawbacks: price (2000 / royalties).
Unity3D — Advantages: application development by script coding; broad user community; powerful free version. Drawbacks: script coding is at first less intuitive than graphical coding.
Software: 2) Development of applications in the CAVE. Creating the 3 views for an extended-desktop setup:
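The three views for the extended desktop can be sketched as three virtual cameras sharing one projection: left, front and right walls, each covering 90° of horizontal field of view so the three tile into the CAVE's 270°. This is a minimal NumPy sketch, not the actual Unity3D setup; the function names are mine, and the viewer is assumed at the centre of the CAVE (head-coupled perspective comes later).

```python
import numpy as np

def perspective(hfov_deg, aspect, near, far):
    """Symmetric perspective projection (OpenGL-style), horizontal FOV."""
    f = 1.0 / np.tan(np.radians(hfov_deg) / 2.0)
    return np.array([
        [f, 0.0, 0.0, 0.0],
        [0.0, f * aspect, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def yaw_view(yaw_deg):
    """World-to-camera matrix for a camera at the origin, rotated around +Y."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # camera-to-world
    view = np.eye(4)
    view[:3, :3] = rot.T  # inverse rotation = transpose
    return view

# One camera per wall; 3 x 90 deg of horizontal FOV tile into 270 deg.
# Wall aspect ratio 3 m / 2.8 m as on the hardware slides.
proj = perspective(90.0, 3.0 / 2.8, 0.1, 100.0)
views = {"left": yaw_view(90.0), "front": yaw_view(0.0), "right": yaw_view(-90.0)}
```

In a real extended-desktop setup (e.g. via TripleHead2Go), the three rendered views are simply placed side by side in one wide framebuffer.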
Software: 2) Development of applications in the CAVE. Import a scene, move the camera / interact with objects, enable physics behaviours
Software: 3) Middleware. How to use the Kinect? FAAST: FAAST interface, user's skeleton, actions
Software: 3) Middleware. How to use the Kinect? VRPN client in Unity3D
Software: 3) Middleware. How to use the Kinect?
FAAST — Advantages: easy to use; can be used in any application. Drawbacks: must be launched separately from the application; it is just a keyboard emulator.
VRPN in Unity3D — Advantages: more intuitive; removes one interface; can control a real avatar. Drawbacks: only for applications in Unity3D.
Application: Driving simulator
To go further: development of new applications: other simulators, e-learning, virtual conferences, virtual visits of places… (see video)
Enable tactile wall. Option 1: using infrared light, a modified camera, a blob tracker and the OSC protocol. CCV interface. CCV is very robust and well documented; it is a TUIO server for multi-touch surfaces. Calibration!
Enable tactile wall. Option 2: using the Kinect for hand tracking. NecTouch 1.0, a Kinect-based multi-touch TUIO server. Features:
- Simple interface, many options
- 2 calibration modes: Perspective and Warp (non-linear)
- Mini mode (like CCV) for high framerate (60 fps with 10 touch points and more)
- Simple calibration (place the 4 corners and hit 'c')
- Customizable parameters for better detection over different setups
Stability!
Use of gloves for precise finger tracking for object manipulation. Use of a joystick and a 6-DOF mouse for navigation. Enable 3D display. Depends on the application's needs!
Head tracking for real-time head-coupled perspective. Projection matrix for head-coupled perspective: as the user moves within the space, the frusta have to be updated in real time to provide a correct perspective of the synthetic world.
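The frustum update can be sketched with the standard off-axis ("generalized") perspective construction: from three corners of a wall and the tracked head position, compute the asymmetric frustum bounds each frame. A NumPy sketch under assumptions: the wall dimensions follow the hardware slides, the corner coordinates and function name are mine.

```python
import numpy as np

def head_coupled_frustum(pa, pb, pc, pe, near):
    """Off-axis frustum bounds (l, r, b, t) at the near plane for a screen
    given by corners pa (lower-left), pb (lower-right), pc (upper-left)
    and tracked eye position pe."""
    vr = pb - pa; vr /= np.linalg.norm(vr)            # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)            # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)   # screen normal

    va, vb, vc = pa - pe, pb - pe, pc - pe            # eye -> corner vectors
    d = -np.dot(va, vn)                               # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t

# Front wall, 3 m wide x 2.8 m high, 1.5 m in front of the CAVE origin
# (illustrative coordinates).
pa = np.array([-1.5, 0.0, -1.5])   # lower-left corner
pb = np.array([ 1.5, 0.0, -1.5])   # lower-right corner
pc = np.array([-1.5, 2.8, -1.5])   # upper-left corner
```

Feeding (l, r, b, t) into an OpenGL-style `glFrustum`-like matrix each frame yields the correct perspective for the tracked head.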
Example of a CAVE with real-time head-coupled perspective:
About my master thesis: research on the uses of 3D projection mapping in large-scale multimedia installations. Note the difference between projecting 3D content on a 2D surface and 2D content on a 3D surface!
Main difficulties: issues of projection; issues of scalability and calibration
Issues of projection. Projection on a flat surface: if your projection target is a flat surface, like a wall, and your projector is at an arbitrary position, not exactly facing the part of the wall you want to project onto, the projected image looks distorted. Using a homography, you can easily pre-distort the image you project so that it appears undistorted on the surface.
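The homography can be estimated from four point correspondences between the projector image and the measured quadrilateral the wall occupies; warping the content through its inverse pre-distorts it. A minimal Direct Linear Transform sketch in plain NumPy (the corner coordinates are invented for illustration; a real pipeline would typically use OpenCV's `findHomography`):

```python
import numpy as np

def homography_from_points(src, dst):
    """3x3 homography H with dst ~ H @ src from four point correspondences,
    estimated by the Direct Linear Transform (SVD null-space)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)   # null-space vector = flattened H

def apply_h(h, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Projector image corners -> measured wall quadrilateral in projector pixels
# (hypothetical measurements for a 1920x1080 projector).
corners_img = [(0, 0), (1920, 0), (1920, 1080), (0, 1080)]
corners_wall = [(120, 60), (1850, 140), (1800, 1000), (90, 950)]
H = homography_from_points(corners_img, corners_wall)
```

Warping the source image through `np.linalg.inv(H)` before projection then makes it land undistorted on the wall.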
Projection on an arbitrary surface: when projecting onto an arbitrary 3D surface, no matter how the projector is positioned and oriented towards the surface, the resulting image will mostly look distorted. Note, though, that there is one point from which the projected image looks perfectly aligned: the position of the projector. It therefore takes more than a homography to pre-distort the image you project so that it appears undistorted on the surface!
Virtual replica of the real scene. Creating a virtual copy of your real-world setup includes three steps: define the origin of your real-world coordinate system, which you will match with…
By rendering the 3D model from the same viewpoint, and using for the virtual camera the same lens characteristics as the real projector, the resulting image will perfectly fit the projected surface. Any flat texture you give the 3D model (in the above example, a simple black cross on white) will look undistorted on the real surface, and will therefore look and behave naturally, like flat textures that appear the same independently of the spectator's point of view.
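"Same lens characteristics" can be approximated by building the virtual camera's projection matrix from two projector datasheet figures: the throw ratio and the vertical lens shift. This is a simplified lens model of my own choosing (no distortion, axis-aligned), sketched in NumPy:

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """OpenGL-style asymmetric frustum matrix."""
    return np.array([
        [2*n/(r-l), 0.0, (r+l)/(r-l), 0.0],
        [0.0, 2*n/(t-b), (t+b)/(t-b), 0.0],
        [0.0, 0.0, -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def projector_camera(throw_ratio, aspect, v_shift, near=0.1, far=100.0):
    """Virtual camera mimicking a projector lens (simplified model):
    throw_ratio = throw distance / image width,
    v_shift     = vertical lens shift as a fraction of image height
                  (0.5 = the image sits entirely above the lens axis)."""
    half_w = near / (2.0 * throw_ratio)   # half image width at the near plane
    half_h = half_w / aspect
    b = (v_shift - 0.5) * 2.0 * half_h    # off-axis vertical bounds
    t = (v_shift + 0.5) * 2.0 * half_h
    return frustum(-half_w, half_w, b, t, near, far)
```

A typical table-mounted DLP projector throws its image above the lens axis, which is exactly the `v_shift = 0.5` case.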
3D illusion: it really only works from the one point in the real world that corresponds to the virtual camera's position. As this simple example shows, when viewed from roughly the projector's perspective, the text seems to be extruded from the little box. When viewing the same projection from a completely different point, the illusion is gone.
Virtual camera's perspective different from the real-world projector position! With some setups it is just not possible to have the real projector's position match the virtual camera's/projector's position: you may want the virtual scene to be viewed from a position where you cannot place the projector in the real world, or you may want several real-world projectors to share one virtual perspective. In those situations you need 2 render passes: the first pass renders the scene from the desired perspective; the second pass renders the scene from the desired real-world position of each projector WHILE the result of the first pass is being projected onto the 3D model's surface from the perspective of the virtual camera.
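Per surface vertex, the two-pass scheme boils down to two projections of the same world point: one through the spectator's virtual camera (where pass 1 drew it in the texture) and one through the projector's camera (which pixel must emit it in pass 2). A toy point-based sketch with an idealised pinhole model; the camera setup and function names are mine, not an engine API:

```python
import numpy as np

# Idealised pinhole: clip coords give (u, v) = (x / -z, y / -z).
PINHOLE = np.array([[1.0, 0.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0],
                    [0.0, 0.0, -1.0, 0.0]])

def look_from(t):
    """World-to-clip matrix for a pinhole camera translated to t (no rotation)."""
    m = np.eye(4)
    m[:3, 3] = -np.asarray(t, dtype=float)   # world-to-camera translation
    return PINHOLE @ m

def project(vp, x):
    """Project a 3D world point to 2D image coordinates."""
    p = vp @ np.append(np.asarray(x, dtype=float), 1.0)
    return p[:2] / p[3]

def projector_uv(x_world, spectator_vp, projector_vp):
    """Pass 1: where the spectator's render drew this surface point.
    Pass 2: the projector pixel that must show that texture sample."""
    return project(spectator_vp, x_world), project(projector_vp, x_world)
```

In an engine, pass 2 is done with projective texture mapping: the spectator's view-projection matrix becomes the texture matrix used while rendering from the projector.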
Scalability and calibration. Many projects need large display surfaces (immersive environments, architectural mapping, etc.), which makes them multi-projector systems. Big multi-projector systems are often impossible to calibrate manually (geometric registration and soft edge blending). Use of camera feedback.
Soft edge blending (figure: without blending, computed alpha masks, with blending)
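The computed alpha masks can be sketched as ramps over the overlap region of two side-by-side projectors: linear in light output, then raised to 1/gamma so that what the projectors actually emit sums to constant brightness. A NumPy sketch; the resolution, overlap width and gamma value 2.2 are illustrative assumptions.

```python
import numpy as np

def blend_mask(width, overlap, side, gamma=2.2):
    """Per-column alpha mask for one projector of a side-by-side pair.
    The overlap fades linearly in light output; the ramp is then
    gamma-corrected so the two projected images sum to even brightness."""
    alpha = np.ones(width)
    ramp = np.linspace(1.0, 0.0, overlap)     # linear fade-out, 1 -> 0
    if side == "left":
        alpha[-overlap:] = ramp               # fade out on the right edge
    else:
        alpha[:overlap] = ramp[::-1]          # fade in on the left edge
    return alpha ** (1.0 / gamma)             # compensate display gamma

left = blend_mask(1024, 128, "left")
right = blend_mask(1024, 128, "right")
```

Multiplying each projector's output image by its mask (per column here; a real mask is a 2D image) produces the "with blending" result of the figure.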
My master project. State of the art: almost all 3D projection mappings are pre-recorded → constraints on the position of spectators/projectors, non-reusability of the content, limited interaction with the content. My goal: investigate the possibility of using a game engine such as Unity3D to make a real-time 3D projection mapping tool for multi-projector projects with automatic calibration. Fields: virtual reality, computer vision, optics, video processing, audio processing, ergonomics, perception science, network architecture…