Free-viewpoint Immersive Networked Experience February 2010
FINE Consortium

- MEDIAPRO (ES, industry): Project coordinator; scientific and technical leadership. Computer vision, graphics and virtual view synthesis; GPU-based parallel computing architecture to speed up computation and allow real-time performance. Professional user representative.
- BIT MANAGEMENT (DE, SME): Geometry and texture coding protocols; software architecture and implementation of the delivery framework and 3D visualization players. Home user representative.
- KTH (SE, university): Multi-view video coding and rendering enabling transmission/distribution. Action recognition, human motion recovery and tracking, from 2D vision to 3D graphics.
- UH (BE, university): Real-time, high-quality image-based algorithms for pose estimation and photorealistic rendering of 3D characters. Camera calibration; GPU-based parallel computing algorithms.
- BM (ES, research centre): Mathematical models, image processing, and view-interpolation and inpainting algorithms for real time.
- RETEVISION (ES, industry): Next Generation Networks specifications and standards.
- TRACAB (SE, SME): Tracking software and stereo-vision acquisition. Home user representative.
- EVS (BE, industry): Automation software for TV production and post-production; OB van integration of multimedia special-effects media servers. Professional user representative.
Abstract

Free-viewpoint Immersive Networked Experience (FINE) will focus on researching and developing a novel end-to-end architecture for the creation and delivery of a new form of live media content. FINE will introduce the concept of live free-viewpoint content, which will provide rich and compelling immersive experiences. The 3D reconstruction of the action will open up many new possibilities for content creation based on live events, to be exploited on several platforms: Internet, broadcast TV, interactive TV, mobile, online video games and digital cinema.
Research areas

Four main research areas:
1. Fast (real-time) and highly accurate algorithms for smooth view interpolation and photorealistic 3D reconstruction of live events from multiple high-quality live video streams.
2. Real-time tracking and marker-less motion capture of multiple characters.
3. New coding and transmission technologies to allow the synchronized delivery of geometry, imagery and metadata to a wide variety of end-users through Next Generation Networks.
4. Integration of the developed free-viewpoint technologies in a networked end-to-end architecture, and their validation in experimental productions.
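To make research area 1 concrete: view synthesis from calibrated cameras ultimately rests on re-projecting scene points between viewpoints. The following is a minimal toy sketch (not the project's actual algorithm) of warping a single pixel with known depth from one calibrated camera into another; all function names and the pinhole convention X_cam = R·X + t are illustrative assumptions.

```python
import numpy as np

def project(K, R, t, X):
    """Project a 3D world point X into a camera with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)           # to camera frame, then homogeneous pixel coordinates
    return x[:2] / x[2]

def backproject(K, R, t, uv, depth):
    """Lift pixel uv at the given depth (along the camera z-axis) back to world space."""
    ray = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    X_cam = depth * ray           # point in camera coordinates
    return R.T @ (X_cam - t)      # back to world coordinates

def warp_pixel(uv, depth, cam_src, cam_dst):
    """Re-project a pixel with known depth from the source view into the destination view."""
    K0, R0, t0 = cam_src
    K1, R1, t1 = cam_dst
    X = backproject(K0, R0, t0, uv, depth)
    return project(K1, R1, t1, X)
```

Applied per pixel with depth maps from the reconstruction step, this kind of warp is what lets a virtual camera placed between the physical ones be filled in from the captured video streams.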
Scientific and Technical Objectives

O1 - To develop robust and accurate methods for capturing calibrated and synchronized multi-viewpoint video.
O2 - To generate a free-viewpoint video representation of a 3D scene at real-time performance from the captured data.
O3 - To develop robust marker-less motion capture techniques to generate accurate multiple-character 3D animation streams from video/film sources.
O4 - To create efficient free-viewpoint video coding algorithms suitable for real-time delivery, and to define strategies and specifications for next-generation networks.
O5 - To develop image-based algorithms for photorealistic rendering of 3D characters synchronized with live-action video feeds.
O6 - To develop a new user-centred, multi-platform framework to integrate and exploit free-viewpoint technologies in experimental productions.