
1 Applications Presented by: Michal Kamara

2 Outline Motivation: shadow removal from multi-projector displays Dynamic shadow elimination for multi-projector displays Dynamic shadow removal from front-projection displays Automatic generation of consistent shadows for Augmented Reality

3 Motivation The use of large-scale front-projection displays has emerged in recent years: –Immersive teleconferencing –Virtual reality environments –Augmented reality One fundamental problem: shadows easily remove the user from the visually immersive experience.

4 What can be done? Back-projection –Problems: space considerations, intensity and sharpness attenuation, and mechanical complexity. Constrain user movement –Some interactive display environments adaptively render a model based on the user's position. –May prevent the user from viewing particular parts of the model. Or…

5 Dynamic Shadow Elimination for Multi-Projector Displays Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar, 2001

6 Outline System Overview Automatic Alignment Reference Images Shadow Detection Shadow Elimination Iterative Feedback Results

7 System Overview

8 System Overview – cont'd The system must accurately align the projected images on the display surface. Each occluder can create multiple shadows on the display surface. The system must precisely adjust projector output to compensate for each occlusion. Shadow boundaries must be treated carefully.

9 Algorithm's steps (block diagram): the camera captures the occluded display and compares it with the reference image of the raw slide, yielding an alpha mask in the camera frame; the camera-screen homography warps the mask into the screen frame, where it is applied to produce a shadow-compensated slide; the screen-projector homographies (screen-projector1 … screen-projectorN) then map that slide to Projector 1 … Projector N, giving the multi-projector display after shadow elimination.

10 Automatic Alignment We need to find T such that p' ≅ T p (in homogeneous coordinates) for all corresponding points p and p' in the two different coordinate systems. Because T is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points.
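As a concrete illustration of the 4-point determination, here is a minimal direct linear transform (DLT) sketch; the point coordinates are made up for illustration, and this is not the paper's code:

```python
import numpy as np

def estimate_homography(src, dst):
    """src, dst: four (x, y) pairs. Returns 3x3 T, fixed up to scale."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on T.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null space of this 8x9 system holds T's entries (via SVD).
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    T = vt[-1].reshape(3, 3)
    return T / T[2, 2]          # resolve the free scale factor

def apply_homography(T, pt):
    p = T @ np.array([pt[0], pt[1], 1.0])
    return (p[0] / p[2], p[1] / p[2])

# Unit square mapped to an arbitrary quadrilateral (toy data).
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0.1, 0.2), (1.3, 0.1), (1.2, 1.4), (-0.1, 1.1)]
T = estimate_homography(src, dst)
for s, d in zip(src, dst):
    assert np.allclose(apply_homography(T, s), d, atol=1e-6)
```

Four pairs give exactly eight constraints for the eight free parameters, which is why four matches suffice.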

11 Automatic Alignment – cont'd The camera-projector homography can be determined by: –iteratively projecting a random point from the projector onto the display surface and observing that point in the camera, or –projecting a rectangle from the projector, where the coordinates of the rectangle corners in the projector are known and can be located in the camera frame using image-processing techniques. The display area is either automatically determined by the camera or interactively specified by the user. The camera-screen homography can be determined from the corners of the display surface.

12 Automatic Alignment – cont'd The projector-screen calibration is important to avoid distortions and double images on the display surface that may be caused by the off-center projections.

13 Reference Images Creating the reference images is done during the initialization phase, when the scene is occluder-free. For each slide the system projects, several camera images are captured and pixel-wise averaged to create a reference image for that slide.

14 Shadow Detection During operation, the camera acquires a current image, which is compared to the reference image. A pixel-wise difference between the reference and current camera images is used to detect shadows. A median filter (5×5) is applied to the difference image to reduce the effects of camera noise and minor calibration errors.
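A toy sketch of this detection step (pixel-wise difference plus a 5×5 median filter); the image sizes, intensities, and threshold are illustrative assumptions:

```python
import numpy as np

def median_filter(img, k=5):
    """Naive k x k median filter with edge padding (fine for a demo)."""
    r = k // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

reference = np.full((12, 12), 200.0)        # occluder-free reference
current = reference.copy()
current[4:9, 4:9] = 80.0                    # a dark "shadow" blob
current += np.random.default_rng(0).normal(0, 1, current.shape)  # noise

diff = reference - current                  # positive where shadowed
shadow_mask = median_filter(diff) > 50.0    # threshold after filtering

assert bool(shadow_mask[6, 6])              # shadow center detected
assert not bool(shadow_mask[0, 0])          # clean corner not flagged
```

The median filter suppresses isolated noisy pixels that a plain threshold on the raw difference would misclassify.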

15 Shadow Elimination From the difference image a mask called the alpha mask is constructed, updating each pixel in proportion to the difference: α_t(x,y) = α_{t−1}(x,y) + γ [ I_R(x,y) − I_t(x,y) ], clamped to the valid range, where I_t is the camera image at time t, I_R is the reference image, and γ is a system parameter, set to 0.25, to avoid rapid fluctuations. Note that there is only one alpha mask for all projectors.

16 Shadow Elimination – cont'd The alpha mask is computed in the camera frame and hence must be transformed into the screen frame. Well, we know how to do that… Applying the alpha mask to the current slide is done by replacing the alpha channel of the slide image. Which channel? An alpha channel is an additional channel (alongside the three R, G, B channels) that may be added to an image; it describes the weight of each pixel when the image is composited over another.

17 Shadow Elimination – cont'd After applying the alpha mask to the screen slide, it is transformed for each projector and… displayed.

18 Iterative Feedback Since there is no good photometric model of the environment, there is no precise prediction of how much light is needed to remove a shadow; that is why the iterative feedback loop is used. The system keeps adding light to shadowed regions until they appear as in the reference image. Surprisingly, this creates robustness: if one of the projectors fails, the alpha mask will uniformly increase. The main drawback is time: shadows are eliminated in approximately 3 iterations.
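The feedback loop can be illustrated with a deliberately simplified one-pixel model (two projectors, one occluded); the photometric model here is my own toy assumption, not the paper's:

```python
# Two projectors each contribute 0.5 of full brightness; an occluder
# blocks one at this pixel, and the gain-limited alpha update slowly
# boosts the other until the pixel matches the reference again.
reference = 1.0
alpha = 1.0      # per-pixel boost applied to the unblocked projector
gamma = 0.25     # feedback gain, as on the previous slide

def observed(alpha):
    blocked = 0.0                        # occluded projector: no light
    unblocked = min(0.5 * alpha, 1.0)    # boosted, clipped at full power
    return blocked + unblocked

for step in range(40):
    # Add light in proportion to how far the pixel is from the reference.
    alpha += gamma * (reference - observed(alpha))

assert abs(observed(alpha) - reference) < 1e-2   # converged
assert 1.9 < alpha <= 2.0                        # compensating boost
```

The same update also explains the robustness claim: if a projector dies, every pixel's error grows, so the mask rises uniformly until the remaining projectors compensate.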

19 Results

20 Results – cont'd To examine image quality over the shadow-removal process, the SSD error of grayscale intensities was calculated against the reference image. As expected, the hard shadow from the single projector is the major source of error.

21 Results – cont'd [plot: SSD error per frame] The "occluder" enters at t=4 and leaves at t=11. The remaining low errors are attributed to the "halo" effect.

22 Dynamic Shadow Removal from Front-Projection Displays Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales, 2001

23 Outline System Overview Requirements Calibration –Geometric Calibration –Color Calibration Creating an expected image Alpha mask generation Results Main drawbacks

24 System Overview Very similar to the previous system, with one main difference: the expected image is created during operation from the projector frame buffers, using the calibration. This difference introduces a new type of calibration: color calibration.

25 Requirements Screen points are illuminated by more than one projector. At least one camera is able to observe the screen surface at all times.

26 Calibration Critical both for shadow detection and removal. A two-phase process, performed prior to use of the system: –Geometric Calibration –Color Calibration

27 Geometric Calibration Very similar to the previous algorithm, only now the calibration is directly between camera and projector. Given a camera and projector pair, calibration determines the transform from pixels in the camera plane to their corresponding positions in the projector's frame buffer.

28 Geometric Calibration – cont'd Reminder: we need to find A such that p' ≅ A p for all points p in the camera and corresponding points p' in the projector. Because A is a planar projective transform, it can be determined up to an unknown scale factor by 4 pairs of matching points. We can find such points by iteratively projecting a random point from the projector onto the display surface and observing that point in the camera.

29 Geometric Calibration – cont'd The accuracy of A can be measured by the reprojection error over additional matched points. In this study, 10 matching pairs were used for calculating A and 50 points for calculating the calibration error. To improve results, a Monte Carlo technique was used.
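A minimal sketch of measuring calibration accuracy as mean reprojection error over held-out matched points; the homography and point values are made up for illustration, not taken from the study:

```python
import numpy as np

def reprojection_error(A, cam_pts, proj_pts):
    """Mean distance between A-mapped camera points and the observed
    projector points (homogeneous coordinates)."""
    errs = []
    for (x, y), (u, v) in zip(cam_pts, proj_pts):
        p = A @ np.array([x, y, 1.0])
        errs.append(np.hypot(p[0] / p[2] - u, p[1] / p[2] - v))
    return float(np.mean(errs))

# Toy homography: a pure translation by (2, -1).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0],
              [0.0, 0.0, 1.0]])
cam = [(0.0, 0.0), (3.0, 1.0), (2.0, 5.0)]
proj = [(2.0, -1.0), (5.0, 0.0), (4.0, 4.0)]   # exact matches

assert reprojection_error(A, cam, proj) < 1e-12
```

A simple Monte Carlo variant would estimate A from several random 4-point subsets and keep the candidate with the lowest error on the held-out points.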

30 Color Calibration A given camera C observes the display surface while uniform color images of increasing intensity are iteratively projected from projector P. For each projected color image, the mean color intensity is computed over the corresponding observed image; this is computed for each channel separately, and the mean value over 10 trials is computed for each color channel. This does not account for color differences between the projectors.

31 Measured transfer function for each color channel

32 Color Calibration – cont'd The transfer function for each color channel C is a parametric curve f_C whose four parameters are fit to the measured datapoints using Levenberg-Marquardt nonlinear optimization [et al. 1998]. These color transfer functions provide a straightforward way to predict how a color in projector space will appear in the camera image.
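The paper fits a four-parameter curve per channel with Levenberg-Marquardt; as a lightweight stand-in, this sketch fits a one-parameter gamma curve to synthetic measurements by brute-force search (the data and curve family are my assumptions, not the paper's):

```python
import numpy as np

# Synthetic "measurements": projected intensities and the camera's
# observed response, generated from a gamma-2.2 transfer curve.
proj_levels = np.linspace(0.05, 1.0, 20)
observed = proj_levels ** 2.2

# Fit f(x) = x**g by minimizing squared error over a grid of g values.
gammas = np.linspace(1.0, 4.0, 301)
losses = [np.sum((proj_levels ** g - observed) ** 2) for g in gammas]
g_fit = gammas[int(np.argmin(losses))]

assert abs(g_fit - 2.2) < 0.02   # recovers the generating curve
```

Once fitted, evaluating the curve at any frame-buffer value predicts the corresponding camera intensity, which is exactly what the expected-image step needs.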

33 Color Correction results Observed image; predicted image without color correction; predicted image with color correction.

34 Creating an expected image In a dynamic display the imagery may change unpredictably (user movement, simulations, video data), so the expected image must account for the changing display. The expected image is the basis for subsequent modification of projector frame-buffer pixels, so we want it to be as accurate as possible.

35 Creating an expected image – cont'd An expected image is recovered by: –Warping all projector pixels into the camera frame (geometric calibration); for higher accuracy, a supersampling technique is used. –Applying color correction (color calibration): each channel of the warped image is passed through its fitted transfer function f_C, where the warped image is the expected image after geometric calibration.
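The second stage above can be sketched as follows; for brevity the geometric warp is assumed already done, and a gamma curve stands in for the fitted per-channel transfer function:

```python
import numpy as np

def color_correct(warped, gamma=2.2):
    """warped: HxWx3 image in [0, 1], already warped into the camera
    frame. Applies the (stand-in) transfer function per channel to
    predict the camera's observed response."""
    return warped ** gamma

# A flat mid-gray frame buffer after warping (toy data).
warped = np.full((2, 2, 3), 0.5)
expected = color_correct(warped)

# Every pixel is pushed through the same transfer curve.
assert np.allclose(expected, 0.5 ** 2.2)
```

The resulting `expected` array is what the shadow detector compares against the live camera image.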

36 Predicted image example: camera view, before color correction.

37 Alpha mask generation The expected image is compared to the captured imagery by subtracting color components. This yields two delta images, one for each sign of the difference. Each delta image is filtered (3×3 median) to remove the effect of sensor noise. All of the above happens in the camera coordinate frame; using the camera-projector homography, the delta images are warped into the reference frame of each projector for correction.

38 Alpha mask generation – cont'd Once a delta image has been aligned to a projector, an appropriate alpha mask is computed by adjusting alpha values in proportion to the delta, with each change clamped to the maximum intensity change allowed between any two frames, to avoid rapid fluctuations. The alpha-blending process takes into account whether incoming alpha values should be added to or subtracted from the alpha channel currently being projected.

39 Example: difference image and resulting alpha mask.

40 Results

41 Main drawback It takes the system about 3-4 frames to converge to a blended image. This is not an interactive rate.

42 Automatic generation of consistent shadows for Augmented Reality Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed, 2005

43 Outline Motivation The problem Previous work Method overview Shadow detection step –Automatic estimate of the shadow intensity Shadow Protection step Shadow Generation step

44 Motivation A wide range of applications use computer-generated animations in combination with pictures of real scenes: –Medical training –Medical surgery –Entertainment Some require seamless integration of the virtual elements with the real ones. Consistent shadows of the virtual objects give a correct geometric interpretation, and correct lighting enhances the feeling that the virtual objects are part of the real scene.

45 The problem This doesn't look natural: the shadow lies correctly on the ground but overlaps incorrectly with the real shadow.

46 Previous work Since the early 90's, a few solutions to the illumination inconsistency have been proposed. Most of them assume that a model of the real scene is available; if not, it is reconstructed using photos from different viewpoints. This usually leads to a mismatch between the simplified geometry and the texture. In this paper, a new procedure is presented that offers a solution regardless of the quality of the geometric reconstruction.

47 Geometric reconstruction example: outdoor scene; reconstructed geometry; shadow created based on geometry; mismatch between geometry and texture.

48 Method overview The system is applied to scenes with one main real light source. The real elements' geometry and the position of the light source only need to be known approximately. A three-step mechanism is designed: –Shadow detection step –Shadow protection step –Shadow generation step


50 Shadow detection step In order to protect the existing shadows in the scene from any post-processing, the shadow pixels in the texture need to be identified: –First, a shadow contour estimate is calculated using the geometry and the light-source position. –Next, the exact shadow contour is extracted using an edge detector, in this case the Canny edge detector.

51 Edge detection example Input for the edge detector; using the geometric estimate, an accurate edge detection is performed.

52 Shadow detection step – cont'd Correct detection will occur when: –The position of the geometric estimate is close to that of the real shadow, regardless of differences in shadow shape or detail. –The shadow, whether hard or soft, shows relatively high contrast with the background. –The contrast between the shadow and the background is larger than the contrast within the background's texture pattern. The computation speed of the shadow edge detector depends on the size of the real shadows.

53 Shadow detection step: automatic estimate of the shadow intensity Once the true shadow contour is known, it is possible to calculate, per material in shadow, a scaling factor that reflects the color intensity in the shadow region.

54 Shadow detection step: automatic estimate of the shadow intensity For each color channel C ∈ {R, G, B}, the scaling factor is the average intensity in the shadow region divided by the average in the non-shadow region: s_C = ( Σ_{p∈SR} I_C(p) / N_SR ) / ( Σ_{p∈NSR} I_C(p) / N_NSR ), where SR is the shadow region, NSR is the non-shadow region, and N_SR and N_NSR are the numbers of pixels in SR and NSR.
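The scaling-factor computation, transcribed directly for a single channel on a toy image (the image values and region masks are made up):

```python
import numpy as np

# Toy single-channel image: left two columns lie in shadow (~40),
# right column is directly lit (~120).
img = np.array([[40.0, 40.0, 120.0],
                [40.0, 42.0, 118.0],
                [38.0, 40.0, 122.0]])
in_shadow = np.array([[1, 1, 0],
                      [1, 1, 0],
                      [1, 1, 0]], dtype=bool)

# Mean intensity inside the shadow region over mean outside it.
scale = img[in_shadow].mean() / img[~in_shadow].mean()

assert 0.30 < scale < 0.36   # roughly 40/120 = 1/3 for this toy scene
```

Multiplying a lit pixel of the same material by this factor reproduces how that material looks in shadow, which is what the generation step relies on.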

55 Shadow Protection step A binary shadow mask is created in order to protect the points inside a real shadow from any scaling. The scaling factor is chosen to match the color of the non-overlapping areas with that of the points inside the real shadow.

56 Shadow Generation step A real-time shadow method, such as shadow maps or shadow volumes, is used to generate the virtual shadows. The intensity of the shadow is given by the appropriate scaling factor computed in the shadow detection step. Overlap between real and virtual shadows is prevented by using the mask generated in the shadow protection step. The intensities of the pixels in the non-overlapping regions are calculated by scaling the texture color with the scaling factor.
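A toy sketch of the masking logic in this step: scale the texture only where the virtual shadow falls and the protection mask allows it (all array values are illustrative):

```python
import numpy as np

texture = np.full((3, 3), 120.0)                 # lit texture color
virtual_shadow = np.zeros((3, 3), dtype=bool)
virtual_shadow[:, :2] = True                     # virtual shadow footprint
real_shadow = np.zeros((3, 3), dtype=bool)
real_shadow[:, 0] = True                         # protection mask
scale = 1.0 / 3.0                                # from the detection step

out = texture.copy()
apply = virtual_shadow & ~real_shadow            # protect real-shadow pixels
out[apply] *= scale

assert out[0, 0] == 120.0            # inside real shadow: untouched
assert abs(out[0, 1] - 40.0) < 1e-9  # virtual-only region: darkened
assert out[0, 2] == 120.0            # lit region: untouched
```

Leaving the real-shadow pixels untouched is what prevents the doubled, overly dark overlap shown in the problem slide.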

57 Results Geometric estimation of the shadow

58 Real scene; estimated shadow in yellow; the green area is sent to edge detection; edge detection result.

59 Real-time results Virtual man walking around real laptop

60 References Rahul Sukthankar, Tat-Jen Cham, Gita Sukthankar. Dynamic Shadow Elimination for Multi-Projector Displays. Proceedings of IEEE CVPR, 2001. Christopher Jaynes, Stephen Webb, R. Matt Steele, Michael Brown, W. Brent Seales. Dynamic Shadow Removal from Front Projection Displays. Proceedings of IEEE Visualization, 2001. Katrien Jacobs, Jean-Daniel Nahmias, Cameron Angus, Alex Reche, Celine Loscos, Anthony Steed. Automatic Generation of Consistent Shadows for Augmented Reality. Proceedings of Graphics Interface 2005 (ACM International Conference Proceeding Series, Vol. 112).
