
1 Deriving Lights from Pixels
Presented by: WAIL ALI EL EBEEDY
Web address: ArchitectureWeek - Tools - Deriving Lights from Pixels - 2003_0528.htm
Topic Number:
Date:

2 Integrating computer-generated building images with real photographs can be an effective addition to the architectural design process. To composite the two convincingly, one must know the photo's camera position and parameters as well as the shapes and material properties of the real objects. To simplify the process, we have developed a user-friendly and practical method for obtaining the needed real-world data from photographs. Our method is based on a simple and easily constructible device used in conjunction with software we have written. With our system, users can determine the orientations, colors, and intensities of light sources as well as the surface colors of objects. This "shadow-based" method uses photographs of a simple device constructed by placing a small cylindrical stick at the center of a white plate. Because the shape of the device is known, it is easy to create a digital model of it. Comparing the model of the device and its cast shadows to its photograph gives us a baseline for setting parameters for the rendering software.
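The article does not give formulas for this comparison, but the geometry of the stick-and-plate device suggests a simple relation. The following is a minimal Python sketch, not the authors' code, assuming the matched camera has already yielded the 3D positions of the stick's tip and of its shadow's tip on the plate (both hypothetical inputs here).

```python
# Minimal sketch (assumed geometry, not LightRecon itself): estimating a
# distant light's orientation from the shadow-casting device. A ray from the
# light grazes the stick's tip and lands at the shadow's tip, so the
# direction toward the light is (stick_tip - shadow_tip), normalized.
import numpy as np

def light_direction(stick_tip: np.ndarray, shadow_tip: np.ndarray) -> np.ndarray:
    """Return a unit vector pointing from the plate toward the light."""
    d = stick_tip - shadow_tip
    return d / np.linalg.norm(d)

# Example: a 10 cm stick at the plate's origin casting a 10 cm shadow
# along +x implies a light 45 degrees above the horizon, toward -x.
print(light_direction(np.array([0.0, 0.0, 0.1]),
                      np.array([0.1, 0.0, 0.0])))
```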

3 User-Assisted Software Procedure Our program, called "LightRecon," computes the locations and intensities of light sources based on pixel values selected by users from a photograph of a real scene. We also use the commercial rendering application Maya to recover camera information by matching a virtual scene with the photograph. To create a field record, we first collect survey data from the real scene, such as the lighting conditions, the surface properties of objects in the scene, the camera height and angle, the dimensions of the device, the distance between the camera and the device, the film format and size, the lens size, and exposure information. Then we photograph the real scene without any reference objects for later compositing with the computer-generated objects. We also photograph: a) a gazing ball, to obtain additional information about the environment; b) the device, for illumination data; and c) material samples, so we can match the colors of synthetic and real objects. We then run LightRecon to obtain the scene's lighting information, such as the intensities and locations of lights. The software allows us to select pixels of a photograph, on which it performs its calculations. To run LightRecon, we need a scene data file from the 3D replica: a text file containing information about the matched camera and the device. As its final output, it creates a text file providing the lighting information. We provide a prototype scene data file, so users can simply insert the required information based on the instructions in the file.
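LightRecon's actual scene data format is not published, so purely as an illustration the sketch below invents a plain key: value text layout for the fields the slide mentions (the matched camera and the device) and shows how such a prototype file might be read. Every field name here is an assumption, not the tool's real schema.

```python
# Hypothetical illustration only: a made-up "scene data" text file holding
# matched-camera and device parameters, parsed into a small record. The field
# names and units are assumptions for the sake of the example.
from dataclasses import dataclass

@dataclass
class SceneData:
    camera_position: tuple   # world-space camera location from the Maya match
    camera_angle: float      # camera tilt noted in the field record, degrees
    focal_length: float      # lens size from the field record, mm
    device_position: tuple   # center of the white plate in world space
    stick_height: float      # height of the cylindrical stick, meters

def load_scene_data(path: str) -> SceneData:
    fields = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments in the prototype file
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    return SceneData(
        camera_position=tuple(float(v) for v in fields["camera_position"].split(",")),
        camera_angle=float(fields["camera_angle"]),
        focal_length=float(fields["focal_length"]),
        device_position=tuple(float(v) for v in fields["device_position"].split(",")),
        stick_height=float(fields["stick_height"]),
    )
```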

4 Further Refinements To improve the usefulness of the initial light-source intensities we get from LightRecon, we readjust them based on pixel values from both a rendered image and the background photograph. First we render the virtual model of the device with the initial lighting intensities. Then we open both the rendered image and the background photograph in Adobe Photoshop or a similar application and collect a pixel value from each image, specifically pixels on the white surface of the device. We compute the ratio of each color channel value between the two selected pixels and multiply the initial intensities by these ratios to get new intensities for the light sources. Finally, we render the scene again with the updated illumination information and repeat the process, if necessary, until the ratio values approach unity. A similar iterative method enables users to determine colors for synthetic objects based on the photographs of material samples taken in the real scene. The procedure takes into account the lighting environment of the real scene, which affects the color rendition of the material samples. By comparing the rendered and photographed colors, LightRecon computes parameters for modifying the modeled colors. This method provides users with an accessible process for recovering illumination information. In addition, the method helps artists understand the overall concept of image-based lighting techniques. We plan to further develop LightRecon before making it available to others.
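The per-channel ratio adjustment lends itself to a short sketch. The Python below is not LightRecon's code: render_device is a hypothetical stand-in for re-rendering the device in Maya with the current intensities, and the two pixel samples correspond to the ones taken from the device's white surface in each image.

```python
# Minimal sketch of the refinement loop described above (assumed structure):
# scale each light-intensity channel by the photo/rendered pixel ratio and
# repeat until the ratios approach unity.
def refine_intensities(photo_pixel, initial_intensity,
                       render_device, tolerance=0.01, max_iters=10):
    """photo_pixel       -- (r, g, b) sampled from the background photograph
    initial_intensity -- (r, g, b) intensity from LightRecon's first pass
    render_device     -- callable: intensity -> (r, g, b) rendered pixel
    """
    intensity = list(initial_intensity)
    for _ in range(max_iters):
        rendered = render_device(intensity)
        ratios = [p / max(r, 1e-6) for p, r in zip(photo_pixel, rendered)]
        if all(abs(q - 1.0) < tolerance for q in ratios):
            break  # rendered white surface now matches the photograph
        intensity = [i * q for i, q in zip(intensity, ratios)]
    return intensity

# Toy check with a stand-in "renderer" that is linear in intensity
# (each rendered channel is 200 * intensity):
photo = (180.0, 170.0, 160.0)
print(refine_intensities(photo, (1.0, 1.0, 1.0),
                         lambda i: [200.0 * c for c in i]))
# -> roughly [0.9, 0.85, 0.8]; the loop stops after a single correction
```

With a perfectly linear renderer the loop converges in one correction, which is why the slide notes that repetition is needed only "if necessary"; the same loop structure would apply to the material-sample color matching described above, with surface color parameters in place of light intensities.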

5 Discuss this article in the Architecture Forum... Jaemin Lee and Ergun Akleman are with the Visualization Sciences Program of the Department of Architecture at Texas A&M University. A longer version of this article first appeared in Thresholds: Design, Research, Education and Practice in the Space between the Physical and the Virtual, Proceedings of the 2002 Annual Conference of the Association for Computer-Aided Design in Architecture, edited by George Proctor.

