Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples. Christopher D. Kulla, James D. Tucek, Reynold J. Bailey, Cindy M. Grimm.

Presentation transcript:

Using Texture Synthesis for Non-Photorealistic Shading from Paint Samples
Christopher D. Kulla, James D. Tucek, Reynold J. Bailey, Cindy M. Grimm

Basic Idea
Input:
- A shaded 3D computer-generated model.
- A user-provided paint sample specifying the change from dark to light, scanned in from traditional art media or created with a 2D paint program.
Output:
- The model rendered in a style similar to that of the paint sample. Texture synthesis is used to generate enough "paint" to cover the model (based on the Image Quilting technique of Efros and Freeman, 2001).
Techniques:
- Image-based texture synthesis.
- View-aligned 3D texture projection.
- View-dependent interpolation.

Previous Work
Color-based techniques:
- Cartoon shading, Lake et al.
- Technical illustration, Gooch et al.
- The Lit Sphere, Gooch et al.
Texture-based techniques:
- Hatching, Praun et al.
- Stippling, Deussen.
- Half-toning, Freudenberg.
- Charcoal, Majumder.
Color/texture combined techniques:
- Volume texturing, Webb et al.
Stroke-based techniques:
- WYSIWYG NPR, Kalnins et al.
- Painterly rendering, Meier.

Overview
- Paint processing (to extract information for rendering).
- Image-based texture synthesis.
- View-aligned 3D texture projection.
- View-dependent interpolation.

Paint Processing
Paint samples have two distinct properties:
- Color transition.
- Brush texture.
[Figure: original sample; unsorted (streaky) trajectory; sorted (smooth) trajectory; brush texture.]
Processing steps:
- Average every pixel column of the original paint sample. This gives an unsorted trajectory.
- Sort this trajectory to produce a smooth trajectory.
- Subtract the smooth, sorted trajectory from the original sample. This gives the brush texture.
[Figure: user-created paint sample; original distribution; extracted trajectory.]
Creating user-defined "paint samples":
- Add an arbitrary color trajectory to the extracted brush texture.
- Numerous paint samples can be created from the original, increasing artistic freedom and control.

Image-Based Texture Synthesis
Paint is synthesized over the region covered by the model in image space.
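The paint-processing steps described above (column averaging, sorting, subtraction) can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code; the function names are our own, and the sample is assumed to be a grayscale array running dark to light across its columns.

```python
import numpy as np

def extract_trajectory_and_texture(sample):
    """Split a dark-to-light paint sample (H x W grayscale array)
    into a smooth shading trajectory and a residual brush texture."""
    # Average every pixel column -> unsorted (streaky) trajectory.
    unsorted_traj = sample.mean(axis=0)            # shape (W,)
    # Sort the column averages -> smooth (monotone) trajectory.
    smooth_traj = np.sort(unsorted_traj)
    # Subtract the smooth trajectory from the sample -> brush texture.
    brush_texture = sample - smooth_traj[np.newaxis, :]
    return smooth_traj, brush_texture

def make_paint_sample(brush_texture, trajectory):
    """Create a user-defined 'paint sample' by adding an arbitrary
    color trajectory back onto an extracted brush texture."""
    return brush_texture + np.asarray(trajectory)[np.newaxis, :]
```

Swapping different trajectories into `make_paint_sample` is what gives the artistic freedom mentioned above: one scanned brush texture can yield many shading styles.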
This region is given by an ID buffer, and the shaded model is used as a guide. Blocks are placed so that they overlap, and a "minimum error cut" is performed between blocks to minimize visual discontinuity. The color component and the texture component are generated separately, then added together to produce the final image.
[Figure: shaded model; ID buffer; color component + texture component = final image.]
Advantages:
- Individual frames have high quality.
Disadvantages:
- Slow rendering time: 20 seconds to 1 minute per frame, due to the texture synthesis step.
- Animations suffer from the "shower door effect", which results from naively re-synthesizing each frame from scratch. A constraint can be added that requires each block to match the previous frame as much as possible, but this increases rendering time and does not completely eliminate the effect.

View-Aligned 3D Texture Projection
Recent advances in graphics hardware allow the use of volume (3D) textures. A volume texture is simply a stack of 2D textures. Texture synthesis is done as a preprocessing step:
- The input sample is divided into 8 regions of roughly constant shade.
- Image Quilting is used to synthesize larger versions (512 x 512) of each region.
- Each synthesized image is then processed to ensure that it is tileable, so that there are no visible seams when the texture repeats over the image.
- A 3D texture is created by stacking the tileable images in order of increasing shade value.
[Figure: input sample; 8 synthesized tileable regions (512 x 512); 3D texture; example rendering.]
Horizontal and vertical texture coordinates are generated by mapping horizontal and vertical screen coordinates, respectively, to the interval [0, 511]. The third texture coordinate (depth) is generated by mapping the shading values of the model to the interval [0, 7]. The hardware automatically performs blending between the levels of the 3D texture.
Advantages:
- Almost matches the quality of image-based texture synthesis.
- Runs in real-time.
- Fair degree of frame-to-frame coherence.
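The texture-coordinate mapping described above can be sketched as follows. This is an illustrative assumption of how the mapping might be computed (the slides give only the target intervals [0, 511] and [0, 7]); in particular, wrapping the screen coordinates with a modulo relies on the synthesized images being tileable.

```python
def volume_texture_coords(x, y, shade, size=512, levels=8):
    """Generate 3D texture coordinates for view-aligned projection.

    x, y  : integer screen (pixel) coordinates
    shade : shading value of the model, normalized to [0, 1]

    The horizontal/vertical coordinates tile the screen over the
    texture (tileable images hide the seams); the depth coordinate
    selects, and lets the hardware blend between, the shade levels.
    """
    u = x % size              # screen x -> [0, size-1]
    v = y % size              # screen y -> [0, size-1]
    w = shade * (levels - 1)  # shade in [0, 1] -> [0, levels-1]
    return u, v, w
```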
Disadvantages:
- Lengthy preprocessing time: synthesizing eight 512 x 512 textures and making each tileable may take as long as 15 minutes.

View-Dependent Interpolation
Specific textures are assigned to the "important" views of the model:
- The user specifies which n views are important. Every face in the model must appear in at least one of these views, ensuring that there are no gaps (unpainted regions) in the resulting image. Typically 12 to 15 views are sufficient.
- Image Quilting is used to generate a 2D texture for each of these n views.
- Assume v is the first view synthesized. Some subset of the faces in v may also be present in view v+1. The texture associated with these faces is copied over to v+1 and used as a guide for synthesizing the remaining faces of v+1. This improves frame-to-frame coherence, but texture distortion may arise because a face in v does not necessarily have the same shape or size in v+1, due to the curvature of the model.
To render a particular view, weights are assigned to each of the n 2D textures based on how much the viewing direction associated with that texture differs from the current viewing direction. The highest weight is assigned to the texture that most closely matches the current viewing direction. A 3D texture is created by stacking the n 2D textures, and these weights are used to blend the textures together to create the final image.
Advantages:
- Runs in real-time.
- Good frame-to-frame coherence.
Disadvantages:
- Lengthy preprocessing time, depending on how many views the user specifies as "important": 20 seconds to 1 minute per view.
- Some loss of texture quality due to the distortion necessary to fit the curvature of the model.
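The view-dependent weighting can be sketched as below. The slides say only that weights depend on how much each texture's viewing direction differs from the current one; the specific choice here (clamped dot products, normalized to sum to 1) is our assumption, not the paper's stated formula.

```python
import numpy as np

def view_blend_weights(view_dirs, current_dir):
    """Weight each key view's texture by how closely its viewing
    direction matches the current viewing direction.

    view_dirs  : (n, 3) array-like of key-view direction vectors
    current_dir: length-3 current viewing direction

    Returns n nonnegative weights summing to 1; the best-matching
    view receives the highest weight.
    """
    dirs = np.asarray(view_dirs, dtype=float)
    dirs = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    cur = np.asarray(current_dir, dtype=float)
    cur = cur / np.linalg.norm(cur)
    # Cosine similarity, clamped so back-facing views get no weight.
    w = np.clip(dirs @ cur, 0.0, None)
    total = w.sum()
    if total == 0.0:
        return np.full(len(w), 1.0 / len(w))  # degenerate fallback
    return w / total
```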