
Processing Images and Video for an Impressionist Effect Author: Peter Litwinowicz Presented by Jing Yi Jin

Objective
- Automatically generate a hand-drawn animation from a video clip
- Impressionist style
- User intervention only in the first frame
- Exploit temporal coherence
- Input: video clip. Output: hand-drawn impressionist-style animation.

Inspiration
“Catch the fleeting impression of sunlight on objects. And it was this out-of-doors world he wanted to capture in paint – as it actually was at the moment of seeing it, not worked up in the studio from sketches.” --- Kringston

Advantages
- Presents a process that uses optical flow fields to generate the animation
- The first to produce a temporally coherent painterly animation
- …
- Describes a new technique to orient strokes from frame to frame
- Uses algorithms to manage the stroke density

Structure of the presentation
- Previous works
- Current algorithm
  - Stroke rendering and clipping
  - Stroke orientation
  - Animation
- Conclusion

Previous works
- Haeberli, 90
  - Computer-assisted transformation of pictures
  - Extensive human interaction: the user specifies the number and position of strokes
  - Orientation, size, and color of strokes controlled in an interactive or non-interactive way
  - Static images only; difficult to extend to a sequence of images
  - Inspiration: modify this approach to produce temporally coherent animation

Previous works (2)
- Salisbury, 94 and 96
  - Pen-and-ink patterns
  - Picture controlled either in an interactive or non-interactive way
  - Static images only; temporal coherence not straightforward
  - Perceived edges are preserved

Previous works (3)
- Hsu, 94
  - “Skeletal strokes”, used to produce 2-1/2 D animation
  - All animation is key-framed by the user

Previous works (4)
- Meier, 96
  - Transforms 3D geometry into animations
  - Temporal coherence is both interesting and important
  - Inspiration: use a video sequence as the input instead

Rendering strokes
- Generate strokes that cover the output image

Rendering strokes
- A stroke is an antialiased line with:
  - Center at (cx, cy)
  - Length: length
  - Thickness: radius
  - Orientation: theta

Rendering strokes
- User-defined initial spacing distance between stroke centers
- Color: the bilinearly interpolated color of the original image at (cx, cy), in the range [0, 255]
- Randomized stroke order (placement is sketched below)
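A minimal sketch of the initial placement in Python: one stroke center every `spacing` pixels, drawn in randomized order. The function and parameter names are my own, not the paper's.

```python
import random

def initial_centers(width, height, spacing):
    """Stroke centers on a regular grid, in randomized draw order."""
    centers = [(x, y) for y in range(0, height, spacing)
                      for x in range(0, width, spacing)]
    random.shuffle(centers)  # randomized stroke order
    return centers
```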

Rendering strokes
- Random perturbations (see the sketch below):
  - Add Δlength to length and Δradius to radius
  - Perturb the color by Δr, Δg, Δb, each in the range [-15, 15]
  - Scale the perturbed color by Δintensity, in the range [0.85, 1.15]
  - Clamp the resulting color to [0, 255]
  - Perturb theta by Δtheta in the range [-15°, 15°]
  - All of this information is stored in a per-stroke data structure
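A minimal sketch of the per-stroke data structure and perturbations, in Python. The field names are assumptions; the slide specifies only the perturbation ranges, and the Δlength/Δradius amounts are passed in since their ranges are not given here.

```python
import random
from dataclasses import dataclass

@dataclass
class Stroke:
    cx: float      # center x
    cy: float      # center y
    length: float
    radius: float  # thickness
    theta: float   # orientation in degrees
    color: tuple   # (r, g, b), each in [0, 255]

def perturb(stroke, d_length, d_radius):
    """Apply the random perturbations described above to one stroke."""
    stroke.length += d_length
    stroke.radius += d_radius
    # Perturb each channel by a value in [-15, 15], scale by an
    # intensity factor in [0.85, 1.15], then clamp to [0, 255].
    scale = random.uniform(0.85, 1.15)
    stroke.color = tuple(
        min(255.0, max(0.0, (c + random.uniform(-15, 15)) * scale))
        for c in stroke.color
    )
    # Perturb the orientation by Δtheta in [-15°, 15°].
    stroke.theta += random.uniform(-15.0, 15.0)
    return stroke
```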

Clipping and rendering
- Goal: preserve detail and silhouettes
- Inspired by Salisbury 94, where strokes are clipped to edges provided by the user
- Here, no user interaction: image processing techniques locate the edges

Clipping and rendering

Algorithm:
1. Derive an intensity image: (30*r + 59*g + 11*b) / 100
2. Blur the intensity image with a Gaussian kernel to reduce noise
   - Larger kernel: loss of detail
   - Smaller kernel: noise is retained
   - Kernel width is specified by the user (the example shown uses a kernel radius of 11)

Clipping and rendering
3. Filter the resulting image with the Sobel filter (steps 1-3 are sketched below):
   Sobel(x, y) = magnitude(Gx, Gy), where (Gx, Gy) = ( ∂I(x,y)/∂x, ∂I(x,y)/∂y )
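A minimal sketch of steps 1-3 in Python. The slides specify the operations but no implementation; OpenCV and the function name here are my assumptions.

```python
import cv2
import numpy as np

def edge_image(frame_bgr, kernel_radius):
    """Intensity -> Gaussian blur -> Sobel gradient magnitude."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    intensity = (30 * r + 59 * g + 11 * b) / 100.0            # step 1
    ksize = 2 * kernel_radius + 1                             # odd width
    blurred = cv2.GaussianBlur(intensity, (ksize, ksize), 0)  # step 2
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)                 # step 3
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
    return np.hypot(gx, gy)  # Sobel magnitude image
```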

Clipping and rendering
4. Determine the stroke endpoints (x1, y1) and (x2, y2) (sketched below):
   - Start at (cx, cy)
   - “Grow” the line along its orientation until either the maximum length is reached or an edge is detected in the smoothed image
   - An edge is found where the Sobel value decreases in the direction the stroke is being grown
   - Similar to the edge-following process used in the Canny operator
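A minimal sketch of the growing loop in Python, using the Sobel magnitude image from the previous sketch. The border check and the step-by-one-pixel growth are my assumptions; the slide only specifies the stopping conditions.

```python
import math

def clip_stroke(sobel, cx, cy, theta_deg, max_half_length):
    """Grow a stroke from (cx, cy) in both directions along theta,
    stopping at max_half_length or where the Sobel value decreases."""
    dx = math.cos(math.radians(theta_deg))
    dy = math.sin(math.radians(theta_deg))
    ends = []
    for sx, sy in ((dx, dy), (-dx, -dy)):  # grow in both directions
        x, y = cx, cy
        last = sobel[int(y), int(x)]
        for _ in range(int(max_half_length)):
            nx, ny = x + sx, y + sy
            if not (0 <= nx < sobel.shape[1] and 0 <= ny < sobel.shape[0]):
                break                      # image border (my addition)
            cur = sobel[int(ny), int(nx)]
            if cur < last:                 # Sobel value decreased: edge
                break
            x, y, last = nx, ny, cur
        ends.append((x, y))
    return ends[0], ends[1]                # (x1, y1), (x2, y2)
```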

Clipping and rendering
5. Render the stroke with endpoints (x1, y1) and (x2, y2) (sketched below):
   - Assign the stroke the original image color at (cx, cy), perturbed and clamped as above
   - Antialias with a linear falloff over a 1.0-pixel radius region
   - A stroke is drawn even if it is completely surrounded by edges
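A minimal sketch of the antialiasing in Python: per-pixel coverage is 1 inside the stroke and falls off linearly to 0 over a 1-pixel band. The distance-to-segment geometry is standard and mine; the slide specifies only the linear falloff.

```python
import numpy as np

def stroke_alpha(x1, y1, x2, y2, radius, width, height):
    """Coverage mask: 1 inside the stroke, linear falloff over 1 px."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float32)
    vx, vy = x2 - x1, y2 - y1
    seg_len2 = max(vx * vx + vy * vy, 1e-6)   # guard degenerate strokes
    # Closest point on the segment for every pixel.
    t = np.clip(((xs - x1) * vx + (ys - y1) * vy) / seg_len2, 0.0, 1.0)
    dist = np.hypot(xs - (x1 + t * vx), ys - (y1 + t * vy))
    return np.clip(radius + 1.0 - dist, 0.0, 1.0)
```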

Clipping and rendering
- Using brush textures:
  - Render brush strokes with textured brush images
  - Construct a rectangle surrounding the clipped line with a given offset
  - Current approach: fixed offset
  - Proposed approach: scale the offset based on the stroke length

Clipping and rendering

Brush stroke orientation
- Provide the option of drawing in the direction of (nearly) constant color
- Draw strokes normal to the gradient direction (of the intensity image):
  - Gradient direction: direction of most change
  - Normal to the gradient: direction of zero change
- A Gaussian kernel is used for the gradient calculation

Brush stroke orientation
- In regions of constant color, interpolate the directions defined at the region’s boundaries:
  - “Throw out” the gradients where |Gx| < 3.0 or |Gy| < 3.0
  - Interpolate the surrounding directions with a thin-plate spline
- At each (cx, cy), the modified gradient (Gx, Gy) is bilinearly interpolated
- Δtheta is then added to theta

Brush stroke orientation (pipeline summary, sketched below)
1. Gaussian filter to calculate the gradient
2. Interpolate the gradient where |Gx| < 3.0 or |Gy| < 3.0
3. Bilinearly interpolate the modified gradient at each stroke center
4. Add Δtheta to theta
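A rough sketch of this pipeline in Python. Note one loud substitution: the thin-plate spline interpolation is replaced here by repeated neighborhood averaging, which merely diffuses the boundary directions into the weak region; that stand-in is mine, not the paper's.

```python
import cv2
import numpy as np

def stroke_angles(intensity, threshold=3.0, fill_iters=100):
    """Per-pixel stroke angle (degrees), normal to the image gradient."""
    blurred = cv2.GaussianBlur(intensity, (11, 11), 0)
    gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
    # "Throw out" gradients in near-constant regions (the slide's test).
    weak = (np.abs(gx) < threshold) | (np.abs(gy) < threshold)
    gx[weak], gy[weak] = 0.0, 0.0
    # Crude stand-in for the thin-plate spline: repeatedly average so
    # the surrounding directions diffuse into the weak region.
    for _ in range(fill_iters):
        bx, by = cv2.blur(gx, (3, 3)), cv2.blur(gy, (3, 3))
        gx[weak], gy[weak] = bx[weak], by[weak]
    # Strokes run normal to the gradient: rotate by 90 degrees.
    return np.degrees(np.arctan2(gy, gx)) + 90.0
```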

Brush stroke orientation
- Result:
  - The method makes strokes look glued to the objects
  - Much better than keeping all strokes at a single fixed orientation
  - The user has both options

Frame-to-Frame coherence
- In Meier:
  - “Particles” on the 3D surface serve as the stroke centers
  - The 3D surface normal guides the brush orientation
- Here the input is a video clip, so there is no a priori information about pixel movement
- The process:
  - First frame: the process described previously
  - Subsequent frames: calculate the optical flow vector field (a subclass of motion estimation techniques) between consecutive images, assuming constant illumination and ignoring occlusion, and move the stroke centers along it (sketched below)
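A minimal sketch of moving stroke centers with a dense flow field, in Python. The paper's specific optical-flow method is not reproduced here; OpenCV's Farneback flow is a readily available stand-in, and sampling the flow at the rounded center pixel is my simplification.

```python
import cv2
import numpy as np

def move_centers(prev_gray, next_gray, centers):
    """Displace (N, 2) stroke centers along a dense optical flow field."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    ix = np.clip(centers[:, 0].astype(int), 0, flow.shape[1] - 1)
    iy = np.clip(centers[:, 1].astype(int), 0, flow.shape[0] - 1)
    return centers + flow[iy, ix]   # add (dx, dy) at each center
```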

Frame-to-Frame coherence

- Problems after moving the strokes:
  - Strokes near boundaries become unnecessarily dense
  - Other regions are not dense enough
- Solution: Delaunay triangulation

Frame-to-Frame coherence
- Delaunay triangulation:
  - Covers the convex hull of the stroke centers with triangles
  - Triangles are tested against a maximum-area constraint (see the sketch below)
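A minimal sketch of the area test in Python. SciPy's Delaunay triangulation is a stand-in choice (the slides name no library), and the function name is mine.

```python
import numpy as np
from scipy.spatial import Delaunay

def oversized_triangles(centers, max_area):
    """Indices of Delaunay triangles whose area exceeds the maximum."""
    tri = Delaunay(centers)          # triangulates the convex hull
    big = []
    for simplex in tri.simplices:
        (ax, ay), (bx, by), (cx_, cy_) = centers[simplex]
        area = 0.5 * abs((bx - ax) * (cy_ - ay) - (by - ay) * (cx_ - ax))
        if area > max_area:          # sparse region: needs new strokes
            big.append(simplex)
    return big
```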

Frame-to-Frame coherence
- Generate new strokes:
  - Subdivide the mesh until no triangle has an area greater than the specified maximum
  - Use the new vertices as new stroke centers
  - Generate each new stroke’s length, color, angle, and intensity as in the first frame, adding random amounts
- Eliminate strokes in overly dense regions (see the sketch below):
  - Remove a stroke when the distance between two strokes is less than a user-specified length
  - Update the remaining strokes by repeating the distance calculation with the replacement point
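A minimal sketch of the density test in Python: any stroke whose neighbor lies closer than the user-specified distance is greedily dropped. The KD-tree is my choice for the distance queries; the slides specify only the distance criterion.

```python
from scipy.spatial import cKDTree

def cull_dense(centers, min_dist):
    """Greedily drop strokes whose neighbor is closer than min_dist."""
    tree = cKDTree(centers)
    removed, keep = set(), []
    for i, p in enumerate(centers):
        if i in removed:
            continue
        keep.append(i)                          # stroke i survives
        for j in tree.query_ball_point(p, min_dist):
            if j != i:
                removed.add(j)                  # too close to stroke i
    return keep                                 # indices of kept strokes
```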

Frame-to-Frame coherence
- Two lists of brush strokes:
  - Old ones: carried over from the previous frame
  - New ones: generated in the sparse regions
- Randomize the order of the new strokes and distribute them uniformly among the old ones
- What if the new strokes were always drawn behind the old ones? A visible edge would appear between old and new strokes, producing temporal scintillation.

Discussion
- Time to produce each frame averaged 81 seconds on a Macintosh 8500 running at 180 MHz
  - Brush radii in the range [ ]
  - 76,800 (640/2 * 480/2) strokes initially
  - 120,000 strokes on average
- An important step toward automatically producing temporally coherent “painterly” animations
- Remaining issues:
  - The draw order of new strokes can cause scintillation
  - The presence of noise can cause scintillation
  - Placement from frame to frame is not ideal (limited by the lack of knowledge of the scene)

Discussion
- For the first time, temporal coherence in the input is used to drive brush stroke placement
- Applying the technique to 3D objects would be interesting
  - Would enable animation with even greater temporal coherence