Processing Images and Video for an Impressionist Effect
Peter Litwinowicz, Apple Computer, Inc. SIGGRAPH 1997
Automatic production of "painterly" animations from video clips by extending existing still-image algorithms.


Processing Images and Video for an Impressionist Effect
Goals: automatic production of "painterly" animations from video clips; extend existing algorithms for producing "painterly" still images to create temporally coherent animations; reduce user interaction to a bare minimum.

Methodology Overview
Brush strokes are clipped to edges detected in the original image sequence and are oriented normal to the gradient direction of the original image. Scattered-data interpolation is used where the gradient is near zero. A brush-stroke list is maintained and manipulated using optical-flow fields to enhance temporal coherence.

Rendering Strokes
Strokes are rendered as anti-aliased lines centered at a point, with a given length, radius, and orientation. A stroke's color is determined by bilinear interpolation of the original image's color at that point. Strokes are spaced apart by a user-specified distance and are drawn in random order to help achieve a hand-drawn look.
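The per-stroke color lookup can be sketched as a small bilinear-interpolation helper (a minimal sketch, not the paper's code; `bilinear_sample` is a hypothetical name, NumPy assumed):

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Sample the image at subpixel (x, y) by bilinear interpolation.
    x indexes columns, y indexes rows; works for gray or RGB arrays."""
    h, w = image.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot
```

The same helper also serves later when stroke centers land on subpixel positions after optical-flow advection.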

Randomizing Strokes
Random amounts are assigned to the length and radius of each stroke, within a user-specified range. Colors are perturbed by a random amount, usually in [-15, 15] for each of r, g, b, then scaled by an intensity factor usually in [0.85, 1.15], and finally clamped to [0, 255]. The orientation is randomized by adding a random angle in [-15°, 15°]. All of these random values are stored in a per-stroke data structure and are not regenerated from frame to frame.
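A minimal sketch of how per-stroke perturbations could be generated once and applied each frame (function and field names are hypothetical, not from the paper):

```python
import random

def make_stroke_perturbation(rng, color_delta=15, angle_delta=15.0,
                             scale_range=(0.85, 1.15)):
    """Generate a stroke's random offsets once; reuse them every frame
    so the stroke's look stays stable over time."""
    return {
        "d_rgb": [rng.randint(-color_delta, color_delta) for _ in range(3)],
        "intensity": rng.uniform(*scale_range),
        "d_angle": rng.uniform(-angle_delta, angle_delta),
    }

def perturb_color(rgb, p):
    """Apply the stored offsets: add the per-channel delta, scale by the
    intensity factor, then clamp each channel to [0, 255]."""
    return [min(255, max(0, int((c + d) * p["intensity"])))
            for c, d in zip(rgb, p["d_rgb"])]
```

Storing the perturbation rather than re-rolling it each frame is what keeps strokes from flickering in the animation.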

Clipping and Rendering
1. An intensity image is created from the original color image using the equation: intensity = (30*red + 59*green + 11*blue) / 100.
2. The intensity image is blurred using a Gaussian kernel, or a B-spline approximation to the Gaussian.
3. The resulting blurred image is Sobel filtered, where the value of the Sobel filter is Sobel(x, y) = magnitude(Gx, Gy).
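Steps 1 and 2 can be sketched as follows (a minimal sketch assuming NumPy; the 1-4-6-4-1 binomial kernel stands in for the B-spline approximation to the Gaussian mentioned above):

```python
import numpy as np

def intensity_image(rgb):
    """Luminance using the slide's weights: (30*R + 59*G + 11*B) / 100."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (30.0 * r + 59.0 * g + 11.0 * b) / 100.0

def bspline_blur(img):
    """Separable 1-4-6-4-1 binomial kernel, a B-spline approximation to a
    Gaussian, applied along rows then columns with edge padding."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    def blur_axis(a, axis):
        pad = [(0, 0)] * a.ndim
        pad[axis] = (2, 2)
        ap = np.pad(a, pad, mode="edge")
        return np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="valid"), axis, ap)
    return blur_axis(blur_axis(img, 0), 1)
```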

Sobel Filtering
The Sobel filter consists of two kernels that detect horizontal and vertical changes in an image.
The magnitude of an edge: M_Sobel[x][y] = |Gh[x][y]| + |Gv[x][y]|
The direction of an edge: φ_Sobel[x][y] = tan⁻¹(Gv[x][y] / Gh[x][y])
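A direct sketch of the two-kernel Sobel filter with the magnitude and direction defined above (NumPy assumed; `arctan2` is used instead of the plain arctangent so that Gh = 0 is handled safely):

```python
import numpy as np

KH = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal changes
KV = KH.T                                            # vertical changes

def sobel(image):
    """Return (|Gh| + |Gv|) edge magnitude and edge direction in radians.
    Border pixels are left at zero for simplicity."""
    h, w = image.shape
    gh = np.zeros((h, w))
    gv = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            gh[y, x] = np.sum(KH * patch)
            gv[y, x] = np.sum(KV * patch)
    mag = np.abs(gh) + np.abs(gv)
    direction = np.arctan2(gv, gh)
    return mag, direction
```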

Clipping and Rendering, Cont'd
4. Start the line at the center point and "grow" it in its orientation direction until either an edge is reached or the maximum length is attained. An edge is reached when the Sobel value decreases in the direction in which the line is growing.
5. Once the endpoints are determined, the stroke is drawn using the perturbations stored in its stroke data structure. A linear falloff over a 1.0-pixel-radius region is used for anti-aliasing (alpha transitions from 1 to 0).
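The "grow until the Sobel value decreases" rule in step 4 can be sketched like this (a simplified one-direction version; in the paper the line grows from the center in both directions, and `clip_stroke` is a hypothetical name):

```python
import math

def clip_stroke(center, angle_deg, max_len, sobel_mag, step=1.0):
    """Grow a line from its center along its orientation until the Sobel
    magnitude starts to decrease (an edge was just crossed), the image
    border is hit, or half the maximum length is reached."""
    x, y = center
    dx = math.cos(math.radians(angle_deg)) * step
    dy = math.sin(math.radians(angle_deg)) * step
    h, w = len(sobel_mag), len(sobel_mag[0])
    last = sobel_mag[int(y)][int(x)]
    length = 0.0
    while length < max_len / 2:
        nx, ny = x + dx, y + dy
        if not (0 <= nx < w and 0 <= ny < h):
            break
        cur = sobel_mag[int(ny)][int(nx)]
        if cur < last:  # magnitude dropped: stop at the edge
            break
        last = cur
        x, y, length = nx, ny, length + step
    return (x, y)
```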

Brush Stroke Orientation
Strokes are drawn with constant color in the gradient-normal direction. Brush strokes in a region of constant color smoothly interpolate the directions defined at the region's boundaries. A thin-plate spline is used for the interpolation because the data cannot be assumed to lie on a uniformly spaced grid. The gradient at a stroke's center point is bilinearly interpolated, and the direction angle is computed as arctan(Gy/Gx) + 90° (normal to the gradient). The stored perturbation is added, and the result becomes this stroke's direction angle.
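The gradient-normal angle computation can be sketched as (hypothetical helper; `atan2` is used so a zero Gx does not divide by zero):

```python
import math

def stroke_angle(gx, gy, d_angle=0.0):
    """Stroke direction in degrees: normal to the image gradient
    (arctan(Gy/Gx) + 90°), plus the stroke's stored random perturbation."""
    return math.degrees(math.atan2(gy, gx)) + 90.0 + d_angle
```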

Animating the Images Since there is no a priori information about pixel movement, vision techniques are applied to guide the brush strokes. Optical flow vector fields are utilized to determine pixel movement. Brush stroke centers are mapped to subpixel locations in the new image.
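Moving stroke centers by the flow field can be sketched as follows (a minimal sketch; `flow[y][x]` holding a per-pixel `(dx, dy)` displacement is an assumed representation, and a real implementation would interpolate the flow at subpixel centers):

```python
def advect_strokes(centers, flow):
    """Move each stroke center by the optical-flow vector at its pixel.
    Centers land on subpixel locations in the new image."""
    moved = []
    for (x, y) in centers:
        dx, dy = flow[int(y)][int(x)]
        moved.append((x + dx, y + dy))
    return moved
```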

Handling Sparse Regions
After applying the flow field, some regions may be too sparsely populated with strokes to provide the desired coverage. A Delaunay triangulation of the stroke centers is used to find triangles whose area exceeds a specified maximum, and new stroke centers are added to reduce the area of those triangles. The Delaunay triangulation has the property that the circumcircle (circumsphere) of every triangle (tetrahedron) contains no other points of the triangulation.
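Given a triangulation (e.g. produced by `scipy.spatial.Delaunay`), the refinement step can be sketched as adding a new stroke center at the centroid of each oversized triangle (a simplified, hypothetical helper; the paper's exact insertion strategy may differ):

```python
def refine_sparse(points, triangles, max_area):
    """For each triangle with area greater than max_area, emit its
    centroid as a new stroke center. `triangles` holds index triples
    into `points`, as a Delaunay library would return."""
    def area(a, b, c):
        return abs((b[0] - a[0]) * (c[1] - a[1])
                   - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    new_points = []
    for i, j, k in triangles:
        a, b, c = points[i], points[j], points[k]
        if area(a, b, c) > max_area:
            cx = (a[0] + b[0] + c[0]) / 3.0
            cy = (a[1] + b[1] + c[1]) / 3.0
            new_points.append((cx, cy))
    return new_points
```

Re-triangulating and repeating would shrink all triangles below the threshold.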

Delaunay Triangulation

Handling Dense Regions
Densely packed strokes greatly slow down the rendering process. The edge list of the triangulation is traversed; if the distance between two stroke centers is less than a user-specified distance, the stroke drawn closer to the back is removed. The display list of strokes determines drawing order.
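A sketch of the pruning pass over the triangulation's edge list (hypothetical helper; it assumes a stroke's index in the display list is its drawing order, so a lower index means it is drawn first and sits further back):

```python
def prune_dense(strokes, edges, min_dist):
    """Walk the triangulation's edge list; when two stroke centers are
    closer than min_dist, drop the one drawn further back (the lower
    display-list index)."""
    removed = set()
    for i, j in edges:
        if i in removed or j in removed:
            continue
        (x1, y1), (x2, y2) = strokes[i], strokes[j]
        if (x1 - x2) ** 2 + (y1 - y2) ** 2 < min_dist ** 2:
            removed.add(min(i, j))  # earlier in display list = behind
    return [s for k, s in enumerate(strokes) if k not in removed]
```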

Potential Problems
The optical-flow estimation method employed by Litwinowicz assumes constant lighting and ignores occlusion, so objects can appear and disappear in the output. Stroke movement is only as good as the motion-estimation technique. Noise in the images causes noise in the animations.

Possible Enhancements and Future Work
Scale the offset of the stroke based on its length. Implement additional rendering styles by modifying the stroke-drawing algorithm to emulate different artistic styles. Apply the technique to 3D models, where the motion of objects in the scene is known a priori; most of the vision algorithms could then be removed and replaced with data structures holding the necessary motion information for every model in the animation.