Multiscale Moment-Based Painterly Rendering Diego Nehab and Luiz Velho
Overview of presentation
Introduction
–Moment-based painterly rendering
Original contributions
–Multiscale approach
–Parametrized dithering
–Image abstraction
Results, conclusions and future work
Review of MBPR
Goal: automatically create painting-like images from digital photographs
Proceed as an artist who progressively strokes a canvas
–Each stroke approximates a neighborhood of the input image
First step: analyze the input image and compute a stroke list
Second step: blend the strokes together to produce the final image
Analysis step
Determine stroke distribution
–More strokes close to high frequencies
–Do not allow gaps larger than the stroke size
Compute parameters for each stroke
–Color is given by the input color at the stroke position
–Remaining parameters come from image-moment theory
Stroke distribution
Stroke area image
–For each pixel, shows the area of the stroke at that position
–Dark values correspond to small strokes…
–…which in turn correspond to high frequencies
Stroke positions image
–Carefully dithered version of the stroke area image
–Density inversely proportional to stroke areas
–No large empty regions
Stroke area image
Dark regions mean smaller strokes, or higher frequencies
The size of the neighborhoods being considered determines the range of frequencies captured
Stroke positions image
High frequencies yield more strokes
No holes larger than the neighborhood size
Stroke parameters
Position within the neighborhood (xc, yc)
Width W and length L
Orientation
Color
Template alpha map is fixed throughout
Color distance image
Given a color and a neighborhood, compute the distance from that color to the color of each pixel
Captures the shape of the stroke
Computing stroke parameters
Color is the pixel color at the neighborhood center
The remaining parameters correspond to a rectangle similar to the color difference image
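The moment computation above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the inverted-distance weighting, and the normalization constant are assumptions. It weights each pixel by its color similarity to the center pixel, then fits the rectangle whose second-order central moments match those of the weight image (length and width come from the principal moments, orientation from the principal axis).

```python
import numpy as np

def stroke_from_neighborhood(rgb, center):
    """Illustrative sketch: stroke parameters from image moments.

    rgb    -- (H, W, 3) float array, the neighborhood
    center -- (row, col) of the neighborhood center
    """
    color = rgb[center]                        # stroke color = center pixel
    dist = np.linalg.norm(rgb - color, axis=2)
    # inverted color distance: similar pixels weigh more (assumed weighting)
    w = 1.0 - dist / (dist.max() + 1e-9)

    ys, xs = np.mgrid[0:rgb.shape[0], 0:rgb.shape[1]]
    m00 = w.sum()
    xc = (w * xs).sum() / m00                  # centroid = stroke position
    yc = (w * ys).sum() / m00
    mu20 = (w * (xs - xc) ** 2).sum() / m00    # normalized central moments
    mu02 = (w * (ys - yc) ** 2).sum() / m00
    mu11 = (w * (xs - xc) * (ys - yc)).sum() / m00

    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)      # orientation
    d = np.sqrt((mu20 - mu02) ** 2 + 4 * mu11 ** 2)
    length = np.sqrt(6 * (mu20 + mu02 + d))   # rectangle with equal moments
    width = np.sqrt(6 * (mu20 + mu02 - d))
    return (xc, yc), width, length, theta, color
```

The length/width formulas follow from the fact that a uniform rectangle of length L has normalized second moment L²/12 along its long axis, so L = √(12·λ₊) with λ₊ the larger principal moment.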
Synthesis step
Blend the stroke list together to produce the final painted image
What to improve?
Stroke sizes do not vary all that much
–Real color difference images are not high contrast
Large features must be composed of many strokes
–Those larger than the neighborhood size
Too many strokes are used to cover the whole image
Stroke distribution ends up being too uniform
How to improve?
Capture strokes at several different resolutions
–How to prevent high-res strokes from completely overwriting low-res strokes?
Use a parametrized dithering algorithm
–High-res strokes gradually concentrate only on edges
Multi-resolution
Use a pyramid of resolutions to capture strokes over a wider frequency range
Blend high-res levels on top of low-res levels
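The coarse-to-fine blending can be sketched as below. This is a schematic stand-in, not the paper's pipeline: `paint_level` is a hypothetical single-scale renderer, the box-filter pyramid and the global `alpha` are simplifying assumptions, and image dimensions are assumed divisible by 2^(levels−1).

```python
import numpy as np

def downsample2(img):
    # 2x2 box-filter downsample (stand-in for a proper pyramid filter)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2]
                   + img[0::2, 1::2] + img[1::2, 1::2])

def upsample2(img):
    # nearest-neighbor upsample back to the finer grid
    return img.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_paint(image, levels, paint_level, alpha=0.8):
    """Paint the coarsest level first, then blend finer levels on top."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(downsample2(pyramid[-1]))
    canvas = paint_level(pyramid[-1])            # coarsest level
    for img in reversed(pyramid[:-1]):           # coarse to fine
        canvas = upsample2(canvas)
        canvas = alpha * paint_level(img) + (1 - alpha) * canvas
    return canvas
```

In the actual method the fine levels do not cover the canvas uniformly; the parametrized dithering below concentrates their strokes near edges, so low-res strokes survive in smooth regions.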
Parametrized dithering
Transform the area value before dithering
Diffuse error randomly in all directions
Parameter e enhances values close to edges
Parameter s controls the stroke-spreading limit
Both parameters are changed within levels
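A toy version of the idea is sketched below. The transform `(1 - area) ** e` and the random-offset diffusion are illustrative assumptions standing in for the paper's empirical formulas; only the roles of e (enhancement before dithering) and s (how far error may spread) follow the description above.

```python
import numpy as np

def parametrized_dither(area, e, s, rng=None):
    """Toy parametrized dithering (formulas are assumptions).

    area -- stroke area image in [0, 1] (dark = small stroke)
    e    -- enhancement exponent applied before dithering
    s    -- spreading limit: error travels at most s pixels
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = area.shape
    # stroke density is inversely related to stroke area, enhanced by e
    err = (1.0 - area) ** e
    out = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            v = err[y, x]
            out[y, x] = v >= 0.5
            residual = v - (1.0 if out[y, x] else 0.0)
            # diffuse the residual to one random pixel within distance s
            # (error landing on already-visited pixels is lost in this sketch)
            dy, dx = rng.integers(-s, s + 1, size=2)
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) != (y, x):
                err[ny, nx] += residual
    return out
```

Unlike classical Floyd–Steinberg, which diffuses error to a fixed forward stencil, scattering it randomly in all directions avoids directional worm artifacts in the stroke positions.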
Varying the parameters
Empirical formulas adjust the dithering parameters as a function of resolution
(figure: four renderings with 2425, 2453, 5771, and 10294 strokes)
Comparison: single-scale strokes
Comparison: multiscale strokes
Image abstraction
Operations performed are:
–rotation, scaling and blending
–color difference image, stroke area image
Performed over small neighborhoods
Requirements are:
–Avoid copy operations
–Avoid memory allocation
–General enough to be used always
–As simple as possible
Simple structure
The neighborhood representation is uniform and shares its buffer with the original image
All graphics primitives operate equally on images and neighborhoods
Clipping logic is isolated in a single function
No copies needed
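The same structure can be illustrated with numpy, where a slice is a view into the parent buffer; the analogy (and the helper name `clipped_window`) is mine, not the paper's C code. All clipping logic lives in one function, and primitives written against arrays work identically on full images and on neighborhoods, with no copies.

```python
import numpy as np

def clipped_window(img, cy, cx, size):
    """Return a view into img centered at (cy, cx) -- no copy.
    All clipping against the image border happens here."""
    h, w = img.shape[:2]
    half = size // 2
    y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
    x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
    return img[y0:y1, x0:x1]

img = np.zeros((8, 8))
win = clipped_window(img, 0, 0, 5)   # clipped at the corner: 3x3 view
win[...] = 1.0                       # writes through to img, no copy
```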
Results: gallery
Conclusions
The multiscale approach can produce images with fewer strokes and a wider frequency range
The parametrized dithering algorithm provides better control over stroke distribution
The image abstraction provides good performance and simplifies the code
Future work
Let the contribution of low-res levels influence the stroke parameter computation for higher levels
Can we achieve photo-realism, or perhaps use these ideas for image compression?
Explore coherence in stroke lists to help NPR animations