Non-Photorealistic Rendering and Content-Based Image Retrieval
Yuan-Hao Lai, Pacific Graphics (2003)

[Problems of CBIR]
Which low-level feature is best for measuring the similarity of images?
Color is important in human perception, but a color histogram cannot capture the spatial distribution of colors.

[How do humans interpret an image?]
A talented painter gives a painted interpretation of the world.
Plain surfaces are painted with larger strokes.
A painting conveys information about both color and structural properties.

[Stochastic Paintbrush Transformation]
Based on a random search that inserts brush strokes into a generated image at decreasing brush sizes, without predefined models or user interaction.
All brush shapes are rectangular.

[Modified SPT]
Improved in several places for CBIR:
The stroke color, at any brush size, is the majority vote over the stroke area of the original image.
Simulated Annealing is used for the optimization problem (the painting process).
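
A minimal sketch (not the paper's code) of the majority-vote stroke color described above: the color is the most frequent quantized color among the original-image pixels covered by the stroke. The 8-levels-per-channel quantization and the function name are illustrative assumptions.

```python
import numpy as np

def majority_vote_color(image, stroke_mask, levels=8):
    """image: HxWx3 uint8 array; stroke_mask: HxW boolean array covering the stroke."""
    pixels = image[stroke_mask]                      # pixels covered by the stroke
    if pixels.size == 0:
        return np.zeros(3, dtype=np.uint8)
    step = 256 // levels
    quantized = (pixels // step).astype(np.int64)    # coarse color bins per channel
    keys = quantized[:, 0] * levels * levels + quantized[:, 1] * levels + quantized[:, 2]
    values, counts = np.unique(keys, return_counts=True)
    winner = values[np.argmax(counts)]               # most frequent bin (the "majority vote")
    r = winner // (levels * levels)
    g = (winner // levels) % levels
    b = winner % levels
    return (np.array([r, g, b]) * step + step // 2).astype(np.uint8)  # bin-center color
```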

[Simulated Annealing]
Locates a good approximation for a global optimization problem.
Each step replaces the current solution with a random "nearby" solution, accepted with a probability governed by a global parameter T (the temperature).

[Simulated Annealing Algorithm]
Start from an initial state, its energy, and the "best" solution so far.
While time is left and the solution is not good enough:
  Pick some neighbor of the current solution
  Compute its energy
  Randomly decide, based on T, whether to move to the new solution
  If this is a new "best" solution, save it as the best found
Return the best solution.
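
A generic sketch of the loop above (not the paper's exact procedure): `energy` and `random_neighbor` are placeholders supplied by the caller, and the geometric cooling schedule is an assumption.

```python
import math
import random

def simulated_annealing(initial, energy, random_neighbor,
                        t0=1.0, cooling=0.95, max_steps=10_000, good_enough=1e-3):
    state, e = initial, energy(initial)
    best, best_e = state, e
    t = t0
    for _ in range(max_steps):                 # "while time left & not good enough"
        if best_e <= good_enough:
            break
        candidate = random_neighbor(state)     # pick some neighbor
        e_new = energy(candidate)              # compute its energy
        # accept downhill moves always, uphill moves with probability exp(-ΔE / T)
        if e_new < e or random.random() < math.exp((e - e_new) / max(t, 1e-12)):
            state, e = candidate, e_new
        if e < best_e:                         # new "best" solution
            best, best_e = state, e
        t *= cooling                           # cool down
    return best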

[SPT Algorithm]
1. Set the next brush size and initialize T_0
2. Produce the distortion map Δ_n = |L - P_n|
3. Produce ε_n, a smoothed version of Δ_n
4. Randomly choose s and Φ such that ε_n(s) ≥ ϵ (the distortion at s is high enough); set C to the majority-vote color inside the new stroke

[SPT Algorithm (cont.)]
5. Compute the new distortion D'; the stroke is accepted with probability min{1, (D/D')^(1/T_n)}
6. Update Δ_n; if the number of strokes is below the threshold, go to step 4
7. T_{n+1} = 0.8·T_n, n = n+1; if the average Δ_n over the last 10 iterations < threshold δ, go to step 3
8. Go to step 1 until the smallest brush size is finished

[SPT Algorithm (Variables)]
D : distortion between the stroke area and the original image
D' : distortion between the stroke area and the new stroke
T_n : simulated annealing temperature
P_n : current painting
L : original image
Δ_n : distortion map (Δ_n = |L - P_n|)
ε : error image (smoothed version of Δ_n)
s : brush position
Φ : brush orientation
C : brush color
d : brush size
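
A condensed sketch of steps 1-8 using the variable names above, not the paper's implementation: strokes are axis-aligned rectangles, the smoothed error map ε_n and the orientation Φ are skipped, the mean color stands in for the majority vote, and the stopping thresholds are simplified. All function and parameter names here are assumptions for illustration.

```python
import numpy as np

def paint_spt(L, brush_sizes=((24, 8), (12, 4), (6, 2)), eps=10.0,
              outer_iters=30, strokes_per_iter=200, cooling=0.8, t0=1.0, seed=0):
    """L: HxWx3 uint8 original image. Returns the painted image P."""
    rng = np.random.default_rng(seed)
    Lf = L.astype(np.float64)
    H, W, _ = Lf.shape
    P = np.zeros_like(Lf)                                   # current painting P_n
    for (bw, bh) in brush_sizes:                            # step 1: next brush size
        T = t0                                              # initialize T_0
        for _ in range(outer_iters):                        # simplified stopping rule (steps 7-8)
            delta = np.abs(Lf - P).mean(axis=-1)            # step 2: distortion map Δ_n
            for _ in range(strokes_per_iter):
                # step 4, simplified: axis-aligned stroke at a random position
                # (no smoothing ε_n and no orientation Φ in this sketch)
                y = int(rng.integers(0, H - bh))
                x = int(rng.integers(0, W - bw))
                if delta[y:y + bh, x:x + bw].mean() < eps:  # distortion at s must be high enough
                    continue
                patch = Lf[y:y + bh, x:x + bw]
                C = patch.reshape(-1, 3).mean(axis=0)       # stand-in for the majority-vote color C
                D = np.abs(patch - P[y:y + bh, x:x + bw]).sum()   # distortion of current painting
                D_new = np.abs(patch - C).sum()                   # distortion of the new stroke
                ratio = D / max(float(D_new), 1e-9)
                # step 5: accept with probability min{1, (D/D')^(1/T_n)}
                if ratio >= 1.0 or rng.random() < ratio ** (1.0 / T):
                    P[y:y + bh, x:x + bw] = C
                    delta[y:y + bh, x:x + bw] = np.abs(patch - C).mean(axis=-1)  # step 6: update Δ_n
            T *= cooling                                    # step 7: T_{n+1} = 0.8·T_n
    return np.clip(P, 0, 255).astype(np.uint8)
```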

[Why use SPT rendering for CBIR?]
Transformation into brush parameters
Keeps sharp edges, removes details below a size limit
Every part is painted with the largest possible brush
Stroke orientation reflects structural properties
No human intervention or pre-processing

[Similarity Value Algorithm]
1. Pick same-size strokes s_1, s_2 from S_1, S_2
2. Produce sim_col(s_1, s_2), sim_ori(s_1, s_2), sim_pos(s_1, s_2)
3. Sum them with weights w_col, w_ori, w_pos into sim(I_1, I_2)
4. Repeat until running out of same-size strokes
5. n = number of remaining same-size strokes in S_1, S_2
6. Adjust sim(I_1, I_2) by n

[Similarity Value Algorithm (Variables)]
The CIE-L*u*v* color space is used.
I_1, I_2 : original images
S_1, S_2 : stroke sequences
s_1, s_2 : single strokes
sim(I_1, I_2) : similarity value
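
A hedged sketch of steps 1-6 above. Each stroke is assumed to be a dict with 'size', 'color' (CIE-L*u*v*), 'ori' (radians), and 'pos' keys; the per-term similarity functions, the pairing of strokes by order, and the adjustment by the unmatched stroke count n are illustrative guesses, not the paper's exact definitions.

```python
import math

def stroke_similarity(S1, S2, w_col=0.4, w_ori=0.35, w_pos=0.25):
    """S1, S2: stroke sequences (lists of dicts). Returns sim(I_1, I_2) in [0, 1]."""
    total, matched, unmatched = 0.0, 0, 0
    for size in {s['size'] for s in S1} | {s['size'] for s in S2}:
        a = [s for s in S1 if s['size'] == size]
        b = [s for s in S2 if s['size'] == size]
        for s1, s2 in zip(a, b):                            # steps 1-4: pair same-size strokes
            sim_col = 1.0 / (1.0 + math.dist(s1['color'], s2['color']))   # assumed form
            sim_ori = abs(math.cos(s1['ori'] - s2['ori']))                # assumed form
            sim_pos = 1.0 / (1.0 + math.dist(s1['pos'], s2['pos']))       # assumed form
            total += w_col * sim_col + w_ori * sim_ori + w_pos * sim_pos  # step 3: weighted sum
            matched += 1
        unmatched += abs(len(a) - len(b))                   # step 5: n = leftover strokes
    if matched == 0:
        return 0.0
    # step 6: adjust the accumulated similarity by the number of unmatched strokes
    return (total / matched) * (matched / (matched + unmatched))
```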

[Semantic Measurement Algorithm]
Blobworld is applied to obtain regions.
Each stroke group corresponds to a region: the centroids of the strokes in a group all lie inside the corresponding region.

[Blobworld Segmentation]

Find particular objects ("things"), not low-level features ("stuff").
Each region has its own feature description.
The user can specify the importance of each region in the query image.

[Blobworld Segmentation Steps]
1. Extract color, texture, and position features for each pixel.
2. Group pixels into regions by the distribution of these features, using a mixture of Gaussians fitted with Expectation-Maximization.
3. Describe the color distribution and texture of each region for use in a query.
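
A minimal sketch of step 2, assuming scikit-learn is available: per-pixel color and position features are clustered with a mixture of Gaussians fitted by EM. Blobworld's texture features (polarity, anisotropy, contrast) are left out for brevity, so this only approximates the real segmentation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def blobworld_like_regions(image, n_regions=4):
    """image: HxWx3 uint8. Returns an HxW array of region labels."""
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    features = np.column_stack([
        image.reshape(-1, 3).astype(np.float64) / 255.0,   # step 1: color features per pixel
        ys.ravel() / H, xs.ravel() / W,                     # step 1: position features per pixel
    ])
    gmm = GaussianMixture(n_components=n_regions, covariance_type='full',
                          random_state=0).fit(features)     # step 2: mixture of Gaussians via EM
    return gmm.predict(features).reshape(H, W)              # pixel -> region label
```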

[Blobworld Segmentation]

[Querying in Blobworld]
Atomic query – a particular blob to match
Compound query – a conjunction/disjunction of atomic queries, e.g. "like-blob-1" and "like-blob-2"

[Blobworld Compound Query Algorithm]
For each blob b_j in a database image (feature vector v_j):
Mahalanobis distance from v_i to v_j: d_ij = (v_i - v_j)^T Σ^(-1) (v_i - v_j)
Similarity between the two blobs: μ_ij = e^(-d_ij / 2) (1 means the blobs agree in all features)
Take μ_i = max_j μ_ij
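
A small sketch of the scoring above: the Mahalanobis distance between a query blob's feature vector and each database blob is mapped to a similarity, and the best match is kept. The covariance matrix Σ is assumed to be supplied by the caller.

```python
import numpy as np

def blob_score(v_i, db_blobs, sigma):
    """v_i: query blob feature vector; db_blobs: list of feature vectors; sigma: covariance Σ."""
    sigma_inv = np.linalg.inv(sigma)
    best = 0.0
    for v_j in db_blobs:
        diff = v_i - v_j
        d_ij = float(diff @ sigma_inv @ diff)    # Mahalanobis distance d_ij
        mu_ij = np.exp(-d_ij / 2.0)              # μ_ij = e^(-d_ij/2); 1 when all features match
        best = max(best, mu_ij)                  # μ_i = max_j μ_ij
    return best
```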

[Semantic Measurement Algorithm]
Until all regions in Q have been selected as foreground:
  Select the next region in Q as foreground
  Until all regions in I have been selected as foreground:
    Select the next region in I as foreground
    Compute the similarities S_f (foreground) and S_b (background)
    S = (2/3)·S_f + (1/3)·S_b
Choose the maximum S between Q and I
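
A hedged sketch of the foreground/background weighting above. `region_sim` is a placeholder for the stroke-group similarity between two sets of regions; how regions are compared is not spelled out in the slides, so only the loop structure and the 2/3 vs. 1/3 weighting come from the source.

```python
def semantic_similarity(regions_q, regions_i, region_sim):
    """regions_q, regions_i: lists of regions; region_sim: callable on two region lists."""
    best = 0.0
    for fq in regions_q:                                    # next region in Q as foreground
        bq = [r for r in regions_q if r is not fq]          # remaining regions form the background
        for fi in regions_i:                                # next region in I as foreground
            bi = [r for r in regions_i if r is not fi]
            s_f = region_sim([fq], [fi])                    # foreground similarity S_f
            s_b = region_sim(bq, bi)                        # background similarity S_b
            best = max(best, (2.0 / 3.0) * s_f + (1.0 / 3.0) * s_b)  # S = (2/3)S_f + (1/3)S_b
    return best
```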

[Implementation and Experimental Study]
1,017 images (human portraits, natural scenes, city scenes, etc.)
Images resized to fit within 256×256, keeping the original aspect ratio
3-stage SPT with brush sizes 24×8, 12×4, and 6×2
Similarity weights w_col = 0.4, w_ori = 0.35, w_pos = 0.25
Ground truth produced by manual classification
Similarity judged by users

[Implementation and Experimental Study]
On a Pentium III 930 MHz PC:
< 500 ms to decide whether two images are similar
300 s to index a 256×256 image
10 s for a 64×64 thumbnail, 30 s for a 128×128 image – retrieval quality is the same!

[Conclusion]
NPR + CBIR
New image similarity measure
– Brush-stroke parameters as features
– Computed by matching strokes
Higher retrieval rate compared to color- or texture-based features

[Limitations and Future Issues]
Indexing takes more time and more CPU
The measure is orientation-variant – shift the orientation histograms of an image 8 times and choose the maximum value?
Try other NPR methods
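
The slide only raises the orientation idea as a question; one way it could look is sketched below: compare orientation histograms under 8 circular shifts and keep the best score. Histogram intersection is an assumed similarity measure, not something specified in the talk.

```python
import numpy as np

def shift_invariant_similarity(hist_a, hist_b, shifts=8):
    """hist_a, hist_b: 1-D orientation histograms whose length is divisible by `shifts`."""
    hist_a = np.asarray(hist_a, dtype=np.float64)
    hist_b = np.asarray(hist_b, dtype=np.float64)
    step = len(hist_b) // shifts
    # histogram intersection under each of the 8 circular shifts; keep the maximum
    scores = [np.minimum(hist_a, np.roll(hist_b, k * step)).sum()
              for k in range(shifts)]
    return max(scores)
```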

Thank You.