Image Transformations. Digital Image Processing. Instructor: Dr. Cheng-Chien Liu, Department of Earth Sciences, National Cheng Kung University.


Image Transformations. Digital Image Processing. Instructor: Dr. Cheng-Chien Liu, Department of Earth Sciences, National Cheng Kung University. Last updated: 4 September 2003. Chapter 2.

Introduction
- Content: tools for DIP based on the linear superposition of elementary images
- Elementary image: the outer product of two vectors, u_i v_j^T
- Expanding an image: g = h_c^T f h_r, so f = (h_c^T)^{-1} g h_r^{-1} = Σ_{i,j} g_ij u_i v_j^T
- Example 2.1
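The expansion above can be checked numerically. The sketch below (not from the slides; the 4 × 4 image and the two orthonormal matrices are arbitrary illustrative choices) transforms an image f into g = h_c^T f h_r, then recovers f both by inverting the transform and by summing the elementary images u_i v_j^T weighted by the coefficients g_ij:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x4 image and two orthonormal transformation matrices.
# Any invertible h_c, h_r would do; orthonormal ones keep inverses cheap.
f = rng.random((4, 4))
h_c, _ = np.linalg.qr(rng.random((4, 4)))
h_r, _ = np.linalg.qr(rng.random((4, 4)))

# Forward transform: g = h_c^T f h_r
g = h_c.T @ f @ h_r

# Inverse: f = (h_c^T)^{-1} g h_r^{-1}
f_back = np.linalg.inv(h_c.T) @ g @ np.linalg.inv(h_r)

# Equivalently, f is a weighted sum of elementary images u_i v_j^T,
# where u_i are the columns of (h_c^T)^{-1} and v_j^T the rows of h_r^{-1}.
U = np.linalg.inv(h_c.T)
V = np.linalg.inv(h_r)
f_sum = sum(g[i, j] * np.outer(U[:, i], V[j, :])
            for i in range(4) for j in range(4))
```

Both reconstructions agree with the original image to machine precision.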

Unitary matrix
- A unitary matrix U satisfies U U^{T*} = U U^H = I (T: transpose; *: complex conjugate; U^{T*} is written U^H)
- Unitary transform of f: g = h_c^T f h_r, where h_c and h_r are chosen to be unitary
- Inverse of a unitary transform: f = (h_c^T)^{-1} g h_r^{-1} = h_c^* g h_r^H, with U ≡ h_c and V ≡ h_r
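A minimal check of the unitarity property (not from the slides; the symmetric N-point DFT matrix is used here simply as a standard example of a unitary matrix):

```python
import numpy as np

N = 4
n = np.arange(N)
# Symmetric N-point DFT matrix: a classic unitary matrix
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

# Unitarity: U U^H = I, so the inverse is just the conjugate transpose
UUH = U @ U.conj().T
U_inv = np.linalg.inv(U)
```

Because inversion reduces to a conjugate transpose, unitary transforms are trivially invertible, which is why h_c and h_r are chosen unitary.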

Orthogonal matrix
- An orthogonal matrix U is a unitary matrix whose elements are all real; it satisfies U U^T = I
- Constructing a unitary matrix: U is unitary if its columns form a set of orthonormal vectors

Matrix diagonalization
- Diagonalizing a matrix g: g = U Λ^{1/2} V^T
  - g is a matrix of rank r
  - U and V are orthogonal matrices of size N × r
  - U is made up of the eigenvectors of the matrix g g^T
  - V is made up of the eigenvectors of the matrix g^T g
  - Λ^{1/2} is a diagonal r × r matrix
- Example 2.8: compute U and V from g
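The factorization and the eigenvector claims can be verified with NumPy's SVD routine (a sketch, not the book's worked Example 2.8; the 5 × 5 matrix is an arbitrary stand-in for g). NumPy returns the singular values s, which correspond to the diagonal of Λ^{1/2}:

```python
import numpy as np

rng = np.random.default_rng(1)
g = rng.random((5, 5))  # arbitrary full-rank example matrix

# g = U diag(s) V^T, with s the singular values (= Lambda^{1/2})
U, s, Vt = np.linalg.svd(g)
g_back = U @ np.diag(s) @ Vt

# Columns of U are eigenvectors of g g^T with eigenvalues s^2
# (checking the first one avoids eigenvector sign ambiguities)
lhs = g @ g.T @ U[:, 0]
rhs = s[0] ** 2 * U[:, 0]
```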

Singular value decomposition
- SVD of an image g: g = Σ_{i=1}^r λ_i^{1/2} u_i v_i^T
- Approximating an image: g_k = Σ_{i=1}^k λ_i^{1/2} u_i v_i^T, with k < r
- Error: D ≡ g − g_k = Σ_{i=k+1}^r λ_i^{1/2} u_i v_i^T, so ||D||^2 = Σ_{i=k+1}^r λ_i, the sum of the omitted eigenvalues
- Example 2.10
- For an arbitrary matrix D, ||D||^2 = trace(D^T D) = the sum of all its terms squared
- Minimizing the error: Example 2.11
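The truncation-error identity is easy to confirm numerically. This sketch (an illustration, not Example 2.10; the 8 × 8 matrix and k = 3 are arbitrary) builds the rank-k approximation from the k largest singular values and compares trace(D^T D) with the sum of the omitted eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.random((8, 8))          # arbitrary example "image"
U, s, Vt = np.linalg.svd(g)

k = 3
# Rank-k approximation: keep only the k largest singular values
g_k = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(k))

D = g - g_k
# Squared norm of the error vs. the sum of the omitted eigenvalues s_i^2
err = np.trace(D.T @ D)
omitted = np.sum(s[k:] ** 2)
```

Because the singular values are sorted in decreasing order, keeping the first k of them is exactly the error-minimizing choice discussed in Example 2.11.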

Eigenimages
- Eigenimages: the basis images used to expand the image
- They are intrinsic to each image: determined by the image itself, via the eigenvectors of g^T g and g g^T
- Examples 2.12, 2.13: performing the SVD and identifying the eigenimages
- Example 2.14: different stages of the SVD

Complete and orthogonal sets
- Orthogonal: a set of functions S_n(t) is said to be orthogonal over an interval [0, T] with weight function w(t) if
  ∫_0^T w(t) S_n(t) S_m(t) dt = k if n = m, and 0 if n ≠ m
- Orthonormal: if k = 1
- Complete: if we cannot find any other function that is orthogonal to every member of the set and does not itself belong to the set

Complete sets of orthonormal discrete-valued functions
- Haar functions: definition
- Walsh functions: definition
- Haar/Walsh image transformation matrices
  - Scale the independent variable t by the size of the matrix
  - Matrix forms of H_k(i) and W_k(i)
  - Normalization (by N^{-1/2} or T^{-1/2})

Haar transform
- Example 2.18: the Haar image transformation matrix (4 × 4)
- Example 2.19: the Haar transform of a 4 × 4 image
- Example 2.20: reconstruction of an image and its square error
- Elementary images of the Haar transform: the outer product of a discretised Haar function either with itself or with another one
- Figure 2.3: Haar transform basis images (8 × 8 case)
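A sketch of the 4 × 4 case (written from the standard definition of the Haar functions, in one common normalization; the book's Example 2.18 may order or scale the rows differently). The matrix is orthogonal, so the transform is inverted by the transpose:

```python
import numpy as np

s2 = np.sqrt(2.0)
# 4x4 Haar transformation matrix (rows normalized to unit length)
H = 0.5 * np.array([
    [1,   1,   1,   1],
    [1,   1,  -1,  -1],
    [s2, -s2,  0,   0],
    [0,   0,  s2, -s2],
])

# Rows are orthonormal, so H is orthogonal: H H^T = I
I4 = H @ H.T

# Transform a 4x4 image and invert with the transpose
rng = np.random.default_rng(3)
f = rng.random((4, 4))          # arbitrary example image
g = H @ f @ H.T
f_back = H.T @ g @ H
```

Note the zero entries in the higher-order rows: each basic pattern covers only part of the interval, which is the source of the Haar transform's localized behaviour discussed below.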

Walsh transform
- Example 2.21: the Walsh image transformation matrix (4 × 4)
- Example 2.22: the Walsh transform of a 4 × 4 image
- Hadamard matrices
  - Orthogonal matrices with entries that are only +1 and −1
  - Definition
  - Walsh functions can be calculated in terms of Hadamard matrices: Kronecker (lexicographic) ordering
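The Kronecker construction mentioned above can be sketched in a few lines (an illustration, not the book's definition verbatim): starting from the 2 × 2 Hadamard matrix, each larger matrix of order 2^n is the Kronecker product of the 2 × 2 matrix with the previous one. The result has only ±1 entries and orthogonal rows:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])

# Hadamard matrices of order 2^n via the Kronecker construction
H4 = np.kron(H2, H2)
H8 = np.kron(H2, H4)

# Entries are only +1/-1, and H H^T = N I (rows orthogonal, norm sqrt(N))
G = H8 @ H8.T
```

The rows produced this way are the Walsh functions in Kronecker (lexicographic) order; reordering by the number of sign changes gives the sequency ordering.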

Hadamard/Walsh transform
- Elementary images of the Hadamard/Walsh transform: the outer product of a discretised Hadamard/Walsh function either with itself or with another one
- Figure 2.4: Hadamard/Walsh transform basis images (8 × 8 case)
- Example 2.23: different stages of the Haar transform
- Example 2.24: different stages of the Hadamard/Walsh transform

Assessment of the Hadamard/Walsh and Haar transforms
- Higher-order basis images
  - Haar: higher-order basis images use the same basic pattern at different positions, so they allow us to reconstruct different parts of an image with different levels of detail
  - Hadamard/Walsh: approximate the image as a whole, with a uniform distribution of the reconstruction error (uniformly distributed details); their entries never take the value 0, which makes them easier to implement

Discrete Fourier transform
- 1D DFT: definition
- 2D DFT: definition
- Notation of the DFT: the "slot machine" notation
- Inverse DFT: definition
- Matrix form of the DFT: definition
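The matrix form can be sketched as follows (an illustration, not the book's Example 2.25; the normalization is the symmetric one used in this chapter, with 1/sqrt(N) on both the forward and inverse transforms, so the DFT matrix is unitary and the inverse is its conjugate transpose):

```python
import numpy as np

N = 4
n = np.arange(N)
# Symmetric-normalization DFT matrix (unitary)
W = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

f = np.array([1.0, 2.0, 3.0, 4.0])  # arbitrary example signal
F = W @ f                            # forward 1D DFT
f_back = W.conj().T @ F              # inverse DFT = Hermitian transpose

# Same result as numpy's FFT up to the normalization convention
F_np = np.fft.fft(f) / np.sqrt(N)
```

The 2D DFT of an image g is then simply W g W, applied along rows and columns.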

Discrete Fourier transform (cont.)
- Example 2.25: the DFT image transformation matrix (4 × 4)
- Example 2.26: the DFT of a 4 × 4 image
- Example 2.27: the DFT image transformation matrix (8 × 8)
- Elementary images of the DFT: the outer product between any two rows of U
- DFT basis images (8 × 8 case): Figure 2.7 (real parts), Figure 2.8 (imaginary parts)

Discrete Fourier transform (cont.)
- Example 2.28: the DFT of a 4 × 4 image
- Example 2.29: different stages of the DFT
- Advantages of the DFT
  - It obeys the convolution theorem
  - It uses very detailed basis functions, so the approximation error is small
- Disadvantage of the DFT
  - Retaining n basis images requires 2n coefficients for the reconstruction, because the coefficients are complex

Convolution theorem
- For discrete two-dimensional functions g(n, m) and w(n, m):
  u(n, m) = Σ_{n'=0}^{N-1} Σ_{m'=0}^{M-1} g(n − n', m − m') w(n', m')
- Periodicity assumptions:
  g(n, m) = g(n − N, m − M) = g(n − N, m) = g(n, m − M), and similarly for w(n, m)
- Then û(p, q) = (MN)^{1/2} ĝ(p, q) ŵ(p, q)
- The factor (MN)^{1/2} appears because the discrete Fourier transform is defined here so that the forward and inverse transforms are entirely symmetric
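The theorem, including the (MN)^{1/2} factor, can be verified directly (a sketch with arbitrary 8 × 8 arrays; the periodicity assumption is implemented with modular indexing, and the symmetric normalization is imposed on numpy's unnormalized FFT by dividing by sqrt(NM)):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 8, 8
g = rng.random((N, M))
w = rng.random((N, M))

# Circular (periodic) 2D convolution computed directly from the definition
u = np.zeros((N, M))
for n in range(N):
    for m in range(M):
        for n2 in range(N):
            for m2 in range(M):
                u[n, m] += g[(n - n2) % N, (m - m2) % M] * w[n2, m2]

# Symmetric DFT normalization: 1/sqrt(NM) on the forward transform
g_hat = np.fft.fft2(g) / np.sqrt(N * M)
w_hat = np.fft.fft2(w) / np.sqrt(N * M)
u_hat = np.fft.fft2(u) / np.sqrt(N * M)

# Convolution theorem: u_hat = sqrt(NM) * g_hat * w_hat
rhs = np.sqrt(N * M) * g_hat * w_hat
```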