Signal & Weight Vector Spaces

Signal & Weight Vector Spaces (MATLAB)

Notation
Vectors in $\Re^n$ (columns of numbers):
$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
Generalized vectors: elements of any vector space (for example, polynomials or functions).

Vector Space
1. An operation called vector addition is defined such that if x ∈ X and y ∈ X, then x + y ∈ X.
2. x + y = y + x.
3. (x + y) + z = x + (y + z).
4. There is a unique vector 0 ∈ X, called the zero vector, such that x + 0 = x for all x ∈ X.
5. For each vector x ∈ X there is a unique vector in X, to be called (−x), such that x + (−x) = 0.

Vector Space (Cont.)
6. An operation, called scalar multiplication, is defined such that for all scalars a ∈ F and all vectors x ∈ X, ax ∈ X.
7. For any x ∈ X, 1x = x (for the scalar 1).
8. For any two scalars a ∈ F and b ∈ F, and any x ∈ X, a(bx) = (ab)x.
9. (a + b)x = ax + bx.
10. a(x + y) = ax + ay.

Linear Independence
If
$a_1\mathbf{x}_1 + a_2\mathbf{x}_2 + \cdots + a_n\mathbf{x}_n = 0$
implies that each $a_i = 0$, then $\{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_n\}$ is a set of linearly independent vectors.

Example (Banana and Apple)
Let p1 and p2 be the banana and apple prototype pattern vectors, and set
$a_1\mathbf{p}_1 + a_2\mathbf{p}_2 = 0.$
This can only be true if $a_1 = a_2 = 0$; therefore the vectors are independent.

Example (Polynomials)
Let $x_1 = 1 + t + t^2$, $x_2 = 2 + 2t + t^2$, $x_3 = 1 + t$.
With $a_1 = 1$, $a_2 = -1$, $a_3 = 1$ we get $a_1x_1 + a_2x_2 + a_3x_3 = 0$, so $x_1, x_2, x_3$ are linearly dependent.
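
A quick numerical check of this dependence (a minimal sketch; the matrix X below simply collects the coefficient vectors of x1, x2, x3 relative to the basis {1, t, t²}):
% Columns are the coefficient vectors of x1 = 1 + t + t^2, x2 = 2 + 2t + t^2, x3 = 1 + t.
X = [1 2 1;
     1 2 1;
     1 1 0];
rank(X)          % returns 2 < 3, so the three vectors are linearly dependent
a = [1; -1; 1];
X*a              % returns the zero vector, confirming a1*x1 + a2*x2 + a3*x3 = 0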

Spanning a Space A subset spans a space if every vector in the space can be written as a linear combination of the vectors in the subset.

Basis Vectors A set of basis vectors for the space X is a set of vectors which spans X and is linearly independent. The dimension of a vector space, Dim(X), is equal to the number of vectors in the basis set. If X is a finite-dimensional vector space, then every basis set of X has the same number of elements.

Orthogonality
Two vectors x, y ∈ X are orthogonal if (x, y) = 0.
Example: any vector in the p2, p3 plane is orthogonal to the weight vector.

Column of Numbers
The vector expansion provides a meaning for writing a vector as a column of numbers:
$\mathbf{x} = \sum_{i=1}^{n} x_i \mathbf{v}_i \quad\leftrightarrow\quad \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$
To interpret x, we need to know what basis was used for the expansion.

Linear Transformations

Linear Transformations
A transformation consists of three parts:
1. A set of elements X = {x_i}, called the domain,
2. A set of elements Y = {y_i}, called the range, and
3. A rule relating each x ∈ X to an element y ∈ Y.
A transformation A is linear if:
1. For all x1, x2 ∈ X, A(x1 + x2) = A(x1) + A(x2),
2. For all x ∈ X, a ∈ ℜ, A(ax) = aA(x).

Matrix Representation - (1) Any linear transformation between two finite-dimensional vector spaces can be represented by matrix multiplication. Let {v1, v2, ..., vn} be a basis for X, and let {u1, u2, ..., um} be a basis for Y. Let A : X → Y.

Matrix Representation - (2)
Since A is a linear operator,
$A(\mathbf{x}) = A\!\left(\sum_{j=1}^{n} x_j \mathbf{v}_j\right) = \sum_{j=1}^{n} x_j A(\mathbf{v}_j)$
Since the $\mathbf{u}_i$ are a basis for Y,
$A(\mathbf{v}_j) = \sum_{i=1}^{m} a_{ij} \mathbf{u}_i$
(The coefficients $a_{ij}$ will make up the matrix representation of the transformation.)

Matrix Representation - (3)
Because the $\mathbf{u}_i$ are independent,
$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}$
This is equivalent to matrix multiplication.

Summary A linear transformation can be represented by matrix multiplication. To find the matrix which represents the transformation we must transform each basis vector for the domain and then expand the result in terms of the basis vectors of the range. Each of these equations gives us one column of the matrix.

Example - (1)
To find the matrix we need to transform each of the basis vectors.
We will use the standard basis vectors for both the domain and the range.

Example P6.3
Consider the transformation A created by reflecting a vector x in ℜ² about the line x1 + x2 = 0. Find the matrix representation of this transformation relative to the standard basis of ℜ².
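
A hedged sketch of one way to work this problem: reflecting about the line x1 + x2 = 0 sends (1, 0) to (0, −1) and (0, 1) to (−1, 0), so the images of the standard basis vectors form the columns of the matrix.
% Matrix of the reflection about x1 + x2 = 0; columns are the images of e1 and e2.
A = [ 0 -1;
     -1  0];
A*[1; -1]      % [1; -1]:  a vector on the mirror line x1 + x2 = 0 is unchanged
A*[1;  1]      % [-1; -1]: a vector perpendicular to the mirror line is negated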

Example P6.9
Consider a transformation A : ℜ² → ℜ². Two examples of transformed vectors are given in the figure. Find the matrix representation of this transformation relative to the standard basis set.

Change of Basis
Consider the linear transformation A : X → Y. Let {v1, v2, ..., vn} be a basis for X, and let {u1, u2, ..., um} be a basis for Y. The matrix representation is:
$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}$
or $\mathbf{A}\mathbf{x} = \mathbf{y}$.

New Basis Sets
Now let's consider different basis sets. Let {t1, t2, ..., tn} be a basis for X, and let {w1, w2, ..., wm} be a basis for Y. The new matrix representation is:
$\begin{bmatrix} a'_{11} & a'_{12} & \cdots & a'_{1n} \\ a'_{21} & a'_{22} & \cdots & a'_{2n} \\ \vdots & \vdots & & \vdots \\ a'_{m1} & a'_{m2} & \cdots & a'_{mn} \end{bmatrix} \begin{bmatrix} x'_1 \\ x'_2 \\ \vdots \\ x'_n \end{bmatrix} = \begin{bmatrix} y'_1 \\ y'_2 \\ \vdots \\ y'_m \end{bmatrix}$
or $\mathbf{A}'\mathbf{x}' = \mathbf{y}'$.

How are A and A' related?
Expand ti in terms of the original basis vectors for X:
$\mathbf{t}_i = \sum_{j=1}^{n} t_{ji}\mathbf{v}_j \quad\leftrightarrow\quad \begin{bmatrix} t_{1i} \\ t_{2i} \\ \vdots \\ t_{ni} \end{bmatrix}$
Expand wi in terms of the original basis vectors for Y:
$\mathbf{w}_i = \sum_{j=1}^{m} w_{ji}\mathbf{u}_j \quad\leftrightarrow\quad \begin{bmatrix} w_{1i} \\ w_{2i} \\ \vdots \\ w_{mi} \end{bmatrix}$

How are A and A' related? (Cont.)
Collect the expanded basis vectors as columns:
$\mathbf{B}_t = \begin{bmatrix} \mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_n \end{bmatrix}, \qquad \mathbf{B}_w = \begin{bmatrix} \mathbf{w}_1 & \mathbf{w}_2 & \cdots & \mathbf{w}_m \end{bmatrix}$
Then
$\mathbf{A}' = \mathbf{B}_w^{-1}\mathbf{A}\mathbf{B}_t$
which, when X = Y and the same basis is used ($\mathbf{B}_w = \mathbf{B}_t = \mathbf{B}$), is the Similarity Transform $\mathbf{A}' = \mathbf{B}^{-1}\mathbf{A}\mathbf{B}$.
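
A small numerical sketch of A' = Bw⁻¹ A Bt; the particular matrices below are made up for illustration, not taken from the slides.
% A is the matrix of the transformation relative to the original bases.
A  = [1 2; 3 4];
% Columns of Bt (Bw) are the new basis vectors for X (Y) expanded in the old bases.
Bt = [1 1; 0 1];
Bw = [2 0; 1 1];
Aprime = Bw \ A * Bt   % same as inv(Bw)*A*Bt: the matrix relative to the new bases
% With X = Y and Bt = Bw = B this reduces to the similarity transform inv(B)*A*B.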

Inverse of matrix

MATLAB: the inverse of a matrix A is computed with inv(A).

Example P6.7
Consider a transformation A : ℜ² → ℜ². One basis set for ℜ² is given as V = {v1, v2}.
1. Find the matrix of the transformation A relative to the basis set V if it is given that
A(v1) = v1 + 2v2
A(v2) = v1 + v2
2. Consider a new basis set W = {w1, w2} with
w1 = v1 + v2
w2 = v1 − v2
Find the matrix of the transformation A relative to the basis set W.
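
A hedged numerical sketch of both parts: Av is read off column by column from A(v1) = v1 + 2v2 and A(v2) = v1 + v2, and B holds the V-coordinates of w1 and w2.
% Part 1: matrix of A relative to V = {v1, v2}.
Av = [1 1;       % column 1: coordinates of A(v1) = 1*v1 + 2*v2
      2 1];      % column 2: coordinates of A(v2) = 1*v1 + 1*v2
% Part 2: w1 = v1 + v2 and w2 = v1 - v2, written in V-coordinates.
B  = [1  1;
      1 -1];
Aw = B \ Av * B  % matrix of A relative to W; evaluates to [2.5 0.5; -0.5 -0.5]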

Eigenvalues and Eigenvectors
Let A : X → X be a linear transformation. Those nonzero vectors z ∈ X and scalars λ that satisfy
$A(\mathbf{z}) = \lambda\mathbf{z}$
are called eigenvectors and eigenvalues, respectively.

Eigenvector An eigenvector of a given transformation represents a direction such that any vector in that direction, when transformed, continues to point in the same direction but is scaled by the eigenvalue.

Computing the Eigenvalues
Solve $|\mathbf{A} - \lambda\mathbf{I}| = 0$ for the eigenvalues, then solve $(\mathbf{A} - \lambda\mathbf{I})\mathbf{z} = 0$ for the eigenvectors. In this example the equations force $z_{21} = 0$ and place no constraint on $z_{11}$, so for this transformation there is only one eigenvector.

Example P6.4
Consider the space of complex numbers. Let this be the vector space X, and let the basis for X be {1 + j, 1 − j}. Let A : X → X be the conjugation operator (A(x) = x*).
1. Find the matrix of the transformation A relative to the basis set given above.
2. Find the eigenvalues and eigenvectors of the transformation.
3. Find the matrix representation for A relative to the eigenvectors as the basis vectors.

Example P6.8
Consider the vector space P² of all polynomials of degree less than or equal to 2. One basis for this vector space is V = {1, t, t²}. Consider the differentiation transformation D.
1. Find the matrix of the transformation D relative to the basis set V.
2. Find the eigenvalues and eigenvectors of the transformation.
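
A minimal sketch of part 1: since D(1) = 0, D(t) = 1, and D(t²) = 2t, the V-coordinates of these images form the columns of the matrix of D.
% Matrix of the differentiation operator relative to V = {1, t, t^2}.
D = [0 1 0;
     0 0 2;
     0 0 0];
[Z, L] = eig(D)  % all eigenvalues are 0; mathematically the only eigenvector
                 % direction is [1; 0; 0] (the constant polynomial), though MATLAB
                 % may return repeated/near-parallel columns for this defective matrix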

MATLAB: d = eig(A) returns the eigenvalues of A in the vector d; [V, D] = eig(A) also returns the eigenvectors as the columns of V.

Diagonalization
Perform a change of basis (similarity transformation) using the eigenvectors as the basis vectors. If the eigenvalues are distinct, the new matrix will be diagonal:
$\mathbf{B} = \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2 & \cdots & \mathbf{z}_n \end{bmatrix} \quad \text{(eigenvectors)}$
$\mathbf{B}^{-1}\mathbf{A}\mathbf{B} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} \quad \text{(eigenvalues)}$
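
A small numerical illustration of diagonalization with eig (the matrix is an arbitrary example, not one from the slides):
A = [1 1;
     1 2];         % symmetric, with distinct eigenvalues
[B, D] = eig(A);   % columns of B are eigenvectors, D holds the eigenvalues
B \ A * B          % recovers the diagonal matrix D (up to round-off)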

Example Diagonal Form:

Example P6.5: Diagonalize the following matrix.

Performance Surfaces

Taylor Series Expansion
$F(x) = F(x^*) + \frac{dF(x)}{dx}\Big|_{x=x^*}(x - x^*) + \frac{1}{2}\frac{d^2F(x)}{dx^2}\Big|_{x=x^*}(x - x^*)^2 + \cdots + \frac{1}{n!}\frac{d^nF(x)}{dx^n}\Big|_{x=x^*}(x - x^*)^n + \cdots$

Vector Case
$F(\mathbf{x}) = F(\mathbf{x}^*) + \frac{\partial F(\mathbf{x})}{\partial x_1}\Big|_{\mathbf{x}=\mathbf{x}^*}(x_1 - x_1^*) + \frac{\partial F(\mathbf{x})}{\partial x_2}\Big|_{\mathbf{x}=\mathbf{x}^*}(x_2 - x_2^*) + \cdots + \frac{\partial F(\mathbf{x})}{\partial x_n}\Big|_{\mathbf{x}=\mathbf{x}^*}(x_n - x_n^*)$
$\qquad + \frac{1}{2}\frac{\partial^2 F(\mathbf{x})}{\partial x_1^2}\Big|_{\mathbf{x}=\mathbf{x}^*}(x_1 - x_1^*)^2 + \frac{1}{2}\frac{\partial^2 F(\mathbf{x})}{\partial x_1 \partial x_2}\Big|_{\mathbf{x}=\mathbf{x}^*}(x_1 - x_1^*)(x_2 - x_2^*) + \cdots$

Matrix Form
$F(\mathbf{x}) = F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}(\mathbf{x} - \mathbf{x}^*) + \frac{1}{2}(\mathbf{x} - \mathbf{x}^*)^T\,\nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,(\mathbf{x} - \mathbf{x}^*) + \cdots$
Gradient:
$\nabla F(\mathbf{x}) = \begin{bmatrix} \partial F/\partial x_1 \\ \partial F/\partial x_2 \\ \vdots \\ \partial F/\partial x_n \end{bmatrix}$
Hessian:
$\nabla^2 F(\mathbf{x}) = \begin{bmatrix} \partial^2 F/\partial x_1^2 & \partial^2 F/\partial x_1\partial x_2 & \cdots & \partial^2 F/\partial x_1\partial x_n \\ \vdots & \vdots & & \vdots \\ \partial^2 F/\partial x_n\partial x_1 & \partial^2 F/\partial x_n\partial x_2 & \cdots & \partial^2 F/\partial x_n^2 \end{bmatrix}$

Directional Derivatives
First derivative (slope) of F(x) along the xi axis: $\partial F(\mathbf{x})/\partial x_i$ (ith element of the gradient).
Second derivative (curvature) of F(x) along the xi axis: $\partial^2 F(\mathbf{x})/\partial x_i^2$ ((i, i) element of the Hessian).
First derivative (slope) of F(x) along a vector p:
$\dfrac{\mathbf{p}^T \nabla F(\mathbf{x})}{\|\mathbf{p}\|}$
Second derivative (curvature) of F(x) along a vector p:
$\dfrac{\mathbf{p}^T \nabla^2 F(\mathbf{x})\,\mathbf{p}}{\|\mathbf{p}\|^2}$

Example
The slope of F(x) at x* in the direction p is
$\dfrac{\mathbf{p}^T \nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}}{\|\mathbf{p}\|}$
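
A numerical sketch of the directional-derivative formulas; the function F(x) = x1² + 2x1x2 + 2x2², the point x0, and the direction p below are assumed purely for illustration.
% Assumed quadratic and its gradient/Hessian.
gradF = @(x) [2*x(1) + 2*x(2); 2*x(1) + 4*x(2)];
H     = [2 2; 2 4];                    % Hessian (constant for a quadratic)
x0 = [0.5; 0];
p  = [2; -1];
slope     = p' * gradF(x0) / norm(p)   % first derivative (slope) of F along p at x0
curvature = p' * H * p / norm(p)^2     % second derivative (curvature) of F along p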

Plots
Contour and surface plots of F(x) over (x1, x2), showing the directional derivative along different directions p.

Minima
Strong Minimum: The point x* is a strong minimum of F(x) if a scalar δ > 0 exists such that F(x*) < F(x* + Δx) for all Δx such that δ > ||Δx|| > 0.
Global Minimum: The point x* is a unique global minimum of F(x) if F(x*) < F(x* + Δx) for all Δx ≠ 0.
Weak Minimum: The point x* is a weak minimum of F(x) if it is not a strong minimum, and a scalar δ > 0 exists such that F(x*) ≤ F(x* + Δx) for all Δx such that δ > ||Δx|| > 0.

Scalar Example
A scalar function F(x) with a strong maximum, a strong minimum, and a global minimum.

Vector Example

First-Order Optimality Condition
$F(\mathbf{x}) = F(\mathbf{x}^* + \Delta\mathbf{x}) = F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} + \frac{1}{2}\Delta\mathbf{x}^T\,\nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x} + \cdots$
For small Δx:
$F(\mathbf{x}^* + \Delta\mathbf{x}) \approx F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x}$
If x* is a minimum, this implies $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} \ge 0$.
If $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} > 0$, then
$F(\mathbf{x}^* - \Delta\mathbf{x}) \approx F(\mathbf{x}^*) - \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} < F(\mathbf{x}^*)$
But this would imply that x* is not a minimum. Therefore $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} = 0$.
Since this must be true for every Δx,
$\nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*} = 0$

Second-Order Condition
If the first-order condition is satisfied (zero gradient), then
$F(\mathbf{x}^* + \Delta\mathbf{x}) \approx F(\mathbf{x}^*) + \frac{1}{2}\Delta\mathbf{x}^T\,\nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x}$
A strong minimum will exist at x* if $\Delta\mathbf{x}^T\,\nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x} > 0$ for any Δx ≠ 0. Therefore the Hessian matrix must be positive definite.
A matrix A is positive definite if $\mathbf{z}^T\mathbf{A}\mathbf{z} > 0$ for any z ≠ 0. This is a sufficient condition for optimality.
A necessary condition is that the Hessian matrix be positive semidefinite. A matrix A is positive semidefinite if $\mathbf{z}^T\mathbf{A}\mathbf{z} \ge 0$ for any z.
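
A quick numerical way to test definiteness (the matrices are illustrative):
A = [2 1; 1 3];
eig(A)          % all eigenvalues > 0, so A is positive definite
B = [1 1; 1 1];
eig(B)          % eigenvalues 0 and 2, so B is only positive semidefinite
% chol(A) also succeeds only when A is (numerically) positive definite.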

Example
$\nabla^2 F(\mathbf{x}) = \mathbf{A}$ (not a function of x in this case).
To test the definiteness, check the eigenvalues of the Hessian. If the eigenvalues are all greater than zero, the Hessian is positive definite.
Both eigenvalues are positive; therefore x* is a strong minimum.

Quadratic Functions
$F(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x} + \mathbf{d}^T\mathbf{x} + c$ (symmetric A)
Useful properties of gradients:
$\nabla(\mathbf{h}^T\mathbf{x}) = \nabla(\mathbf{x}^T\mathbf{h}) = \mathbf{h}$
$\nabla(\mathbf{x}^T\mathbf{Q}\mathbf{x}) = \mathbf{Q}\mathbf{x} + \mathbf{Q}^T\mathbf{x} = 2\mathbf{Q}\mathbf{x}$ (for symmetric Q)
Gradient of the quadratic function:
$\nabla F(\mathbf{x}) = \mathbf{A}\mathbf{x} + \mathbf{d}$
Hessian of the quadratic function:
$\nabla^2 F(\mathbf{x}) = \mathbf{A}$

Eigensystem of the Hessian
Consider a quadratic function that has a stationary point at the origin and whose value there is zero:
$F(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T\mathbf{A}\mathbf{x}$
Perform a similarity transform on the Hessian matrix, using the eigenvectors as the new basis vectors. Since the Hessian matrix is symmetric, its eigenvectors are orthogonal ($\mathbf{B}^{-1} = \mathbf{B}^T$):
$\mathbf{A}' = \mathbf{B}^T\mathbf{A}\mathbf{B} = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix} = \Lambda$

Second Directional Derivative
$\dfrac{\mathbf{p}^T\,\nabla^2 F(\mathbf{x})\,\mathbf{p}}{\|\mathbf{p}\|^2} = \dfrac{\mathbf{p}^T\mathbf{A}\mathbf{p}}{\|\mathbf{p}\|^2}$
Represent p with respect to the eigenvectors (new basis), $\mathbf{p} = \mathbf{B}\mathbf{c}$:
$\dfrac{\mathbf{p}^T\mathbf{A}\mathbf{p}}{\|\mathbf{p}\|^2} = \dfrac{\mathbf{c}^T\mathbf{B}^T\mathbf{A}\mathbf{B}\mathbf{c}}{\mathbf{c}^T\mathbf{B}^T\mathbf{B}\mathbf{c}} = \dfrac{\mathbf{c}^T\Lambda\mathbf{c}}{\mathbf{c}^T\mathbf{c}} = \dfrac{\sum_{i=1}^{n}\lambda_i c_i^2}{\sum_{i=1}^{n} c_i^2}$
so
$\lambda_{min} \le \dfrac{\mathbf{p}^T\mathbf{A}\mathbf{p}}{\|\mathbf{p}\|^2} \le \lambda_{max}$

Eigenvector (Largest Eigenvalue)
If p is the eigenvector zmax associated with the largest eigenvalue, then $\mathbf{c} = \mathbf{B}^T\mathbf{z}_{max}$ has only one nonzero component and
$\dfrac{\mathbf{z}_{max}^T\mathbf{A}\mathbf{z}_{max}}{\|\mathbf{z}_{max}\|^2} = \dfrac{\sum_{i=1}^{n}\lambda_i c_i^2}{\sum_{i=1}^{n} c_i^2} = \lambda_{max}$
The maximum second derivative occurs in the direction of this eigenvector. The eigenvalues represent curvature (second derivatives) along the eigenvectors (the principal axes).
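
A numerical illustration of the bound λmin ≤ pᵀAp/‖p‖² ≤ λmax for an arbitrary direction (the Hessian below is an assumed example):
A = [2 1; 1 3];               % symmetric Hessian example
lam = eig(A);
p = randn(2, 1);              % random direction
r = p' * A * p / (p' * p);    % second directional derivative along p
[min(lam), r, max(lam)]       % r always lies between the smallest and largest eigenvalue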

Circular Hollow
When the two eigenvalues of the Hessian are equal, the curvature is the same in every direction, so the eigenvectors are not unique (any two independent vectors in the plane would work).

Elliptical Hollow

Elongated Saddle
$F(\mathbf{x}) = -\frac{1}{4}x_1^2 - \frac{3}{2}x_1x_2 - \frac{1}{4}x_2^2 = \frac{1}{2}\mathbf{x}^T\begin{bmatrix} -0.5 & -1.5 \\ -1.5 & -0.5 \end{bmatrix}\mathbf{x}$

Stationary Valley
$F(\mathbf{x}) = \frac{1}{2}x_1^2 - x_1x_2 + \frac{1}{2}x_2^2 = \frac{1}{2}\mathbf{x}^T\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}\mathbf{x}$

Quadratic Function Summary
Stationary point: $\nabla F(\mathbf{x}) = \mathbf{A}\mathbf{x} + \mathbf{d} = 0 \;\Rightarrow\; \mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{d}$ (if A is invertible).
If the eigenvalues of the Hessian matrix are all positive, the function will have a single strong minimum.
If the eigenvalues are all negative, the function will have a single strong maximum.
If some eigenvalues are positive and other eigenvalues are negative, the function will have a single saddle point.
If the eigenvalues are all nonnegative, but some eigenvalues are zero, then the function will either have a weak minimum or will have no stationary point.
If the eigenvalues are all nonpositive, but some eigenvalues are zero, then the function will either have a weak maximum or will have no stationary point.
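
A hedged sketch that classifies the stationary point of a quadratic F(x) = ½xᵀAx + dᵀx + c from the eigenvalues of its Hessian (the matrix and vector are illustrative):
A = [2 1; 1 -1];
d = [1; -2];
xstar = -(A \ d)              % stationary point, assuming A is invertible
lam = eig(A);
if all(lam > 0),       disp('strong minimum')
elseif all(lam < 0),   disp('strong maximum')
elseif any(lam > 0) && any(lam < 0), disp('saddle point')
else                   disp('weak minimum/maximum or no unique stationary point')
end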