Signal & Weight Vector Spaces


1 Signal & Weight Vector Spaces (with MATLAB)

2 Notation Vectors in ℝⁿ; generalized vectors. A vector in ℝⁿ is written as a column of numbers:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$

3 Vector Space
1. An operation called vector addition is defined such that if x ∈ X and y ∈ X, then x + y ∈ X.
2. x + y = y + x.
3. (x + y) + z = x + (y + z).
4. There is a unique vector 0 ∈ X, called the zero vector, such that x + 0 = x for all x ∈ X.
5. For each vector x ∈ X there is a unique vector in X, denoted (−x), such that x + (−x) = 0.

4 Vector Space (Cont.)
6. An operation, called scalar multiplication, is defined such that for all scalars a ∈ F and all vectors x ∈ X, ax ∈ X.
7. For any x ∈ X, 1x = x (for scalar 1).
8. For any two scalars a ∈ F and b ∈ F, and any x ∈ X, a(bx) = (ab)x.
9. (a + b)x = ax + bx.
10. a(x + y) = ax + ay.

5 Linear Independence If
$$a_1 \mathbf{x}_1 + a_2 \mathbf{x}_2 + \cdots + a_n \mathbf{x}_n = \mathbf{0}$$
implies that each ai = 0, then {x1, x2, ..., xn} is a set of linearly independent vectors.

6 Example (Banana and Apple) Let a1p1 + a2p2 = 0, where p1 and p2 are the banana and apple pattern vectors. This can only be true if a1 = a2 = 0; therefore the vectors are independent.

7 Example x1 = 1 + t + t², x2 = 2 + 2t + t², x3 = 1 + t. With a1 = 1, a2 = −1, a3 = 1 we have a1x1 + a2x2 + a3x3 = 0, so x1, x2, x3 are linearly dependent.
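A quick MATLAB check of this result (a sketch: each polynomial is encoded by its coefficient vector in the basis {1, t, t²}, an encoding chosen here for illustration):

```matlab
% Columns of M are the coefficient vectors of x1, x2, x3 in {1, t, t^2}
M = [1 2 1;    % constant terms
     1 2 1;    % t terms
     1 1 0];   % t^2 terms
rank(M)        % returns 2 < 3, so the vectors are linearly dependent
null(M)        % null-space direction, proportional to [1; -1; 1]
```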

8 Spanning a Space A subset spans a space if every vector in the space can be written as a linear combination of the vectors in the subset.

9 Basis Vectors A set of basis vectors for the space X is a set of vectors that spans X and is linearly independent. The dimension of a vector space, Dim(X), equals the number of vectors in the basis set. If X is a finite-dimensional vector space, then every basis set of X has the same number of elements.

10 Orthogonality Two vectors x, y ∈ X are orthogonal if (x, y) = 0. Example: any vector in the p2, p3 plane is orthogonal to the weight vector.

11 Column of Numbers The vector expansion
$$\mathbf{x} = \sum_{i=1}^{n} x_i \mathbf{v}_i$$
provides a meaning for writing a vector as a column of numbers:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
To interpret x, we need to know what basis was used for the expansion.

12 Linear Transformations

13 Linear Transformations
A transformation consists of three parts:
1. A set of elements X = {xi}, called the domain,
2. A set of elements Y = {yi}, called the range, and
3. A rule relating each x ∈ X to an element y ∈ Y.
A transformation A is linear if:
1. For all x1, x2 ∈ X, A(x1 + x2) = A(x1) + A(x2),
2. For all x ∈ X, a ∈ ℝ, A(ax) = aA(x).
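Both conditions can be checked numerically. A minimal MATLAB sketch, assuming a hypothetical transformation A(x) = M*x for an example matrix M (matrix multiplication is always linear):

```matlab
M = [1 2; 0 1];                       % assumed example matrix
A = @(x) M * x;                       % the transformation under test
x1 = [1; 2]; x2 = [3; -1]; a = 2.5;   % arbitrary test vectors and scalar
norm(A(x1 + x2) - (A(x1) + A(x2)))    % 0: additivity holds
norm(A(a * x1) - a * A(x1))           % 0: homogeneity holds
```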

14 Matrix Representation - (1)
Any linear transformation between two finite-dimensional vector spaces can be represented by matrix multiplication. Let {v1, v2, ..., vn} be a basis for X, let {u1, u2, ..., um} be a basis for Y, and let A: X → Y.

15 Matrix Representation - (2)
Since A is a linear operator,
$$A(\mathbf{x}) = A\Big(\sum_{j=1}^{n} x_j \mathbf{v}_j\Big) = \sum_{j=1}^{n} x_j A(\mathbf{v}_j)$$
Since the ui are a basis for Y,
$$A(\mathbf{v}_j) = \sum_{i=1}^{m} a_{ij} \mathbf{u}_i$$
(The coefficients aij will make up the matrix representation of the transformation.)

16 Matrix Representation - (3)
Because the ui are independent,
$$y_i = \sum_{j=1}^{n} a_{ij} x_j, \qquad \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_m \end{bmatrix}$$
This is equivalent to matrix multiplication.

17 Summary A linear transformation can be represented by matrix multiplication. To find the matrix which represents the transformation we must transform each basis vector for the domain and then expand the result in terms of the basis vectors of the range. Each of these equations gives us one column of the matrix.
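As an illustration of this recipe, a minimal MATLAB sketch; the rotation by an assumed angle theta is just an example transformation, not one from the slides:

```matlab
theta = pi/6;                                  % assumed angle, for illustration
A = @(x) [cos(theta)*x(1) - sin(theta)*x(2);   % rotation given as a rule
          sin(theta)*x(1) + cos(theta)*x(2)];
M = [A([1; 0]), A([0; 1])]                     % each transformed standard basis
                                               % vector becomes one column of M
```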

18 Example - (1) To find the matrix we need to transform each of the basis vectors. We will use the standard basis vectors for both the domain and the range.

19 Examples P6.3 Consider the transformation A created by reflecting a vector x in ℝ² about the line x1 + x2 = 0. Find the matrix representation of this transformation relative to the standard basis of ℝ².
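One way to check the answer numerically: a sketch using the standard Householder reflection formula (this construction is not from the slides), with n = [1; 1] the normal to the line x1 + x2 = 0:

```matlab
n = [1; 1];                            % normal to the line x1 + x2 = 0
H = eye(2) - 2 * (n * n') / (n' * n)   % Householder reflection: [0 -1; -1 0]
H * [1; 0]                             % e1 reflects to [0; -1], as expected
```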

20 P6.9 Consider a transformation A: ℝ² → ℝ². Two examples of transformed vectors are given in the figure. Find the matrix representation of this transformation relative to the standard basis set.

21 Change of Basis Consider the linear transformation A: X → Y. Let {v1, v2, ..., vn} be a basis for X, and let {u1, u2, ..., um} be a basis for Y. The matrix representation is:
$$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}$$

22 New Basis Sets Now let's consider different basis sets. Let {t1, t2, ..., tn} be a basis for X, and let {w1, w2, ..., wm} be a basis for Y. The new matrix representation is:
$$\begin{bmatrix} a'_{11} & a'_{12} & \cdots & a'_{1n} \\ a'_{21} & a'_{22} & \cdots & a'_{2n} \\ \vdots & & & \vdots \\ a'_{m1} & a'_{m2} & \cdots & a'_{mn} \end{bmatrix} \begin{bmatrix} x'_1 \\ \vdots \\ x'_n \end{bmatrix} = \begin{bmatrix} y'_1 \\ \vdots \\ y'_m \end{bmatrix}$$

23 How are A and A′ related? Expand ti in terms of the original basis vectors for X:
$$\mathbf{t}_i = \sum_{j=1}^{n} t_{ji} \mathbf{v}_j = \begin{bmatrix} t_{1i} \\ t_{2i} \\ \vdots \\ t_{ni} \end{bmatrix}$$
Expand wi in terms of the original basis vectors for Y:
$$\mathbf{w}_i = \sum_{j=1}^{m} w_{ji} \mathbf{u}_j = \begin{bmatrix} w_{1i} \\ w_{2i} \\ \vdots \\ w_{mi} \end{bmatrix}$$

24 How are A and A′ related? Collect the expansions into change-of-basis matrices:
$$B_t = \begin{bmatrix} \mathbf{t}_1 & \mathbf{t}_2 & \cdots & \mathbf{t}_n \end{bmatrix}, \qquad B_w = \begin{bmatrix} \mathbf{w}_1 & \mathbf{w}_2 & \cdots & \mathbf{w}_m \end{bmatrix}$$
Then
$$A' = B_w^{-1} A B_t$$
When X = Y and a single basis change B is used, this reduces to A′ = B⁻¹AB, a Similarity Transform.

25 Inverse of a Matrix

26 MATLAB computes the inverse of a matrix with inv(A).
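A short usage sketch (the matrix A below is just an illustrative example):

```matlab
A = [1 2; 3 4];    % arbitrary invertible example matrix
Ainv = inv(A)      % explicit inverse of A
x = A \ [1; 1]     % solves A*x = b; backslash is usually preferred over inv(A)*b
```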

27 Examples P6.7 Consider a transformation A: ℝ² → ℝ². One basis set for ℝ² is given as V = {v1, v2}.
1. Find the matrix of the transformation A relative to the basis set V, given that A(v1) = v1 + 2v2 and A(v2) = v1 + v2.
2. Consider a new basis set W = {w1, w2} with w1 = v1 + v2 and w2 = v1 − v2. Find the matrix of the transformation A relative to the basis set W.
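A MATLAB sketch of the computation, working entirely in V-coordinates, so that v1 = [1; 0] and v2 = [0; 1] (this coordinate choice is an assumption of the sketch, not part of the problem):

```matlab
Av = [1 1; 2 1];    % columns: A(v1) = v1 + 2*v2 and A(v2) = v1 + v2
Bw = [1 1; 1 -1];   % columns: w1 = v1 + v2 and w2 = v1 - v2
Aw = Bw \ Av * Bw   % A' = B^-1 * A * B, the similarity transform from slide 24
```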

28 Eigenvalues and Eigenvectors
Let A: X → X be a linear transformation. The nonzero vectors z ∈ X and the scalars λ that satisfy A(z) = λz are called eigenvectors and eigenvalues, respectively.

29 Eigenvector An eigenvector of a given transformation represents a direction such that any vector in that direction, when transformed, continues to point in the same direction but is scaled by the eigenvalue.

30 Computing the Eigenvalues
To compute the eigenvalues, solve |A − λI| = 0; the eigenvectors then satisfy (A − λI)z = 0. In this example the equations reduce to z21 = 0, with no constraint on z11, so z = z11 [1 0]ᵀ. For this transformation there is only one eigenvector.

31 Examples P6.4 Consider the space of complex numbers. Let this be the vector space X, and let the basis for X be {1 + j, 1 − j}. Let A: X → X be the conjugation operator, A(x) = x*.
1. Find the matrix of the transformation A relative to the basis set given above.
2. Find the eigenvalues and eigenvectors of the transformation.
3. Find the matrix representation of A relative to the eigenvectors as the basis vectors.
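A MATLAB check of parts 2 and 3, using the matrix worked out by hand for part 1 (conjugation maps 1 + j to 1 − j and vice versa, so it swaps the two basis vectors):

```matlab
A = [0 1; 1 0];   % conjugation in the basis {1+j, 1-j}
[V, D] = eig(A)   % eigenvalues -1 and +1; relative to the eigenvector
                  % basis, the matrix of A is the diagonal matrix D
```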

32 Examples P6.8 Consider the vector space P² of all polynomials of degree less than or equal to 2. One basis for this vector space is V = {1, t, t²}. Consider the differentiation transformation D.
1. Find the matrix of the transformation D relative to the basis set V.
2. Find the eigenvalues and eigenvectors of the transformation.
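A MATLAB sketch; the matrix below follows from D(1) = 0, D(t) = 1, D(t²) = 2t, expanded in the basis V:

```matlab
D = [0 1 0;   % columns: coordinates of D(1), D(t), D(t^2) in {1, t, t^2}
     0 0 2;
     0 0 0];
eig(D)        % all eigenvalues are zero; the only eigenvector direction
              % is [1; 0; 0], i.e. the constant polynomials
```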

33 MATLAB: d = eig(A) returns the eigenvalues of A in the vector d.
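A usage sketch (A is just an illustrative matrix); the two-output form also returns the eigenvectors:

```matlab
A = [2 1; 1 2];   % illustrative symmetric matrix
d = eig(A)        % column vector of eigenvalues: [1; 3]
[V, D] = eig(A)   % V: eigenvectors as columns; D: diagonal matrix of eigenvalues
```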

34 Diagonalization Perform a change of basis (similarity transformation) using the eigenvectors as the basis vectors. If the eigenvalues are distinct, the new matrix will be diagonal:
$$B = \begin{bmatrix} \mathbf{z}_1 & \mathbf{z}_2 & \cdots & \mathbf{z}_n \end{bmatrix}, \qquad B^{-1} A B = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
where {z1, z2, ..., zn} are the eigenvectors and {λ1, λ2, ..., λn} the eigenvalues.
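The same computation in MATLAB, sketched with an assumed symmetric matrix whose eigenvalues (1 and 3) are distinct:

```matlab
A = [2 1; 1 2];     % assumed example matrix with distinct eigenvalues
[B, ~] = eig(A);    % eigenvectors as the columns of B
Adiag = B \ A * B   % B^-1 * A * B: diagonal matrix of eigenvalues, up to round-off
```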

35 Example: Diagonal Form

36 Example P6.5 Diagonalize the given matrix.

37 Performance Surfaces

38 Taylor Series Expansion
$$F(x) = F(x^*) + \left.\frac{dF(x)}{dx}\right|_{x=x^*}(x - x^*) + \frac{1}{2}\left.\frac{d^2F(x)}{dx^2}\right|_{x=x^*}(x - x^*)^2 + \cdots + \frac{1}{n!}\left.\frac{d^nF(x)}{dx^n}\right|_{x=x^*}(x - x^*)^n + \cdots$$

39 Vector Case
$$F(\mathbf{x}) = F(\mathbf{x}^*) + \sum_{i=1}^{n}\left.\frac{\partial F(\mathbf{x})}{\partial x_i}\right|_{\mathbf{x}=\mathbf{x}^*}(x_i - x_i^*) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\left.\frac{\partial^2 F(\mathbf{x})}{\partial x_i \partial x_j}\right|_{\mathbf{x}=\mathbf{x}^*}(x_i - x_i^*)(x_j - x_j^*) + \cdots$$

40 Matrix Form
$$F(\mathbf{x}) = F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}(\mathbf{x} - \mathbf{x}^*) + \frac{1}{2}(\mathbf{x} - \mathbf{x}^*)^T\, \nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,(\mathbf{x} - \mathbf{x}^*) + \cdots$$
Gradient:
$$\nabla F(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial F}{\partial x_1} \\ \vdots \\ \dfrac{\partial F}{\partial x_n} \end{bmatrix}$$
Hessian:
$$\nabla^2 F(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial^2 F}{\partial x_1^2} & \cdots & \dfrac{\partial^2 F}{\partial x_1 \partial x_n} \\ \vdots & & \vdots \\ \dfrac{\partial^2 F}{\partial x_n \partial x_1} & \cdots & \dfrac{\partial^2 F}{\partial x_n^2} \end{bmatrix}$$

41 Directional Derivatives
First derivative (slope) of F(x) along the xi axis: ∂F/∂xi (the ith element of the gradient).
Second derivative (curvature) of F(x) along the xi axis: ∂²F/∂xi² (the (i, i) element of the Hessian).
First derivative (slope) of F(x) along a vector p:
$$\frac{\mathbf{p}^T \nabla F(\mathbf{x})}{\|\mathbf{p}\|}$$
Second derivative (curvature) of F(x) along a vector p:
$$\frac{\mathbf{p}^T\, \nabla^2 F(\mathbf{x})\, \mathbf{p}}{\|\mathbf{p}\|^2}$$

42 Example Evaluate the directional derivative pᵀ∇F(x)/‖p‖ for a specific function, point, and direction.
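The slide's own numbers did not survive extraction, so here is a minimal MATLAB sketch with assumed values; F(x) = x1² + 2x2², x = [0.5; 0.5], and p = [2; −1] are illustrative choices, not the slide's:

```matlab
x = [0.5; 0.5]; p = [2; -1];      % assumed point and direction
gradF = [2*x(1); 4*x(2)];         % gradient of the assumed F
H = [2 0; 0 4];                   % Hessian of the assumed F
slope = (p' * gradF) / norm(p)    % first directional derivative: 0
curv = (p' * H * p) / norm(p)^2   % second directional derivative: 2.4
```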

43 Plots Contour plots of F(x) over the (x1, x2) plane, illustrating the directional derivatives.

44 Minima
Strong Minimum: The point x* is a strong minimum of F(x) if a scalar δ > 0 exists such that F(x*) < F(x* + Δx) for all Δx with δ > ‖Δx‖ > 0.
Global Minimum: The point x* is a unique global minimum of F(x) if F(x*) < F(x* + Δx) for all Δx ≠ 0.
Weak Minimum: The point x* is a weak minimum of F(x) if it is not a strong minimum, and a scalar δ > 0 exists such that F(x*) ≤ F(x* + Δx) for all Δx with δ > ‖Δx‖ > 0.

45 Scalar Example A plot of a scalar function exhibiting a strong maximum, a strong minimum, and the global minimum.

46 Vector Example

47 First-Order Optimality Condition
$$F(\mathbf{x}) = F(\mathbf{x}^* + \Delta\mathbf{x}) = F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} + \frac{1}{2}\Delta\mathbf{x}^T\, \nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x} + \cdots$$
For small Δx:
$$F(\mathbf{x}^* + \Delta\mathbf{x}) \approx F(\mathbf{x}^*) + \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x}$$
If x* is a minimum, this implies $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} \ge 0$. If instead $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} > 0$, then
$$F(\mathbf{x}^* - \Delta\mathbf{x}) \approx F(\mathbf{x}^*) - \nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} < F(\mathbf{x}^*)$$
But this would imply that x* is not a minimum. Therefore $\nabla F(\mathbf{x})^T\big|_{\mathbf{x}=\mathbf{x}^*}\Delta\mathbf{x} = 0$. Since this must be true for every Δx,
$$\nabla F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*} = \mathbf{0}$$

48 Second-Order Condition
If the first-order condition is satisfied (zero gradient), then
$$F(\mathbf{x}^* + \Delta\mathbf{x}) \approx F(\mathbf{x}^*) + \frac{1}{2}\Delta\mathbf{x}^T\, \nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x}$$
A strong minimum will exist at x* if $\Delta\mathbf{x}^T\, \nabla^2 F(\mathbf{x})\big|_{\mathbf{x}=\mathbf{x}^*}\,\Delta\mathbf{x} > 0$ for any Δx ≠ 0. Therefore the Hessian matrix must be positive definite. A matrix A is positive definite if zᵀAz > 0 for any z ≠ 0. This is a sufficient condition for optimality. A necessary condition is that the Hessian matrix be positive semidefinite. A matrix A is positive semidefinite if zᵀAz ≥ 0 for any z.

49 Example For this function the Hessian ∇²F(x) is a constant matrix (not a function of x). To test the definiteness, check the eigenvalues of the Hessian: if the eigenvalues are all greater than zero, the Hessian is positive definite. Here both eigenvalues are positive, therefore x* is a strong minimum.
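A MATLAB sketch of this test; the quadratic F(x) = x1² + 2x1x2 + 2x2² + x1 is an assumed example with a constant Hessian, not necessarily the slide's function:

```matlab
H = [2 2; 2 4];       % Hessian of the assumed F (constant for a quadratic)
eig(H)                % both eigenvalues positive -> H is positive definite
xstar = -H \ [1; 0]   % stationary point from H*x + d = 0: [-1; 0.5]
```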

50 Quadratic Functions
$$F(\mathbf{x}) = \frac{1}{2}\mathbf{x}^T A \mathbf{x} + \mathbf{d}^T \mathbf{x} + c \qquad \text{(symmetric } A\text{)}$$
Useful properties of gradients:
$$\nabla(\mathbf{h}^T\mathbf{x}) = \nabla(\mathbf{x}^T\mathbf{h}) = \mathbf{h}, \qquad \nabla(\mathbf{x}^T Q \mathbf{x}) = Q\mathbf{x} + Q^T\mathbf{x} = 2Q\mathbf{x} \ \text{(for symmetric } Q\text{)}$$
Gradient of the quadratic function: ∇F(x) = Ax + d. Hessian of the quadratic function: ∇²F(x) = A.

51 Eigensystem of the Hessian
Consider a quadratic function that has a stationary point at the origin and whose value there is zero: F(x) = ½xᵀAx. Perform a similarity transform on the Hessian matrix, using the eigenvectors as the new basis vectors. Since the Hessian matrix is symmetric, its eigenvectors are orthogonal, so B⁻¹ = Bᵀ and
$$A' = B^T A B = \Lambda = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$

52 Second Directional Derivative
$$\frac{\mathbf{p}^T\, \nabla^2 F(\mathbf{x})\, \mathbf{p}}{\|\mathbf{p}\|^2} = \frac{\mathbf{p}^T A \mathbf{p}}{\|\mathbf{p}\|^2}$$
Represent p with respect to the eigenvectors (new basis), p = Bc:
$$\frac{\mathbf{p}^T A \mathbf{p}}{\|\mathbf{p}\|^2} = \frac{\mathbf{c}^T B^T A B\, \mathbf{c}}{\mathbf{c}^T \mathbf{c}} = \frac{\sum_{i=1}^{n} \lambda_i c_i^2}{\sum_{i=1}^{n} c_i^2}, \qquad \lambda_{min} \le \frac{\mathbf{p}^T A \mathbf{p}}{\|\mathbf{p}\|^2} \le \lambda_{max}$$

53 Eigenvector (Largest Eigenvalue)
Choose p = z_max, the eigenvector of the largest eigenvalue. Then c = Bᵀz_max has only one nonzero component, and
$$\frac{\mathbf{z}_{max}^T A\, \mathbf{z}_{max}}{\|\mathbf{z}_{max}\|^2} = \frac{\sum_{i=1}^{n} \lambda_i c_i^2}{\sum_{i=1}^{n} c_i^2} = \lambda_{max}$$
The eigenvalues represent curvature (second derivatives) along the eigenvectors (the principal axes).

54 Circular Hollow Both eigenvalues of the Hessian are equal, so the curvature is the same in every direction. (Any two independent vectors in the plane would work as eigenvectors.)

55 Elliptical Hollow

56 Elongated Saddle
$$F(\mathbf{x}) = -\frac{1}{4}x_1^2 - \frac{3}{2}x_1 x_2 - \frac{1}{4}x_2^2 = \frac{1}{2}\mathbf{x}^T \begin{bmatrix} -0.5 & -1.5 \\ -1.5 & -0.5 \end{bmatrix} \mathbf{x}$$

57 Stationary Valley
$$F(\mathbf{x}) = \frac{1}{2}x_1^2 - x_1 x_2 + \frac{1}{2}x_2^2 = \frac{1}{2}\mathbf{x}^T \begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix} \mathbf{x}$$

58 Quadratic Function Summary
Stationary point: $\nabla F(\mathbf{x}) = A\mathbf{x} + \mathbf{d} = \mathbf{0} \Rightarrow \mathbf{x}^* = -A^{-1}\mathbf{d}$.
If the eigenvalues of the Hessian matrix are all positive, the function has a single strong minimum.
If the eigenvalues are all negative, the function has a single strong maximum.
If some eigenvalues are positive and others are negative, the function has a single saddle point.
If the eigenvalues are all nonnegative but some are zero, the function either has a weak minimum or has no stationary point.
If the eigenvalues are all nonpositive but some are zero, the function either has a weak maximum or has no stationary point.
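In MATLAB the classification reads directly off eig; the matrices below are assumed illustrations of the first three cases:

```matlab
eig([2 1; 1 2])     %  1 and  3: all positive -> strong minimum
eig([-2 1; 1 -2])   % -3 and -1: all negative -> strong maximum
eig([1 2; 2 1])     % -1 and  3: mixed signs  -> saddle point
```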

