Sparse and Redundant Representations and Their Applications in Signal and Image Processing (236862)
Section 1: Introduction & Mathematical Warmup
Winter Semester, 2018/2019
Michael (Miki) Elad

Meeting Plan (Flipped Class)
- Quick review of the material covered
- Answering questions from the students and getting their feedback
- Addressing issues raised by other learners
- Discussing new material
- Administrative issues

Overview of the Material
Overview:
- What is this field all about? Take 1: A New Transform
- What is this field all about? Take 2: Modeling Data
- A Closer Look at the SparseLand Model
- Who Works on This and Who Are We?
- Several Examples: Applications Leveraging this Model
- This Course: Scope and Style
Mathematical Warm-Up:
- Underdetermined Linear Systems & Regularization
- The Temptation of Convexity
- A Closer Look at L1 Minimization
- Conversion of (P1) to Linear Programming
- Seeking Sparse Solutions
- Promoting Sparse Solutions
- The L0 Norm and the (P0) Problem
- A Signal Processing Perspective

Issues Raised by Other Learners: Sparsity and Non-Linearity
"In the introduction video you say, 'clearly, seeking the sparsest solution implies that the forward transformation is highly non-linear'. It is not immediately obvious to me what exactly you mean by this. Do you mean that the optimization problem of seeking this optimal alpha is very non-convex, or were you alluding to some other property of this problem?"
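
One way to see the non-linearity directly: the forward transform b -> argmin ||x||_0 s.t. Ax = b violates superposition, independently of any convexity question. The following minimal sketch (not from the course materials; the 2x4 dictionary is an assumption chosen for illustration) solves (P0) by brute force and shows that T(b1 + b2) differs from T(b1) + T(b2):

```python
import itertools
import numpy as np

def sparsest_solution(A, b, tol=1e-10):
    """Brute-force (P0): find the solution of Ax = b with the smallest support."""
    m, n = A.shape
    for k in range(1, n + 1):
        for S in itertools.combinations(range(n), k):
            cols = list(S)
            xS, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
            if np.linalg.norm(A[:, cols] @ xS - b) < tol:
                x = np.zeros(n)
                x[cols] = xS
                return x
    return None

s = 1 / np.sqrt(2)
A = np.array([[1.0, 0.0, s,  s],      # hypothetical 2x4 dictionary:
              [0.0, 1.0, s, -s]])     # atoms e1, e2, (1,1)/sqrt2, (1,-1)/sqrt2
b1 = np.array([1.0,  1.0])            # = sqrt(2) * atom 3
b2 = np.array([1.0, -1.0])            # = sqrt(2) * atom 4

x1  = sparsest_solution(A, b1)        # 1-sparse: atom 3 only
x2  = sparsest_solution(A, b2)        # 1-sparse: atom 4 only
x12 = sparsest_solution(A, b1 + b2)   # b1 + b2 = (2, 0): atom 1 alone suffices

print(x1 + x2)  # approx [0, 0, 1.414, 1.414]
print(x12)      # [2, 0, 0, 0]  -> T(b1 + b2) != T(b1) + T(b2): a non-linear map
```

So the non-linearity is a property of the forward transform itself, on top of (and separate from) the hardness of the optimization problem.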

Issues Raised by Other Learners: Convexity of Norms
What about ‖𝐁x‖₂? Is it a convex function? Is it strictly convex? Working with the derivative in order to explore this could be quite tedious. Graphically, this function is a cone with straight lines emerging from the origin; as such, it is convex but not strictly convex. The same is true for ‖𝐁x‖₁ and ‖𝐁x‖∞.
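
A quick numerical confirmation (a sketch with an assumed random 𝐁, not part of the slides): by positive homogeneity, ‖𝐁(tx)‖ = t‖𝐁x‖ for t ≥ 0, so along any ray through the origin the midpoint convexity inequality holds with equality, which is exactly the failure of strict convexity:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))   # assumed matrix, for illustration only
x = rng.standard_normal(3)

for p in (1, 2, np.inf):
    f = lambda v: np.linalg.norm(B @ v, ord=p)
    # x and 2x lie on the same ray through the origin; by positive
    # homogeneity f(t*x) = t*f(x), so midpoint convexity holds with
    # EQUALITY -- the function is convex but not strictly convex.
    lhs = f(0.5 * x + 0.5 * (2 * x))   # f at the midpoint
    rhs = 0.5 * f(x) + 0.5 * f(2 * x)  # average of f at the endpoints
    print(p, np.isclose(lhs, rhs))     # True for p = 1, 2, inf
```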

Issues Raised by Other Learners: Why Must the Linear Term Be Zero?
In 'A Closer Look at L1 Minimization' in the introduction section, the proof of the second theorem says: "Looking at this resulting expression, we now wonder about the linear term – could it be non-zero? The answer is negative: if it is positive, it implies that we can choose epsilon as a small negative value, and get that the L1 of x is smaller than the L1 of x*, which is a contradiction to the optimality of x*." Why can the linear term not be negative when epsilon is negative? I don't understand.
With respect to the conclusions presented in slide 99 of Section 1: it is not clear to me how I can choose an epsilon that nulls one entry in the expression x* + epsilon*h. What is the guarantee that such a small epsilon exists, one that nulls one of the entries of this expression?
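
A sketch of both missing steps, in standard notation (this is a reconstruction of the usual argument, not a quote from the course materials). For |ε| small enough that no entry on the support S = supp(x*) changes sign,

\[
\|x^\star + \varepsilon h\|_1
= \|x^\star\|_1
+ \varepsilon \sum_{i \in S} \operatorname{sign}(x_i^\star)\, h_i
+ |\varepsilon| \sum_{i \notin S} |h_i| .
\]

This expansion depends on ε only through the product εc, with c = Σ_{i∈S} sign(x*_i) h_i, and through |ε|. Hence the case c < 0 with ε > 0 is the exact mirror image of the case c > 0 with ε < 0, and it produces the same contradiction with the optimality of x*; the two cases together force c = 0. For the second question, the choice of ε is constructive: assuming h ≠ 0, pick

\[
j \in \arg\min_{i:\; h_i \neq 0} \left|\frac{x_i^\star}{h_i}\right|,
\qquad
\varepsilon^\star = -\frac{x_j^\star}{h_j}
\quad\Longrightarrow\quad
(x^\star + \varepsilon^\star h)_j = 0 ,
\]

and the minimality of |ε*| guarantees that the j-th entry is nulled before any other entry of the support changes sign.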

Issues Raised by Other Learners: Matrix and Vector Derivatives
In the first section, in slides 73 and 85, I noticed that the derivative (gradient) of the expressions is obtained in a direct way. In practice, how can I evaluate these derivatives? Is there any software/tool that performs this task automatically?
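
Symbolic tools such as SymPy, and automatic-differentiation frameworks such as autograd and JAX, can produce these gradients automatically. A dependency-free sanity check is to compare a hand-derived gradient against central finite differences; the sketch below assumes the least-squares expression f(x) = ½‖Ax − b‖₂² as a stand-in for the expressions on those slides:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x = rng.standard_normal(3)

f = lambda v: 0.5 * np.linalg.norm(A @ v - b) ** 2
analytic = A.T @ (A @ x - b)   # the closed-form gradient of f

# Central finite differences as an automatic check of the formula
eps = 1e-6
numeric = np.array([
    (f(x + eps * e) - f(x - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(analytic, numeric, atol=1e-5))  # True
```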

Your Questions or Feedback

New Material? Let's discuss:
- Relation to wavelet theory
- The flaws of (P0)

Relation to Wavelet Theory
Wavelets are a rich topic that could easily fill a whole course (and more). The work on wavelets emerged (in the context of signal processing) around 1985, led by Stephane Mallat, Ingrid Daubechies, Yves Meyer, Ronald Coifman, Albert Cohen, and others.
Its essence: a new and highly effective transform for representing signals.
How? By choosing (smartly!) a mother wavelet, and creating the basis functions as shifts and dilations of it.
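
In formulas (standard wavelet notation, added here for reference), the dyadic wavelet family is generated from the mother wavelet ψ by

\[
\psi_{j,k}(t) = 2^{j/2}\, \psi\!\left(2^{j} t - k\right), \qquad j, k \in \mathbb{Z},
\]

where j controls the dilation (scale) and k the shift; for a suitably chosen ψ, this family forms an orthonormal basis of L²(ℝ).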

Relation to Wavelet Theory
Wavelet main features:
- Multiscale analysis
- Ability to handle transient (local) parts in the signal
- Shift invariance (?)
- Vanishing moments
- Sparse set of coefficients
- Non-linear approximation
- Orthogonality (or bi-orthogonality)
- Fast implementation
- ... & a rich theory

Relation to Wavelet Theory
What will we take from these in this course? Of the wavelet main features:
- Multiscale analysis
- Ability to handle transient (local) parts in the signal
- Shift invariance (?)
- Vanishing moments
- Sparse set of coefficients
- Non-linear approximation
- Orthogonality (or bi-orthogonality)
- Fast implementation
- ... & a rich theory
And we will have our own, quite rich, theory.

Flaws of (P0)
The equality constraint Ax = b is too strict: suppose that for a given b0 the system Ax = b0 has a sparse solution. For a slightly perturbed vector b = b0 + v (v random), the system Ax = b will not have a sparse solution at all.
The L0 measure is too strict: suppose that x0 is very sparse. A random perturbation of it, x = x0 + εu (with ε ≪ 1 and ‖u‖₂ = 1), is fully dense.
So, what shall we do? Wait and see ...
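
Both flaws are easy to reproduce numerically. This minimal sketch (the random A and the chosen support are assumptions for illustration, not from the slides) shows the L0 count jumping from 3 to fully dense under a tiny perturbation, and the loss of an exact sparse fit once b is perturbed:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((10, 20))   # assumed underdetermined system

# A vector with a truly sparse representation ...
x0 = np.zeros(20)
x0[[3, 7, 11]] = rng.standard_normal(3)
print(np.count_nonzero(x0))          # ||x0||_0 = 3

# Flaw 2: L0 is unstable -- an arbitrarily small perturbation
# makes the vector fully dense.
u = rng.standard_normal(20)
u /= np.linalg.norm(u)               # ||u||_2 = 1
x = x0 + 1e-10 * u                   # eps << 1
print(np.count_nonzero(x))           # ||x||_0 = 20: fully dense

# Flaw 1: perturbing b destroys exact sparse solvability. The
# least-squares fit restricted to the original support no longer
# reproduces b exactly.
b0 = A @ x0
b = b0 + 1e-6 * rng.standard_normal(10)
S = [3, 7, 11]
xS, *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
print(np.linalg.norm(A[:, S] @ xS - b))  # > 0: no exact 3-sparse fit on S
```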

Administrative Issues
- Matlab/Python
- Other issues?