Solving a system of equations
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 3

Solving a system of equations
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 3
Taking (row 1) - 2*(row 2) + (row 3) gives 0 = 2, so the system has no solution.

Solving a system of equations
    a11 x1 + a12 x2 + a13 x3 = b1
    a21 x1 + a22 x2 + a23 x3 = b2
    a31 x1 + a32 x2 + a33 x3 = b3
A x = b has a solution iff rank(A) = rank([A | b]).
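This rank criterion is easy to check numerically. A minimal NumPy sketch (the array names are my own, not from the slides), applied to the 3-by-3 example above:

```python
import numpy as np

# Coefficient matrix and right-hand side of the example system.
A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 3.0])

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))

print(rank_A, rank_Ab)                 # 2 3
print("solvable:", rank_A == rank_Ab)  # solvable: False
```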

Solving a system of equations
A x = b has a solution iff rank(A) = rank([A | b]).
If rank(A) < rank([A | b]), then ∃ y such that yᵀA = 0 and yᵀb ≠ 0.

Solving a system of equations
A x = b has a solution iff rank(A) = rank([A | b]).
If rank(A) < rank([A | b]), then ∃ y such that yᵀA = 0 and yᵀb ≠ 0.
Such a y is a witness that A x = b does not have a solution.

Solving a system of equations
yᵀA = 0 and yᵀb ≠ 0: y is a witness that A x = b does not have a solution.
A x = b  ⇒  yᵀ(A x) = yᵀb  ⇒  (yᵀA) x = yᵀb  ⇒  0 = yᵀb,  a contradiction.

Solving a system of equations
Theorem: If A x = b doesn't have a solution, then ∃ y such that yᵀA = 0 and yᵀb ≠ 0.
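For the example system above, the row combination that produced 0 = 2 corresponds to the vector y = (1, -2, 1). A short NumPy check (a sketch, with variable names of my own choosing) confirms it is a witness in the sense of the theorem:

```python
import numpy as np

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 3.0])

# Witness corresponding to (row 1) - 2*(row 2) + (row 3).
y = np.array([1.0, -2.0, 1.0])

print(y @ A)   # [0. 0. 0.]  -> y^T A = 0
print(y @ b)   # 2.0         -> y^T b != 0, so Ax = b has no solution
```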

Solving a system of equations
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 1

Solving a system of equations
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 1
    x1 = -3, x2 = 2, x3 = 0
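The claimed solution is easy to verify, and NumPy's least-squares routine will also find a solution of this rank-deficient system; a small sketch, assuming the same A and the new right-hand side b = (3, 2, 1):

```python
import numpy as np

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 1.0])

x = np.array([-3.0, 2.0, 0.0])
print(np.allclose(A @ x, b))   # True: x solves Ax = b

# A is singular (rank 2), so np.linalg.solve would raise an error; least
# squares still recovers an exact solution because b lies in the column space.
x_ls, _, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(rank, np.allclose(A @ x_ls, b))   # 2 True
```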

Back to linear programming
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 1
    x1 ≥ 0, x2 ≥ 0, x3 ≥ 0
I.e., we want a non-negative solution.

Back to linear programming
    1 x1 + 3 x2 + 6 x3 = 3
    2 x1 + 4 x2 + 7 x3 = 2
    3 x1 + 5 x2 + 8 x3 = 1
    x1 ≥ 0, x2 ≥ 0, x3 ≥ 0
I.e., we want a non-negative solution.
Subtracting row 2 from row 3 gives x1 + x2 + x3 = -1, which no non-negative x can satisfy.
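In the notation of the following slides, subtracting row 2 from row 3 corresponds to the vector y = (0, -1, 1). A quick NumPy check (a sketch with my own variable names) shows that yᵀA ≥ 0 componentwise while yᵀb < 0, so no non-negative solution can exist:

```python
import numpy as np

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 1.0])

# (row 3) - (row 2), i.e. the combination giving x1 + x2 + x3 = -1.
y = np.array([0.0, -1.0, 1.0])

print(y @ A)   # [1. 1. 1.]  -> y^T A >= 0 componentwise
print(y @ b)   # -1.0        -> y^T b < 0, so no x >= 0 satisfies Ax = b
```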

Back to linear programming
yᵀA ≥ 0 and yᵀb < 0: y is a witness that A x = b, x ≥ 0 does not have a solution.
A x = b  ⇒  yᵀ(A x) = yᵀb  ⇒  (yᵀA) x = yᵀb  ⇒  non-negative = negative, a contradiction.

Back to linear programming
Theorem (Farkas): If A x = b, x ≥ 0 doesn't have a solution, then ∃ y such that yᵀA ≥ 0 and yᵀb < 0.
Theorem: If A x = b doesn't have a solution, then ∃ y such that yᵀA = 0 and yᵀb ≠ 0.
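The lemma can also be exercised with an LP solver: scipy.optimize.linprog reports that {A x = b, x ≥ 0} is infeasible, and a small auxiliary LP recovers a certificate y. This is only a sketch of one way to do it; the box bounds on y are there solely to keep the auxiliary LP bounded.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 1.0])

# Feasibility check: is there x >= 0 with Ax = b?
feas = linprog(c=np.zeros(3), A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(feas.status)   # 2 -> infeasible

# Farkas certificate: minimize y^T b subject to y^T A >= 0
# (written as -A^T y <= 0), with y kept in a box so the LP is bounded.
cert = linprog(c=b, A_ub=-A.T, b_ub=np.zeros(3), bounds=[(-1, 1)] * 3)
y = cert.x
print(y @ A, y @ b)  # componentwise >= 0 (up to rounding), strictly negative
```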

Theorem (Farkas): If A x = b, x ≥ 0 doesn't have a solution, then ∃ y such that yᵀA ≥ 0 and yᵀb < 0.
Idea of the proof:
[Figure: the set {A x : x ≥ 0} and the point b lying outside it]

Theorem (Farkas): If A x = b, x ≥ 0 doesn't have a solution, then ∃ y such that yᵀA ≥ 0 and yᵀb < 0.
Idea of the proof:
[Figure: the set {A x : x ≥ 0}, the point b outside it, and a separating hyperplane with normal c]
S convex, b not in S  ⇒  ∃ c such that (∀ x ∈ S) cᵀx > cᵀb.
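One way to make this proof idea concrete is to project b onto the convex set S = {A x : x ≥ 0} and take y to be the projection minus b; the optimality conditions of the projection give yᵀA ≥ 0 and yᵀb = -‖y‖² < 0, and the same y serves as the normal c of a separating hyperplane. A minimal numerical sketch (my own construction, using scipy.optimize.nnls for the projection):

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 3.0, 6.0],
              [2.0, 4.0, 7.0],
              [3.0, 5.0, 8.0]])
b = np.array([3.0, 2.0, 1.0])

# Project b onto the cone {Ax : x >= 0}: nnls solves min ||Ax - b|| over x >= 0.
x_star, dist = nnls(A, b)
y = A @ x_star - b   # projection minus b: separating direction / certificate

print(dist > 0)   # True: b is not in the cone
print(y @ A)      # componentwise >= 0 (up to rounding)
print(y @ b)      # strictly negative
```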

Theorem (Farkas): A x = b, x ≥ 0 doesn't have a solution  ⟺  ∃ y such that yᵀA ≥ 0 and yᵀb < 0.
Duality:
    Primal:  max cᵀx  subject to  A x = b,  x ≥ 0
    Dual:    min yᵀb  subject to  yᵀA ≥ cᵀ
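As a sanity check on this primal-dual pair, the sketch below solves both sides with scipy.optimize.linprog on a small made-up instance (the matrix, right-hand side, and costs are hypothetical, not from the slides) and confirms that the two optimal values coincide, as strong duality predicts.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical equality-form instance (a small production LP with slack
# variables already added): max c^T x  s.t.  Ax = b, x >= 0.
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 1.0, 0.0],
              [3.0, 2.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0, 0.0, 0.0, 0.0])

# Primal: linprog minimizes, so negate the objective.
primal = linprog(c=-c, A_eq=A, b_eq=b, bounds=[(0, None)] * 5)

# Dual: min y^T b  s.t.  y^T A >= c^T  (rewritten as -A^T y <= -c), y free.
dual = linprog(c=b, A_ub=-A.T, b_ub=-c, bounds=[(None, None)] * 3)

print(-primal.fun, dual.fun)   # 36.0 36.0: the optimal values agree
```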

    a1 x1 + ⋯ + an xn ≥ b
    a1 x1 + ⋯ + an xn = b + y,  y ≥ 0
    a1 x1 + ⋯ + an xn - y = b,  y ≥ 0
"≥"  ⟷  "=" and non-negativity

    a1 x1 + ⋯ + an xn = b
    a1 x1 + ⋯ + an xn ≤ b
    a1 x1 + ⋯ + an xn ≥ b
"="  ⟷  "≤" and "≥"
    a1 x1 + ⋯ + an xn ≥ b
    -a1 x1 - ⋯ - an xn ≤ -b
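These rewriting rules are exactly what is needed to push a problem into a solver's standard form. A small sketch (the helper below is my own, not from the lecture) that turns "≥" rows into "≤" rows by negation, and into equalities with non-negative surplus variables:

```python
import numpy as np

def ge_rows_to_equalities(A_ge, b_ge):
    """Rewrite  A_ge @ x >= b_ge  as  [A_ge | -I] @ [x; s] = b_ge  with s >= 0."""
    m = A_ge.shape[0]
    A_eq = np.hstack([A_ge, -np.eye(m)])   # one surplus variable per row
    return A_eq, b_ge.copy()

# Example rows: x1 + 2 x2 >= 3  and  4 x1 - x2 >= 1.
A_ge = np.array([[1.0, 2.0],
                 [4.0, -1.0]])
b_ge = np.array([3.0, 1.0])

# ">=" rewritten as "<=" by negating both sides:
A_le, b_le = -A_ge, -b_ge

# ">=" rewritten as "=" plus non-negative surplus variables:
A_eq, b_eq = ge_rows_to_equalities(A_ge, b_ge)
print(A_eq)   # columns correspond to x1, x2, s1, s2
```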

Duality
Primal:
    max c1ᵀx1 + c2ᵀx2 + c3ᵀx3 + c4ᵀx4
    A1 x1 = b1
    A2 x2 ≤ b2
    A3 x3 = b3
    A4 x4 ≥ b4
    x1 ≥ 0, x2 ≥ 0
Dual:
    min y1ᵀb1 + y2ᵀb2 + y3ᵀb3 + y4ᵀb4
    y1ᵀA1 ≥ c1ᵀ
    y2ᵀA2 ≥ c2ᵀ
    y3ᵀA3 = c3ᵀ
    y4ᵀA4 = c4ᵀ
    y2 ≥ 0, y4 ≤ 0

Solving linear programs
    Simplex (Dantzig, 1940s)
    Ellipsoid (Khachiyan, 1979)
    Interior point (Karmarkar, 1984)