Efficiency of the simplex method
LI Xiao-lei

Performance measures
Performance measures can be divided into two types:
– Worst case: a worst-case analysis looks at all problems of a given "size" and asks how much effort is needed to solve the hardest of these problems.
– Average case: an average-case analysis looks at the average amount of effort required, averaged over all problems of a given size.

Performance measures
For worst-case analyses, one needs to give an upper bound on how much effort is required and then exhibit a specific example that attains this bound. For average-case analyses, one must have a stochastic model of the space of "random linear programming problems" and then be able to say something about the solution effort averaged over all the problems in the sample space.

Measuring the size of a problem
How do we specify the size of a problem? Two parameters come naturally to mind: m, the number of constraints, and n, the number of variables. Since the data for a problem consist of the constraint coefficients together with the right-hand-side and objective-function coefficients, we could use the total number of data elements to indicate size, which is roughly mn.

Measuring the size of a problem
What if many or even most of the data elements are zero? Efficient implementations do indeed take advantage of the presence of lots of zeros, and so an analysis should also account for this. Hence, a good measure might be simply the number of nonzero data elements.

Measuring the size of a problem
For a person solving a problem by hand (or using unlimited-precision computation on a computer), multiplying 23 by 7 is a lot easier than multiplying two numbers with many digits. So perhaps the best measure of a problem's size is the actual number of bits needed to store all the data on a computer. This measure is popular and is usually denoted by L.

Measuring the size of a problem
However, L turns out to be ambiguous. Real-world problems, while generally large and sparse, usually can be described quite simply and involve only a small amount of true input data. We shall therefore simply focus on m and n to characterize the size of a problem.
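Each of the candidate size measures is easy to compute for a concrete instance. Below is a minimal Python sketch with made-up LP data; the bit-counting convention used for L here is only one of several possibilities, which is part of the ambiguity noted above.

```python
import numpy as np

# A small made-up LP in standard form:  max c^T x  s.t.  A x <= b,  x >= 0
A = np.array([[2, 0, 1],
              [0, 3, 0]])          # m = 2 constraints, n = 3 variables
b = np.array([10, 6])
c = np.array([5, 4, 3])

m, n = A.shape

# 1) "Dense" measure: total number of data elements, roughly m*n
dense_size = A.size + b.size + c.size        # = m*n + m + n

# 2) Sparsity-aware measure: number of nonzero data elements
nonzeros = np.count_nonzero(A) + np.count_nonzero(b) + np.count_nonzero(c)

# 3) Bit-length measure L: bits needed to write down all the (integer) data,
#    using one simple convention (magnitude bits plus a sign bit per entry)
L = sum(int(x).bit_length() + 1 for x in np.concatenate([A.ravel(), b, c]))

print(f"m = {m}, n = {n}")
print(f"dense size    = {dense_size}")
print(f"nonzeros      = {nonzeros}")
print(f"bit length L  = {L}")
```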

Measuring the effort to solve a problem
How should we measure the amount of work required to solve a problem?
– The obvious answer is the number of seconds of computer time required to solve the problem.
– Unfortunately, not everyone uses the same computer, so such timings are not comparable across machines.

Measuring the effort to solve a problem
Algorithms are generally iterative processes, and the time to solve a problem can be factored into the number of iterations required times the amount of time required for each iteration. The number of iterations does not depend on the computer, so it is useful when comparing various algorithms within the same general class of algorithms; however, it becomes meaningless when one wishes to compare two entirely different algorithms.
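To make the distinction concrete, the following sketch solves a tiny made-up LP with SciPy's linprog and reports both measures: wall-clock seconds (machine-dependent) and the iteration count, which recent SciPy versions expose as res.nit (machine-independent).

```python
import time
from scipy.optimize import linprog

# A tiny made-up LP:  min -5x1 - 4x2   s.t.  6x1 + 4x2 <= 24,  x1 + 2x2 <= 6,  x >= 0
c = [-5, -4]
A_ub = [[6, 4],
        [1, 2]]
b_ub = [24, 6]

t0 = time.perf_counter()
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
elapsed = time.perf_counter() - t0

print(f"optimal value        : {res.fun:.4f}")
print(f"iterations (res.nit) : {res.nit}")      # machine-independent measure
print(f"wall-clock seconds   : {elapsed:.6f}")  # machine-dependent measure
```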

Worst-case analysis of the simplex method
For non-cycling variants of the simplex method, since the algorithm moves from one basic feasible solution (bfs) to another without ever returning to a previously visited solution, an upper bound on the number of iterations is simply the number of basic feasible solutions, of which there can be at most C(n+m, m) = (n+m)!/(n! m!), the number of ways of choosing which m of the n+m variables (original plus slack) are basic.

Worst-case analysis of the simplex method
For a fixed value of n+m, this expression is maximized when m = n, in which case it is at least 2^n. And 2^n is huge even when n is not very big; for example, 2^50 ≈ 1.1259×10^15. An explicit family of problems on which a variant of the simplex method actually performs an exponential number of iterations was given by V. Klee and G. J. Minty.
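A few lines of Python (using math.comb, available in Python 3.8+) make the growth of this bound tangible; note how C(2m, m) dwarfs even 2^m, the figure quoted above for m = 50.

```python
from math import comb

# Upper bound on the number of basic feasible solutions for an LP
# with m constraints and n variables (n + m variables after adding slacks).
for m in (5, 10, 25, 50):
    n = m                      # the bound C(n+m, m) is largest when m = n
    bound = comb(n + m, m)     # number of ways to pick the m basic variables
    print(f"m = n = {m:2d}:  C(2m, m) = {bound:,}   (2^m = {2**m:,})")
```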