Practice Quiz Question


Practice Quiz Question
Which sorting algorithm is used below? Selection sort or Insertion sort? How can you tell?
89 11 24 32 47 88
11 89 24 32 47 88
11 24 89 32 47 88
11 24 32 89 47 88
11 24 32 47 89 88
11 24 32 47 88 89
Go to http://goo.gl/A4njB

Note about Merge Sort
Typically done with a second array. Why? Since the elements in both halves are already sorted, it is easier to compare the elements at the front of each half. The extra memory allows faster movement of data.

Note about Merge Sort
If there is an odd number of elements, you have two options for splitting:
Option 1: split so that the left half has one more element than the right half (you are tested on this way)
Option 2: split so that the right half has one more element than the left half
Either option works as long as you split consistently (i.e. always give the extra element to the left, or always give it to the right).

Note about Merge Sort
The way that you split matters! You must merge with the numbers you split from. This follows from the way recursion works: recursion breaks the sorting problem into smaller sorting problems, so each half must be treated as a new sorting problem. You cannot sort with numbers from another subproblem unless you are merging with the numbers you previously split from.
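The merge-sort notes above can be sketched in Java as follows. This is a minimal illustration (the class and method names are not from the slides): the split gives the extra element to the left half when the length is odd, as the slides require, and the merge uses a second array.

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursively split, sort each half, then merge the halves back together.
    public static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a; // base case: a single element is already sorted
        int mid = (a.length + 1) / 2; // left half gets the extra element when length is odd
        int[] left  = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(left, right);
    }

    // Merge into a second array: both halves are already sorted, so we only
    // ever compare the front (smallest remaining) element of each half.
    private static int[] merge(int[] left, int[] right) {
        int[] out = new int[left.length + right.length];
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length)  out[k++] = left[i++];  // copy any leftovers
        while (j < right.length) out[k++] = right[j++];
        return out;
    }
}
```

Note that each recursive call only ever merges the two halves it split apart, which is the "merge with the numbers you split from" rule above.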

Key Features of Algorithms
Selection Sort: swapping the lowest (or highest) value into place
Insertion Sort: shifting values and inserting the next value in its correct spot
Merge Sort: recursive splitting, then recursive sorting and merging
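The key features of the first two algorithms can be seen side by side in a short sketch (names are illustrative, not from the slides): selection sort's defining move is the swap, insertion sort's is the shift-and-insert.

```java
public class KeyFeaturesDemo {
    // Selection sort: on each pass, find the lowest remaining value
    // and swap it into the front of the unsorted section.
    public static int[] selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // the swap
        }
        return a;
    }

    // Insertion sort: shift larger sorted values one spot right,
    // then insert the next value into its correct spot.
    public static int[] insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int next = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > next) { a[j + 1] = a[j]; j--; } // the shift
            a[j + 1] = next; // the insert
        }
        return a;
    }
}
```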

Bubble Sort
Starting from the beginning of the list, compare every adjacent pair and swap their positions if they are not in the right order. After each pass, one fewer element (the last one) needs to be compared, until there are no more elements left to compare.
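The description above can be sketched directly (class name is illustrative): the outer loop shrinks by one each pass because the largest remaining element has already bubbled to the end.

```java
public class BubbleSortDemo {
    // Compare every adjacent pair, swapping when out of order; after each
    // pass the largest remaining element is at the end, so the next pass
    // stops one element earlier.
    public static int[] bubbleSort(int[] a) {
        for (int pass = a.length - 1; pass > 0; pass--) {
            for (int i = 0; i < pass; i++) {
                if (a[i] > a[i + 1]) {
                    int tmp = a[i]; a[i] = a[i + 1]; a[i + 1] = tmp;
                }
            }
        }
        return a;
    }
}
```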

Quick Sort
Pick an element, called a pivot, from the list. Reorder the list so that:
all elements with values less than the pivot come before the pivot
all elements with values greater than the pivot come after it
equal values can go either way
After this partitioning, the pivot is in its final position. This is called the partition operation. Recursively sort the sub-list of lesser elements and the sub-list of greater elements.
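One way to sketch these steps in Java (the slides do not specify how the pivot is chosen; this sketch uses the common Lomuto partition with the last element as the pivot, and the class name is illustrative):

```java
public class QuickSortDemo {
    public static int[] quickSort(int[] a) {
        quickSort(a, 0, a.length - 1);
        return a;
    }

    private static void quickSort(int[] a, int left, int right) {
        if (left >= right) return;            // 0 or 1 elements: already sorted
        int p = partition(a, left, right);    // pivot lands in its final position
        quickSort(a, left, p - 1);            // recursively sort the lesser elements
        quickSort(a, p + 1, right);           // recursively sort the greater elements
    }

    // Partition operation: everything less than the pivot is moved before it,
    // then the pivot is swapped into its final position, index i.
    private static int partition(int[] a, int left, int right) {
        int pivot = a[right];
        int i = left;
        for (int j = left; j < right; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[right]; a[right] = t;
        return i;
    }
}
```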

Efficiency What is it? Why do we care? How do we measure?

What does “efficiency” mean?
A measure of how long an algorithm takes to run to completion, relative to the number of inputs it is given. For our sorting algorithms, the inputs are arrays, ArrayLists, other lists, etc.

Why do we care?
Inefficient algorithms require more calculations/comparisons/memory:
Wasted time (→ wasted money)
Wasted power (→ being less “green”)

How do we measure efficiency?
Time? (Wall-clock time or execution time)
Advantages: easy to measure, meaning is obvious. Appropriate if you care about actual time (e.g. real-time systems).
Disadvantages: applies only to a specific data set, compiler, machine, etc.
Typically not a good idea to measure by actual time, since computers perform at different speeds.

How do we measure efficiency?
Better idea? Count the number of times certain statements are executed.
Advantages: more general (not sensitive to the speed of the machine).
Disadvantages: doesn’t tell you actual time, and still applies only to specific data sets.

How do we measure efficiency?
Symbolic execution times: formulas for execution times or statement counts in terms of input size.
Advantages: applies to all inputs, makes scaling clear.
Disadvantage: a practical formula must be approximate, and may tell very little about actual time.

How do we measure efficiency?
Since we are approximating anyway, it is pointless to be precise about certain things:
Behavior on small inputs: we can always pre-calculate some results, and times for small inputs are not usually important.
Constant factors (as in “off by a factor of 2”): just changing machines causes a constant-factor change.
How do we abstract away from (i.e., ignore) these things? Order notation.

How do we measure efficiency?
Big O notation: the number of calculations/comparisons
Best Case (lower bound)
Worst Case (upper bound)
Average Case
Also consider: the amount of space/memory used, and any extra space necessary for the algorithm.

Why does it matter again?

Random or Deterministic
Random → different every time
Deterministic → consistent every time
Which is more efficient? E.g. int[] arr = {11, 12, 13, 14, 15, 16, 17, 18};
Randomly find the index of 18? Deterministically find the index of 18? Would it work if 18 were not in the array?
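The deterministic approach from the question above can be sketched as a simple left-to-right scan (class and method names are illustrative). It takes the same steps every run, and returning -1 is what keeps it correct when 18 is not in the array.

```java
public class SearchDemo {
    // Deterministic linear search: scan left to right, identical steps on
    // every run. Returns the index of target, or -1 if it is absent.
    public static int indexOf(int[] arr, int target) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == target) return i;
        }
        return -1; // target was not in the array
    }
}
```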

Iterative or Recursive
Iterative is more efficient in many cases; recursive can be more efficient in some cases (e.g. merge sort). Consider two different algorithms for calculating the 40th Fibonacci number: the recursive one takes a really long time, while the iterative one does each calculation only once.
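The Fibonacci comparison above can be made concrete (class and method names are illustrative): the plain recursive version recomputes the same subproblems over and over, while the iterative version computes each value exactly once.

```java
public class FibDemo {
    // Recursive version: fib(n) recomputes fib(n-2) inside both branches,
    // so the work roughly doubles with each increase in n.
    public static long fibRecursive(int n) {
        if (n <= 1) return n;
        return fibRecursive(n - 1) + fibRecursive(n - 2);
    }

    // Iterative version: a single loop, each Fibonacci value computed once.
    public static long fibIterative(int n) {
        if (n == 0) return 0;
        long prev = 0, curr = 1;
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```

For n = 40 the iterative version finishes almost instantly, while the recursive version makes on the order of 2^40 calls, which is why it "takes a really long time".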

Big O
It measures the growth rate of the time the algorithm takes to complete with respect to the amount of data it is acting on. We write the big O of an algorithm as O(f(n)), where f(n) is a function of n. E.g. O(n²) is pronounced “Oh of en squared”. It is an approximation of the time an algorithm takes to run. Approximation means no constants, no coefficients, etc.

Big O
Linear: O(n). E.g. searching an array of length n.
Quadratic: O(n²). E.g. searching a 2-D array of size n by n, or comparing every element in a 1-D array of size n to every other element in the same array.
Exponential: O(xⁿ), where x is some constant. E.g. recursive Fibonacci is O(2ⁿ).

Big O
Logarithmic: O(log n). For each element acted on, eliminate some fraction of the remaining inputs. E.g. repeatedly print the left half of an array or String. You cannot just reduce by a constant amount; it must be a consistent fraction.
O(n log n) → merge sort
Constant: O(1). Same amount of time regardless of inputs. E.g. selecting a random element and printing it.
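The "eliminate a consistent fraction" idea can be sketched by counting how many halvings it takes to get from n elements down to one (class and method names are illustrative): the count grows like log₂(n), which is why such algorithms are O(log n).

```java
public class LogDemo {
    // Each step keeps only half of the remaining elements, so the number
    // of steps grows like log2(n) rather than like n.
    public static int halvingSteps(int n) {
        int steps = 0;
        while (n > 1) {
            n = n / 2; // keep only the left half of what remains
            steps++;
        }
        return steps;
    }
}
```

Doubling the input adds only one more step: 8 elements take 3 halvings, while 1024 elements take just 10.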

“Exit poll” Why do we care about efficiency? http://goo.gl/wbUuh Tomorrow: Sorting Efficiency lab