Lecture 21 Amortized Analysis


Dynamic Array problem Design a data structure to store an array. Items can be added to the end of the array. At any time, the amount of memory used should be proportional to the current length of the array. Examples: ArrayList in Java, vector in C++. Goal: design the data structure so that adding an item takes O(1) amortized time.

Designing the Data-Structure We don’t want to allocate new memory every time. Idea: when allocating new memory, allocate a larger space.

Init()
    capacity = 1
    length = 0
    allocate an array a[] of 1 element

Add(x)
    IF length = capacity THEN
        capacity = capacity * 2
        allocate a new array of capacity elements
        copy the current array into the new array
    a[length] = x
    length = length + 1
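The scheme above can be sketched as a small runnable class. This is illustrative only: Python's built-in list already grows this way internally, and the names DynArr and add are mine, not from the lecture.

```python
# A minimal sketch of the doubling strategy described above.

class DynArr:
    def __init__(self):
        self.capacity = 1
        self.length = 0
        self.a = [None] * self.capacity   # backing storage

    def add(self, x):
        if self.length == self.capacity:
            # Expensive case: double the capacity and copy everything over.
            self.capacity *= 2
            new_a = [None] * self.capacity
            for i in range(self.length):
                new_a[i] = self.a[i]
            self.a = new_a
        # Cheap case (and the tail of the expensive case): write at the end.
        self.a[self.length] = x
        self.length += 1

arr = DynArr()
for i in range(10):
    arr.add(i)
print(arr.length, arr.capacity)   # 10 16
```

Note that after resizing, the new element still has to be written; a single add is either one write, or one full copy plus one write.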

How to Analyze? There are 3 basic techniques to analyze the amortized cost of operations. Aggregate Method Accounting (charging) method Potential Argument

Aggregate Method Idea: Compute the total cost of n operations, then divide the total cost by n. This is what is used in analyzing MergeSort and DFS. Aggregate method for the Dynamic Array: how many operations in total are needed to add n numbers?
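As a sanity check on the aggregate bound: each of the n adds costs one write, plus a full copy whenever the array is full, and the copies happen at capacities 1, 2, 4, …, which sum to less than 2n. The total is therefore under 3n, i.e. O(1) amortized. A small counting sketch (the helper total_cost is hypothetical, not from the slides):

```python
# Count the total work for n adds under the doubling scheme.

def total_cost(n):
    capacity, length, cost = 1, 0, 0
    for _ in range(n):
        if length == capacity:
            cost += length       # copy every existing element
            capacity *= 2
        cost += 1                # write the new element
        length += 1
    return cost

for n in [10, 100, 1000]:
    print(n, total_cost(n), total_cost(n) / n)   # ratio stays below 3
```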

Accounting (charging) method Idea: There are a small number of expensive operations and many normal operations. If every normal operation pays a little bit extra, that can be enough to pay for the expensive operations. Major step: design a way of “charging” the cost of the expensive operations to the normal operations.
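For the dynamic array, one standard charging scheme (an assumption here; the slide does not spell one out) is to charge 3 units per add: 1 pays for the write, and 2 are banked toward future copies. A small simulation checks that the bank never goes negative:

```python
# Simulate the charging scheme: every add deposits 3 units; copies and
# writes withdraw from the bank. If the bank never goes negative,
# 3 units per operation is a valid amortized cost.

def simulate(n):
    capacity, length, bank = 1, 0, 0
    for _ in range(n):
        bank += 3               # deposit the amortized charge
        if length == capacity:
            bank -= length      # pay for copying the old elements
            capacity *= 2
        bank -= 1               # pay for writing the new element
        assert bank >= 0, "charge of 3 is not enough"
        length += 1
    return bank

print(simulate(1000))           # leftover credit, always >= 0
```

Intuitively, each new element's 2 banked units pay for its own future copy and for re-copying one element from the older half of the array.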

Potential Argument Recall the analogy from physics: energy can be stored as potential energy and converted back into work later.

Potential Argument Define a “potential function” Φ with Φ ≥ 0. When executing an expensive operation, the potential should drop (potential turns into energy). When executing a normal (light) operation, the potential should increase (energy turns into potential).

Potential Argument Amortized cost of an operation: suppose an operation took (real) time T_i and changed the state from x_i to x_{i+1}. Define the amortized cost A_i = T_i − Φ(x_i) + Φ(x_{i+1}). Claim: Σ_{i=1}^{n} T_i = Σ_{i=1}^{n} A_i + Φ(x_1) − Φ(x_{n+1}). (Real cost = amortized cost + potential before − potential after; the sum telescopes.)
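For the dynamic array, a common choice of potential (assumed here; the slide does not name one) is Φ = 2·length − capacity. With it, every add, cheap or expensive, has amortized cost exactly 3. (Φ is briefly −1 before the first insert; this only shifts the telescoping sum by a constant.) A sketch that verifies this:

```python
# Compute A_i = T_i + Phi(after) - Phi(before) for each add, using the
# assumed potential Phi = 2*length - capacity. Every A_i comes out to 3.

def amortized_costs(n):
    capacity, length = 1, 0
    costs = []
    phi = 2 * length - capacity            # potential before the first op
    for _ in range(n):
        if length == capacity:
            real = length + 1              # copy everything, then one write
            capacity *= 2
        else:
            real = 1                       # just one write
        length += 1
        new_phi = 2 * length - capacity
        costs.append(real + new_phi - phi)
        phi = new_phi
    return costs

print(amortized_costs(8))                  # every entry is 3
```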

Comparison between 3 methods
Aggregate method: intuitive, but does not work very well if there are multiple types of operations (e.g. stack: push/pop; heap: insert/pop/decrease-key).
Charging method: flexible (you can choose which operations to charge and how much to charge), but you need to come up with a charging scheme.
Potential method: very flexible, but the potential function is not always easy to come up with.

Data-structure for Disjoint Sets Problem: There are n elements, each in a separate set. Want to build a data structure that supports two operations: Find: Given an element, find the set it is in. (Each set is identified by a single “head” element in the set) Merge: Given two elements, merge the sets that they are in. Recall: Kruskal’s algorithm.

Representing Sets using Trees For each element, think of it as a node. Each subset is a tree, and the “head” element is the root. Find: find the root of the tree Merge: merge two trees into a single tree.

Example Sets {1, 3}, {2, 5, 6, 8}, {4}, {7}. Note: the trees are not necessarily binary. [Figure: a forest with one tree per set; roots 1, 2, 4, 7, with 3 under 1 and 5, 6, 8 under 2.]

Find Operation: Follow parent pointers until reaching the root. [Figure: a find walks up the tree from a node to its root.]

Merge Operation Make the root of one tree a child of the root of the other tree. [Figure: two trees merged by attaching one root under the other.]
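The find and merge operations above can be sketched with a simple parent-pointer array (no rank or path compression yet; the class and method names are my own):

```python
# A minimal parent-pointer forest: each set is a tree, find walks up to
# the root, merge hangs one root under the other.

class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))    # each element starts as its own root

    def find(self, x):
        while self.parent[x] != x:      # follow parent pointers to the root
            x = self.parent[x]
        return x

    def merge(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx != ry:
            self.parent[rx] = ry        # one root becomes a child of the other

ds = DisjointSets(9)
ds.merge(1, 3)
ds.merge(2, 5); ds.merge(5, 6); ds.merge(6, 8)
print(ds.find(1) == ds.find(3))   # True
print(ds.find(2) == ds.find(8))   # True
print(ds.find(4) == ds.find(7))   # False
```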

Running Time Find: proportional to the depth of the tree. Merge: first do two find operations, then spend O(1) to link the two roots. In the worst case the tree degenerates into a linked list, so the depth is n.

Idea 1: Union by rank For each root, also store a “rank”. When merging two sets, always make the root with the higher rank the new root. If the two roots have the same rank, increase the rank of the new root by one after merging. [Figure: merging trees with ranks shown at their roots; the higher-rank root stays on top.]

Idea 2: Path compression After a find operation, connect every node along the path directly to the root. [Figure: after Find(7), all nodes on the path from 7 to the root point directly at the root.]
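Both heuristics together can be sketched as follows (an illustrative implementation; names are my own, not from the lecture):

```python
# Union by rank plus path compression, as described in the two ideas above.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:      # first pass: locate the root
            root = self.parent[root]
        while self.parent[x] != root:         # second pass: path compression,
            self.parent[x], x = root, self.parent[x]  # point nodes at the root
        return root

    def merge(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:     # higher-rank root stays the root
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:    # equal ranks: the surviving
            self.rank[rx] += 1                # root's rank goes up by one

uf = UnionFind(10)
uf.merge(0, 1)
uf.merge(1, 2)
print(uf.find(0) == uf.find(2))   # True
```

The iterative two-pass find avoids recursion depth issues; a recursive one-liner is also common but can overflow the stack on long chains before compression kicks in.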

Running Time Union by rank only: the depth is always bounded by log n, so operations take O(log n) worst-case (and O(log n) amortized). Union by rank + path compression: the worst case is still O(log n), but the amortized cost is only O(α(n)) = o(log* n), where α is the extremely slowly growing inverse Ackermann function.