2-May-15 Randomized Algorithms

2. A short list of categories
Also known as Monte Carlo algorithms or stochastic methods.
Algorithm types we will consider include:
- Simple recursive algorithms
- Backtracking algorithms
- Divide and conquer algorithms
- Dynamic programming algorithms
- Greedy algorithms
- Branch and bound algorithms
- Brute force algorithms
- Randomized algorithms

3. A randomized algorithm is just one that depends on random numbers for its operation.
These are randomized algorithms:
- Using random numbers to help find a solution to a problem
- Using random numbers to improve a solution to a problem
These are related topics:
- Getting or generating "random" numbers
- Generating random data for testing (or other) purposes

4. Pseudorandom numbers
- The computer is not capable of generating truly random numbers
- The computer can only generate pseudorandom numbers: numbers that are generated by a formula
- Pseudorandom numbers look random, but are perfectly predictable if you know the formula
- Pseudorandom numbers are good enough for most purposes, but not all; for example, not for serious security applications
- Devices for generating truly random numbers do exist; they are based on radioactive decay, or on lava lamps
- "Anyone who attempts to generate random numbers by deterministic means is, of course, living in a state of sin." —John von Neumann

5. Generating random numbers
- Perhaps the best way of generating "random" numbers is by the linear congruential method:
    r = (a * r + b) % m;
  where a and b are large prime numbers, and m is 2^32 or 2^64
- The initial value of r is called the seed
- If you start over with the same seed, you get the same sequence of "random" numbers
- One advantage of the linear congruential method is that, with well-chosen constants, it will (eventually) cycle through all possible numbers
- Almost any "improvement" on this method turns out to be worse
- "Home-grown" methods typically have much shorter cycles
- There are complex mathematical tests for "randomness" which you should know about if you want to "roll your own"
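To make the formula concrete, here is a minimal, self-contained sketch of a linear congruential generator in Java. The constants A and B below are example values (Knuth's MMIX constants), not values taken from the slides, and m = 2^64 is realized implicitly by letting 64-bit arithmetic wrap around:

    public class Lcg {
        private long r;   // current state; the value passed to the constructor is the seed
        private static final long A = 6364136223846793005L;   // example multiplier
        private static final long B = 1442695040888963407L;   // example increment

        public Lcg(long seed) { this.r = seed; }

        public long next() {
            r = A * r + B;   // same as (A * r + B) % 2^64, since long arithmetic wraps
            return r;
        }

        public static void main(String[] args) {
            Lcg gen = new Lcg(42);   // starting over with the same seed repeats the sequence
            for (int i = 0; i < 5; i++) {
                System.out.println(gen.next());
            }
        }
    }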

6. Getting pseudorandom numbers in Java
    import java.util.Random;

    new Random(long seed)        // constructor
    new Random()                 // constructor; seeds itself from the clock (System.currentTimeMillis() in older Java versions)
    void setSeed(long seed)
    nextBoolean()
    nextFloat(), nextDouble()    // 0.0 <= return value < 1.0
    nextInt(), nextLong()        // all 2^32 or 2^64 possibilities
    nextInt(int n)               // 0 <= return value < n
    nextGaussian()               // returns a double, normally distributed with a mean of 0.0 and a standard deviation of 1.0
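A short usage sketch of java.util.Random (the variable names are just illustrative):

    import java.util.Random;

    public class RandomDemo {
        public static void main(String[] args) {
            Random random = new Random(12345L);   // fixed seed: the sequence is reproducible

            int die = random.nextInt(6) + 1;      // integer in 1..6
            double u = random.nextDouble();       // double in [0.0, 1.0)
            boolean coin = random.nextBoolean();  // fair coin flip
            double z = random.nextGaussian();     // normally distributed, mean 0.0, std. dev. 1.0

            System.out.println(die + " " + u + " " + coin + " " + z);
        }
    }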

7. Gaussian distributions
- The Gaussian distribution is a normal curve with a mean (average) of 0.0 and a standard deviation of 1.0
- [figure: the bell-shaped normal curve, with roughly a third of the values falling within one standard deviation on each side of the mean]
- Much real "natural" data—shoe sizes, quiz scores, amount of rainfall, length of life, etc.—fits a normal curve
- To generate realistic data, the Gaussian may be what you want:
    r = desiredStandardDeviation * random.nextGaussian() + desiredMean;
- "Unnatural" data, such as income or taxes, may or may not be well described by a normal curve
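For example, a sketch that generates plausible quiz scores using that formula; the mean of 75 and standard deviation of 10 are made-up values:

    import java.util.Random;

    public class GaussianDemo {
        public static void main(String[] args) {
            Random random = new Random();
            double desiredMean = 75.0;                // hypothetical quiz average
            double desiredStandardDeviation = 10.0;   // hypothetical spread

            for (int i = 0; i < 5; i++) {
                double r = desiredStandardDeviation * random.nextGaussian() + desiredMean;
                System.out.printf("score: %.1f%n", r);
            }
        }
    }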

8. Shuffling an array
Good:
    static void shuffle(int[] array) {
        for (int i = array.length; i > 0; i--) {
            int j = random.nextInt(i);
            swap(array, i - 1, j);
        }
    }
    // all permutations are equally likely

Bad:
    static void shuffle(int[] array) {
        for (int i = 0; i < array.length; i++) {
            int j = random.nextInt(array.length);
            swap(array, i, j);
        }
    }
    // all permutations are not equally likely
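The slide's code assumes a Random field named random and a swap helper; a minimal self-contained version of the good shuffle might look like this (the class name is my own):

    import java.util.Arrays;
    import java.util.Random;

    public class Shuffler {
        private static final Random random = new Random();

        // Fisher-Yates shuffle: every permutation is equally likely
        static void shuffle(int[] array) {
            for (int i = array.length; i > 0; i--) {
                int j = random.nextInt(i);   // j is in 0..i-1
                swap(array, i - 1, j);
            }
        }

        static void swap(int[] array, int a, int b) {
            int tmp = array[a];
            array[a] = array[b];
            array[b] = tmp;
        }

        public static void main(String[] args) {
            int[] array = {1, 2, 3, 4, 5};
            shuffle(array);
            System.out.println(Arrays.toString(array));
        }
    }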

9. Checking randomness
- With a completely random shuffle, the probability that an element ends up in a given location is equal to the probability that it ends up in any other location
- I claim that the second algorithm on the preceding slide is not completely random
- I don't know how to prove this, but I can demonstrate it (see the sketch below):
  - Declare an array counts of 20 locations, initially all zero
  - Do the following some large number of times:
    - Fill an array of size 20 with the numbers 1, 2, 3, ..., 20
    - Shuffle the array
    - Find the location loc at which the 5 (or some other number) ends up
    - Add 1 to counts[loc]
  - Print the counts array
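A sketch of that experiment; the trial count of 1,000,000 is my own choice, and badShuffle is the second (flawed) algorithm from the previous slide:

    import java.util.Arrays;
    import java.util.Random;

    public class ShuffleCheck {
        private static final Random random = new Random();

        // the "bad" shuffle from the preceding slide
        static void badShuffle(int[] array) {
            for (int i = 0; i < array.length; i++) {
                int j = random.nextInt(array.length);
                swap(array, i, j);
            }
        }

        static void swap(int[] array, int a, int b) {
            int tmp = array[a]; array[a] = array[b]; array[b] = tmp;
        }

        public static void main(String[] args) {
            int trials = 1_000_000;
            int[] counts = new int[20];
            for (int t = 0; t < trials; t++) {
                int[] array = new int[20];
                for (int k = 0; k < 20; k++) array[k] = k + 1;   // 1, 2, 3, ..., 20
                badShuffle(array);
                int loc = 0;
                while (array[loc] != 5) loc++;   // where did the 5 end up?
                counts[loc]++;
            }
            System.out.println(Arrays.toString(counts));   // a uniform shuffle gives roughly equal counts
        }
    }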

10. Shuffling out of place
Here's the "good" shuffle from a previous slide:
    static void shuffle(int[] array) {
        for (int i = array.length; i > 0; i--) {
            int j = random.nextInt(i);
            swap(array, i - 1, j);
        }
    }
    // all permutations are equally likely

Shuffle with every item moved elsewhere:
    static void shuffle(int[] array) {
        for (int i = array.length; i > 1; i--) {
            int j = random.nextInt(i - 1);
            swap(array, i - 1, j);
        }
    }
    // all permutations are not equally likely

11. Randomly choosing N things
- Some students had an assignment in which they had to choose exactly N distinct random locations in an array
- Their algorithm was something like this:
    for i = 1 to n {
        choose a random location x
        if x was previously chosen, start over
    }
- Can you do better? (One possibility is sketched below.)
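One possible improvement (my suggestion, not the slide's answer) is to run only the first N steps of the Fisher-Yates shuffle over an index array, which yields N distinct locations without ever having to start over:

    import java.util.Arrays;
    import java.util.Random;

    public class ChooseN {
        private static final Random random = new Random();

        // choose exactly n distinct locations in 0..size-1 via a partial Fisher-Yates shuffle
        static int[] chooseLocations(int size, int n) {
            int[] locations = new int[size];
            for (int k = 0; k < size; k++) locations[k] = k;
            for (int i = 0; i < n; i++) {
                int j = i + random.nextInt(size - i);   // pick from the not-yet-chosen part
                int tmp = locations[i]; locations[i] = locations[j]; locations[j] = tmp;
            }
            return Arrays.copyOf(locations, n);          // the first n entries are the answer
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(chooseLocations(20, 5)));
        }
    }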

12. StupidSort
Here's a related technique:
    void stupidSort(int[] array) {
        while (!sorted(array)) {
            shuffle(array);
        }
    }
- This is included as an example of one of the worst algorithms known
- You should, however, be able to analyze (compute the running time of) this algorithm

13. Analyzing StupidSort
    void stupidSort(int[] array) {
        while (!sorted(array)) {
            shuffle(array);
        }
    }
- Let's assume good implementations of the called methods:
  - You can check whether an array is sorted in O(n) time (n = size of the array)
  - The shuffle method given earlier takes O(n) time (check this)
- There are n! possible arrangements of the array
- If we further assume all elements are unique, then there is only one correct arrangement, so the odds for each shuffle are 1 in n!
- If the odds of something are 1 in x, then on average we have to wait x trials for it to occur (the number of trials is geometrically distributed, with expected value x; I won't prove this here)
- Hence, the expected running time is O((test + shuffle) * n!) = O(2n * n!) = O((n + 1)!)

14. Zero-sum two-person games
A zero-sum two-person game is represented by an array (a payoff matrix) such as the following:
    [figure: a 4x4 payoff matrix, rows for Red's moves, columns for Blue's moves]
The game is played as follows:
- Red chooses a row and, at the same time, Blue chooses a column
- The number where the row and column intersect is the amount of money that Blue pays to Red (negative means Red pays Blue)
- Repeat as often as desired
The best strategy is to play unpredictably, that is, randomly
But this does not mean "equal probabilities"; for example, Blue probably shouldn't choose the first column very often
"Red" is the maximizing player, and "Blue" is the minimizing player

15. Approximating an optimal strategy
Finding the optimal strategy involves some complex math.
However, we can use a randomized algorithm to approximate this strategy:
- Start by assuming each player has chosen each strategy once
- According to these odds (1:1:1:1), randomly choose a move for Red; for example, row 0 (25, 3, -14, -8)
- The best move for Blue would have been column 2 (-14), so change Blue's strategy by adding one to the odds for that column, giving (1:1:2:1)
- According to these odds (1:1:2:1), randomly choose a move for Blue; for example, column 1 (3, 12, -8, 0)
- The best move for Red would have been row 1 (12), so change Red's odds to (1:2:1:1)
- Continue in this fashion for a few hundred moves

16. Example of random play
Start with odds (1:1:1:1) for Red and (1:1:1:1) for Blue
- Red randomly chooses row 0, so Blue chooses column 2
- Blue randomly chooses column 1, so Red chooses row 1
- Red randomly chooses row 1, so Blue chooses column 0
- Blue randomly chooses column 1, so Red chooses row 1
- Red randomly chooses row 1, so Blue chooses column 0
- Blue randomly chooses column 2, so Red chooses row 3
Best strategy so far: Red (1:3:1:2), Blue (3:1:2:1)
A code sketch of this procedure is given below.
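A rough Java sketch of this approximation procedure. The payoff matrix below is partly hypothetical: row 0 and column 1 copy the values quoted on the slides, the remaining entries are placeholders, and the class and helper names are mine:

    import java.util.Arrays;
    import java.util.Random;

    public class FictitiousPlay {
        private static final Random random = new Random();

        // 4x4 payoff matrix (amount Blue pays Red); only row 0 and column 1 are from the slides
        static final int[][] PAYOFF = {
            { 25,  3, -14, -8 },
            { -5, 12,   4,  0 },
            {  6, -8,   9, -2 },
            { -1,  0,  -3,  7 }
        };

        // pick an index at random, weighted by the given counts (the "odds")
        static int weightedChoice(int[] counts) {
            int total = 0;
            for (int c : counts) total += c;
            int pick = random.nextInt(total);
            for (int i = 0; i < counts.length; i++) {
                pick -= counts[i];
                if (pick < 0) return i;
            }
            return counts.length - 1;   // not reached
        }

        public static void main(String[] args) {
            int[] redCounts  = { 1, 1, 1, 1 };   // as if each row had been chosen once
            int[] blueCounts = { 1, 1, 1, 1 };   // as if each column had been chosen once

            for (int step = 0; step < 500; step++) {   // "a few hundred moves"
                // Red moves according to Red's odds; record Blue's best (minimizing) response
                int redRow = weightedChoice(redCounts);
                int bestCol = 0;
                for (int c = 1; c < 4; c++)
                    if (PAYOFF[redRow][c] < PAYOFF[redRow][bestCol]) bestCol = c;
                blueCounts[bestCol]++;

                // Blue moves according to Blue's odds; record Red's best (maximizing) response
                int blueCol = weightedChoice(blueCounts);
                int bestRow = 0;
                for (int r = 1; r < 4; r++)
                    if (PAYOFF[r][blueCol] > PAYOFF[r][bestRow]) bestRow = r;
                redCounts[bestRow]++;
            }

            System.out.println("Red odds:  " + Arrays.toString(redCounts));
            System.out.println("Blue odds: " + Arrays.toString(blueCounts));
        }
    }

The accumulated counts, normalized, approximate each player's mixed strategy.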

17. The 0-1 knapsack problem
- Even if we don't use a randomized algorithm to find a solution, we might use one to improve a solution
- The 0-1 knapsack problem can be expressed as follows:
  - A thief breaks into a house, carrying a knapsack...
  - He can carry up to 25 pounds of loot
  - He has to choose which of N items to steal
  - Each item has some weight and some value
  - "0-1" because each item is either stolen (1) or not stolen (0)
  - He has to select the items to steal in order to maximize the value of his loot, without exceeding 25 pounds
- A greedy algorithm does not always find an optimal solution... but...
- We could use a greedy algorithm to find an initial solution, then use a randomized algorithm to try to improve that solution

18. Improving the knapsack solution
- We can employ a greedy algorithm to fill the knapsack
- Then:
  - Remove a randomly chosen item from the knapsack
  - Replace it with other items (possibly using the same greedy algorithm to choose those items)
  - If the result is a better solution, keep it; otherwise go back to the previous solution
  - Repeat this procedure as many times as desired
- You might, for example, repeat it until there have been no improvements in the last k trials, for some value of k
- You probably won't get an optimal solution this way, but you might get a better one than you started with (see the sketch below)
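A rough sketch of this improve-by-random-removal idea, under my own assumptions: a 25-pound capacity, greedy filling by value-to-weight ratio, and made-up item data:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class KnapsackImprover {
        private static final Random random = new Random();
        private static final int CAPACITY = 25;

        // item i weighs WEIGHT[i] pounds and is worth VALUE[i] (made-up data)
        private static final int[] WEIGHT = { 10, 9, 8, 7, 5, 4, 3, 2 };
        private static final int[] VALUE  = { 60, 50, 48, 42, 25, 24, 15, 10 };

        // greedy fill: repeatedly add the best remaining value/weight item that still fits
        static List<Integer> greedyFill(List<Integer> taken) {
            List<Integer> result = new ArrayList<>(taken);
            int weight = totalWeight(result);
            boolean added = true;
            while (added) {
                added = false;
                int best = -1;
                double bestDensity = -1;
                for (int i = 0; i < WEIGHT.length; i++) {
                    if (!result.contains(i) && weight + WEIGHT[i] <= CAPACITY) {
                        double density = (double) VALUE[i] / WEIGHT[i];
                        if (density > bestDensity) { bestDensity = density; best = i; }
                    }
                }
                if (best >= 0) { result.add(best); weight += WEIGHT[best]; added = true; }
            }
            return result;
        }

        static int totalWeight(List<Integer> items) {
            int w = 0;
            for (int i : items) w += WEIGHT[i];
            return w;
        }

        static int totalValue(List<Integer> items) {
            int v = 0;
            for (int i : items) v += VALUE[i];
            return v;
        }

        public static void main(String[] args) {
            List<Integer> solution = greedyFill(new ArrayList<>());
            for (int trial = 0; trial < 1000; trial++) {
                // remove a randomly chosen item, then greedily refill the knapsack
                List<Integer> candidate = new ArrayList<>(solution);
                candidate.remove(random.nextInt(candidate.size()));
                candidate = greedyFill(candidate);
                if (totalValue(candidate) > totalValue(solution)) {
                    solution = candidate;   // keep the better solution
                }                           // otherwise keep the previous solution
            }
            System.out.println("items: " + solution + ", value: " + totalValue(solution)
                    + ", weight: " + totalWeight(solution));
        }
    }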

19. Queueing problems
Suppose:
- Customers arrive at a service desk at an average rate of 50 an hour
- It takes one minute to serve each customer
- How long, on average, does a customer have to wait in line?
If you know queueing theory, you may be able to solve this.
Otherwise, just write a program to try it out!
This kind of program is typically called a Monte Carlo method, and is a great way to avoid learning scads of math. A small simulation sketch follows.
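A rough discrete-time Monte Carlo sketch of that queue, under my own modeling assumptions: time advances one minute at a time, each minute at most one customer arrives (with probability 50/60), and the server finishes at most one customer per minute:

    import java.util.Random;

    public class QueueSimulation {
        public static void main(String[] args) {
            Random random = new Random();
            double arrivalsPerMinute = 50.0 / 60.0;   // 50 customers per hour
            int minutes = 1_000_000;                  // length of the simulated run (my choice)

            int queueLength = 0;      // customers currently waiting or being served
            long customers = 0;       // total customers that have arrived
            long totalWaiting = 0;    // customer-minutes spent waiting in line

            for (int t = 0; t < minutes; t++) {
                if (random.nextDouble() < arrivalsPerMinute) {   // arrival this minute?
                    queueLength++;
                    customers++;
                }
                if (queueLength > 0) queueLength--;   // serve one customer this minute
                totalWaiting += queueLength;          // everyone still in line waits a minute
            }

            System.out.printf("average wait per customer: %.2f minutes%n",
                    (double) totalWaiting / customers);
        }
    }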

20. Conclusions
- Sometimes you want random numbers for their own sake—for example, you want to shuffle a virtual deck of cards to play a card game
- Sometimes you can use random numbers in a simulation (such as the queue example) to avoid difficult mathematical problems, or when there is no known feasible exact algorithm (such as for the 0-1 knapsack problem)
- Randomized algorithms are basically experimental—you almost certainly won't get perfect or optimal results, but you can get "pretty good" results
- Typically, the longer you allow a randomized algorithm to run, the better your results
- A randomized algorithm is what you do when you don't know what else to do
- As such, it should be in every programmer's toolbox!

21. The End