
1 Concurrent Systems Parallelism

2 Final Exam Schedule CS1311 Sections L/M/N Tuesday/Thursday 10:00 A.M. Exam Scheduled for 8:00 Friday May 5, 2000 Physics L1

3 Final Exam Schedule CS1311 Sections E/F Tuesday/Thursday 2:00 P.M. Exam Scheduled for 2:50 Wednesday May 3, 2000 Physics L1

4 Concurrent Systems

5 Sequential Processing
All of the algorithms we’ve seen so far are sequential:
– They have one “thread” of execution
– One step follows another in sequence
– One processor is all that is needed to run the algorithm

6 A Non-sequential Example
Consider a house with a burglar alarm system. The system continually monitors:
– The front door
– The back door
– The sliding glass door
– The door to the deck
– The kitchen windows
– The living room windows
– The bedroom windows
The burglar alarm is watching all of these “at once” (at the same time).

7 Another Non-sequential Example
Your car has an onboard digital dashboard that simultaneously:
– Calculates how fast you’re going and displays it on the speedometer
– Checks your oil level
– Checks your fuel level and calculates consumption
– Monitors the heat of the engine and turns on a light if it is too hot
– Monitors your alternator to make sure it is charging your battery

8 Concurrent Systems
A system in which:
– Multiple tasks can be executed at the same time
– The tasks may be duplicates of each other, or distinct tasks
– The overall time to perform the series of tasks is reduced

9 Advantages of Concurrency
– Concurrent processes can reduce duplication in code.
– The overall runtime of the algorithm can be significantly reduced.
– More real-world problems can be solved than with sequential algorithms alone.
– Redundancy can make systems more reliable.

10 Disadvantages of Concurrency
– Runtime is not always reduced, so careful planning is required.
– Concurrent algorithms can be more complex than sequential algorithms.
– Shared data can be corrupted.
– Communication between tasks is needed.

11 Achieving Concurrency
Many computers today have more than one processor (multiprocessor machines).
[Diagram: two CPUs connected to shared memory over a memory bus]

12 Achieving Concurrency
Concurrency can also be achieved on a computer with only one processor:
– The computer “juggles” jobs, swapping its attention to each in turn
– “Time slicing” allows many users to get CPU resources
– Tasks may be suspended while they wait for something, such as device I/O
[Diagram: one CPU cycling among task 1, task 2, and task 3 while the idle tasks sleep]
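
A minimal sketch of this idea in Python (the task function and its timings are made up for illustration): even with a single processor, the threads take turns, so their output interleaves.

import threading
import time

def task(name, steps):
    # Each task does a little work, then waits (e.g., for device I/O),
    # letting the CPU attend to the other tasks in the meantime.
    for i in range(steps):
        print(f"{name}: step {i}")
        time.sleep(0.01)

threads = [threading.Thread(target=task, args=(f"task {n}", 3)) for n in (1, 2, 3)]
for t in threads:
    t.start()
for t in threads:
    t.join()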

13 Concurrency vs. Parallelism
Concurrency is the execution of multiple tasks at the same time, regardless of the number of processors. Parallelism is the use of multiple processors to execute a single task.

14 Types of Concurrent Systems
– Multiprogramming
– Multiprocessing
– Multitasking
– Distributed Systems

15 Multiprogramming
– Share a single CPU among many users or tasks.
– May have a time-shared algorithm or a priority algorithm for determining which task to run next.
– Give the illusion of simultaneous processing through rapid swapping of tasks (interleaving).
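
A toy Python simulation of time-sliced multiprogramming (the task names, work amounts, and quantum are invented): each task runs for a fixed quantum and then goes to the back of the line, which is what produces the illusion of simultaneous progress.

from collections import deque

def round_robin(tasks, quantum=2):
    # tasks maps a task name to its remaining units of work.
    ready = deque(tasks.items())
    while ready:
        name, remaining = ready.popleft()
        ran = min(quantum, remaining)
        print(f"{name} runs for {ran} unit(s)")
        if remaining > ran:
            ready.append((name, remaining - ran))   # not finished: back of the queue

round_robin({"user1": 5, "user2": 3, "user3": 4})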

16 Multiprogramming
[Diagram: programs for User 1 and User 2 in memory, with a single CPU switching between them]

17 Multiprogramming
[Diagram: one CPU shared by several tasks/users]

18 Multiprocessing
– Executes multiple tasks at the same time
– Uses multiple processors to accomplish the tasks
– Each processor may also timeshare among several tasks
– Has a shared memory that is used by all the tasks
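
A small Python sketch of this structure, assuming a machine with more than one processor (the data and the work done are illustrative): two processes run at the same time and communicate through shared memory.

from multiprocessing import Array, Process

def square_slice(shared, start, stop):
    # Each processor works on its own slice of the shared memory.
    for i in range(start, stop):
        shared[i] = shared[i] * shared[i]

if __name__ == "__main__":
    data = Array('d', [0, 1, 2, 3, 4, 5, 6, 7])   # shared memory used by all the tasks
    p1 = Process(target=square_slice, args=(data, 0, 4))
    p2 = Process(target=square_slice, args=(data, 4, 8))
    p1.start(); p2.start()
    p1.join(); p2.join()
    print(list(data))                             # [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 36.0, 49.0]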

19 Multiprocessing
[Diagram: two CPUs and a shared memory holding User 1: Task 1, User 1: Task 2, and User 2: Task 1]

20 Multiprocessing
[Diagram: multiple CPUs serving tasks/users through a shared memory]

21 Multitasking
– A single user can have multiple tasks running at the same time.
– Can be done with one or more processors.
– Used to be rare and found only on expensive multiprocessing systems, but now most modern operating systems can do it.

22 Multitasking
[Diagram: one CPU running User 1: Task 1, User 1: Task 2, and User 1: Task 3 in memory]

23 Multitasking
[Diagram: one or more CPUs running several tasks for a single user]

24 Distributed Systems
Multiple computers working together with no central program “in charge.”
[Diagram: ATMs at Buford, Perimeter, Student Center, and North Avenue connected to a central bank]

25 Distributed Systems
Advantages:
– No bottlenecks from sharing processors
– No central point of failure
– Processing can be localized for efficiency
Disadvantages:
– Complexity
– Communication overhead
– Distributed control

26 Questions?

27

28 Parallelism

29 Using multiple processors to solve a single task. Involves:
– Breaking the task into meaningful pieces
– Doing the work on many processors
– Coordinating and putting the pieces back together

30 Parallelism
[Diagram: processing nodes, each with a CPU, memory, and a network interface, joined by a network]

31 Parallelism
[Diagram: multiple CPUs working on the pieces of a single task]

32 Pipeline Processing
Repeating a sequence of operations or pieces of a task. Allocating each piece to a separate processor and chaining them together produces a pipeline, completing tasks faster.
[Diagram: stages A, B, C, D chained from input to output]
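
A rough two-stage pipeline in Python using threads and queues (the stages and their work are placeholders): while stage B works on one item, stage A can already be working on the next.

import queue
import threading

def stage(work, inbox, outbox):
    # Repeatedly take an item, do this stage's piece of the task, and pass it on.
    while True:
        item = inbox.get()
        if item is None:            # sentinel: no more work coming
            outbox.put(None)
            return
        outbox.put(work(item))

q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
stage_a = threading.Thread(target=stage, args=(lambda x: x + 1, q_in, q_mid))
stage_b = threading.Thread(target=stage, args=(lambda x: x * 10, q_mid, q_out))
stage_a.start(); stage_b.start()

for item in (1, 2, 3):
    q_in.put(item)
q_in.put(None)

results = []
while (value := q_out.get()) is not None:
    results.append(value)
print(results)                      # [20, 30, 40]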

33 Example
Suppose you have a choice between:
– a washer and a dryer, each with a 30-minute cycle, or
– a washer/dryer combo with a one-hour cycle.
The correct answer depends on how much work you have to do.

34 One Load
[Timeline: one load washed then dried on the separate machines vs. the combo; the separate machines add a transfer overhead between wash and dry]

35 Three Loads
[Timeline: with three loads the washer and dryer overlap as a pipeline, while the combo must run the loads one after another]
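
To make the trade-off concrete, using the cycle times from the example: with one load, the combo takes 60 minutes while the separate washer and dryer take 30 + 30 = 60 minutes plus the transfer overhead, so the combo is no worse. With three loads, the combo takes 3 × 60 = 180 minutes, but the washer/dryer pair forms a pipeline (load 2 washes while load 1 dries), so the last dryer cycle finishes at 4 × 30 = 120 minutes.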

36 Examples of Pipelined Tasks
– Automobile manufacturing
– Instruction processing within a computer
[Diagram: tasks A, B, C, D overlapping in time as they move through the pipeline]

37 Task Queues
A supervisor processor maintains a queue of tasks to be performed in shared memory. Each processor queries the queue, dequeues the next task and performs it. Task execution may involve adding more tasks to the task queue.
[Diagram: the supervisor’s task queue feeding processors P1, P2, P3, …, Pn]
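
A rough Python sketch of such a task queue using worker processes (the "task" here, halving a number, is only a stand-in): each worker dequeues the next piece of work, and performing a task may enqueue further tasks.

from multiprocessing import JoinableQueue, Process

def worker(tasks):
    # Each processor queries the queue, dequeues the next task and performs it.
    while True:
        n = tasks.get()
        if n > 1:
            tasks.put(n // 2)       # task execution may add more tasks to the queue
        print(f"processed {n}")
        tasks.task_done()

if __name__ == "__main__":
    tasks = JoinableQueue()
    workers = [Process(target=worker, args=(tasks,), daemon=True) for _ in range(3)]
    for w in workers:
        w.start()
    for n in (8, 5, 12):            # the supervisor seeds the queue with initial tasks
        tasks.put(n)
    tasks.join()                    # wait until every task (and its follow-on tasks) is done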

38 Parallelizing Algorithms How much gain can we get from parallelizing an algorithm?

39 Parallel Bubblesort
We can use N/2 processors to do all of the pair-wise comparisons at once, “flopping” between the two sets of adjacent pairs on alternate passes.
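
One way to realize this is odd-even transposition sort. A sequential Python simulation of the parallel rounds (in hardware, each comparison inside a round could be handled by its own processor):

def parallel_bubblesort(a):
    # N rounds; within a round all the pair-wise comparisons are independent,
    # so N/2 processors could perform them at the same time.
    n = len(a)
    for round_no in range(n):
        start = round_no % 2                 # "flop" between the two sets of pairs
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(parallel_bubblesort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]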

40 Runtime of Parallel Bubblesort

41 Completion Time of Bubblesort
Sequential bubblesort finishes in N² time. Parallel bubblesort finishes in N time.
              Sequential    Parallel
Bubble Sort   O(N²)         O(N)

42 Product Complexity
– Got done in O(N) time, better than O(N²)
– Each time “chunk” does O(N) work
– There are N time chunks
– Thus, the amount of work is still O(N²)
Product complexity is the amount of work per “time chunk” multiplied by the number of “time chunks” – the total work done.

43 Ceiling of Improvement
Parallelization can reduce time, but it cannot reduce work. The product complexity cannot change or improve.
How much improvement can parallelization provide? Dividing the total work by the number of processors gives a lower bound on the time:
– Given an O(N log N) algorithm and log N processors, the algorithm will take at least O(N log N / log N) = O(N) time.
– Given an O(N³) algorithm and N processors, the algorithm will take at least O(N³ / N) = O(N²) time.

44 Number of Processors
– Processors are limited by hardware.
– Typically, the number of processors is a power of 2.
– Usually: the number of processors is a constant factor, 2^K.
– Conceivably: networked computers joined as needed (à la Borg?).

45 Adding Processors
A program on one processor:
– Runs in X time
Adding another processor:
– Runs in no less than X/2 time
– Realistically, it will run in X/2 + δ time because of overhead
At some point, adding processors will not help and could degrade performance.

46 Overhead of Parallelization
– Parallelization is not free. Processors must be controlled and coordinated.
– We need a way to govern which processor does what work; this involves extra work.
– Often the program must be written in a special programming language for parallel systems.
– Often, a parallelized program for one machine (with, say, 2^K processors) doesn’t work on other machines (with, say, 2^L processors).

47 What We Know about Tasks
– Relatively isolated units of computation
– Should be roughly equal in duration
– Duration of the unit of work must be much greater than overhead time
– Policy decisions and coordination required for shared data
– Simpler algorithms are the easiest to parallelize

48 Questions?

49 More?

50 Matrix Multiplication

51 Inner Product Procedure
procedure inner_prod(a, b, c isoftype in/out Matrix, i, j isoftype in Num)
// Compute the inner product of a[i][*] and b[*][j] and store it in c[i][j]
   sum isoftype Num
   k isoftype Num
   sum <- 0
   k <- 1
   loop
      exitif(k > N)
      sum <- sum + a[i][k] * b[k][j]
      k <- k + 1
   endloop
   c[i][j] <- sum      // store the result
endprocedure // inner_prod

52
Matrix definesa Array[1..N][1..N] of Num
N is                    // Declare constant defining size
                        // of arrays
Algorithm P_Demo
   a, b, c isoftype Matrix Shared
   server isoftype Num
   Initialize(NUM_SERVERS)
   // Input a and b here
   // (code not shown)
   i, j isoftype Num

53
   i <- 1
   loop
      exitif(i > N)
      server <- (i * NUM_SERVERS) DIV N
      j <- 1
      loop
         exitif(j > N)
         RThread(server, inner_prod(a, b, c, i, j))
         j <- j + 1
      endloop
      i <- i + 1
   endloop
   Parallel_Wait(NUM_SERVERS)
   // Output c here
endalgorithm // P_Demo
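
The pseudocode above relies on the course's runtime (Initialize, RThread, Parallel_Wait, NUM_SERVERS). A rough Python equivalent, using a process pool in place of the server processes and small hard-coded matrices for illustration:

from concurrent.futures import ProcessPoolExecutor

N = 3                                        # size of the square matrices
a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
b = [[9, 8, 7], [6, 5, 4], [3, 2, 1]]

def inner_prod(i, j):
    # Inner product of row i of a and column j of b, as in the procedure above.
    return sum(a[i][k] * b[k][j] for k in range(N))

if __name__ == "__main__":
    c = [[0] * N for _ in range(N)]
    with ProcessPoolExecutor() as pool:      # the pool plays the role of the servers
        futures = {(i, j): pool.submit(inner_prod, i, j)   # one task per element of c,
                   for i in range(N) for j in range(N)}    # like RThread for each (i, j)
        for (i, j), fut in futures.items():
            c[i][j] = fut.result()           # collecting the results ~ Parallel_Wait
    print(c)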

54 Questions?

55

