Parallel Computers Prof. Sin-Min Lee Department of Computer Science.


1 Parallel Computers Prof. Sin-Min Lee Department of Computer Science

2 Uniprocessor Systems
Improve performance by allowing multiple, simultaneous memory accesses:
- Requires multiple address, data, and control buses (one set for each simultaneous memory access)
- The memory chip has to be able to handle multiple transfers simultaneously

3 Uniprocessor Systems
Multiport Memory:
- Has two sets of address, data, and control pins to allow simultaneous data transfers to occur
- The CPU and DMA controller can transfer data concurrently
- A system with more than one CPU could handle simultaneous requests from two different processors

4 Uniprocessor Systems
Multiport Memory (cont.):
Can:
- Handle two requests to read data from the same location at the same time
Cannot:
- Process two simultaneous requests to write data to the same memory location
- Process requests to read from and write to the same memory location simultaneously

5 Multiprocessors
[Diagram: two CPUs, memory, a device controller, and an I/O port connected by a shared bus]

9 Multiprocessors
- Systems designed to have 2 to 8 CPUs
- The CPUs all share the other parts of the computer: memory, disk, system bus, etc.
- CPUs communicate via memory and the system bus

10 Multiprocessors
- Each CPU shares memory, disks, etc.
- Cheaper than clusters
- Not as good performance as clusters
- Often used for small servers and high-end workstations

11 Multiprocessors
- The OS automatically shares work among the available CPUs
- On a workstation, one CPU can be running an engineering design program while another CPU does complex graphics formatting

13 Applications of Parallel Computers
- Traditionally: government labs, numerically intensive applications
- Research institutions
- Recent growth in industrial applications (236 of the top 500 machines)
- Financial analysis, drug design and analysis, oil exploration, aerospace and automotive

14 Flynn's Classification (1966)
Michael Flynn, Professor at Stanford University

16 Multiprocessor Systems: Flynn's Classification
Single instruction, multiple data (SIMD):
- Executes a single instruction on multiple data values simultaneously using many processors
- Since only one instruction is processed at any given time, it is not necessary for each processor to fetch and decode the instruction
- This task is handled by a single control unit that sends the control signals to each processor
- Example: array processor
[Diagram: a control unit and main memory driving an array of processor/memory pairs over a communications network]
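The SIMD idea above can be sketched in software. This is only an analogy, not a real array processor: NumPy's vectorized operations apply one "instruction" across many data elements at once, much as a SIMD control unit broadcasts a single instruction to every processing element.

```python
import numpy as np

# SIMD analogy: one instruction ("add 1") applied to many data values
# at once. NumPy dispatches a single vectorized operation over the
# whole array, so no per-element fetch/decode happens in Python code,
# loosely mirroring the single control unit described above.
data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
result = data + 1          # one "instruction", eight data values
print(result.tolist())     # [2, 3, 4, 5, 6, 7, 8, 9]
```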

17 Why Multiprocessors?
1. Microprocessors are the fastest CPUs; collecting several is much easier than redesigning one
2. Complexity of current microprocessors: do we have enough ideas to sustain 1.5x/yr? Can we deliver such complexity on schedule?
3. Slow (but steady) improvement in parallel software (scientific apps, databases, OS)
4. Emergence of embedded and server markets driving microprocessors in addition to desktops: embedded functional parallelism (producer/consumer model); the server figure of merit is tasks per hour rather than latency

18 Parallel Processing Intro
- Long-term goal of the field: scale the number of processors to the size of the budget and the desired performance
- Machines today: Sun Enterprise 10000 (8/00)
  - 64 400 MHz UltraSPARC II CPUs, 64 GB SDRAM memory, 868 18 GB disks, tape
  - $4,720,800 total: 64 CPUs 15%, 64 GB DRAM 11%, disks 55%, cabinet 16% ($10,800 per processor, or ~0.2% per processor)
  - Minimal E10K: 1 CPU, 1 GB DRAM, 0 disks, tape, ~$286,700
  - $10,800 (4%) per CPU, plus $39,600 board/4 CPUs (~8%/CPU)
- Machines today: Dell Workstation 220 (2/01)
  - 866 MHz Intel Pentium III (in minitower)
  - 0.125 GB RDRAM memory, 1 10 GB disk, 12X CD, 17" monitor, nVIDIA GeForce2 GTS 32 MB DDR graphics card, 1-yr service
  - $1,600; for an extra processor, add $350 (~20%)

19 Major MIMD Styles
1. Centralized shared memory ("Uniform Memory Access" time, or "Shared Memory Processor")
2. Decentralized memory (memory module with CPU)
   - Gets more memory bandwidth and lower memory latency
   - Drawback: longer communication latency
   - Drawback: more complex software model

21 Multiprocessor Systems Flynn’s Classification

22 Four Categories of Flynn's Classification:
- SISD: single instruction, single data
- SIMD: single instruction, multiple data
- MISD: multiple instruction, single data **
- MIMD: multiple instruction, multiple data
** The MISD classification is not practical to implement. In fact, no significant MISD computers have ever been built. It is included only for completeness.

23 MIMD computers usually have a different program running on every processor. This makes for a very complex programming environment: which processor is doing which task at what time?

25 Memory latency
The time between issuing a memory fetch and receiving the response. Simply put, if execution proceeds before the memory request completes, unexpected results will occur: the values being used are not the ones requested.

26 A similar problem can occur with instruction execution itself.
Synchronization
The need to enforce the ordering of instruction executions according to their data dependencies: instruction b must occur before instruction a.
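The ordering requirement above can be sketched with threads. This is a minimal illustration, not taken from the slides: "instruction b" produces a value that "instruction a" consumes, so a must not run until b has finished; an event object enforces the dependency.

```python
import threading

# Sketch of enforcing ordering by data dependency: instruction a
# reads the value that instruction b writes, so a must wait for b.
result = {}
b_done = threading.Event()

def instruction_b():
    result["x"] = 21          # b produces the data
    b_done.set()              # signal that b has completed

def instruction_a():
    b_done.wait()             # a blocks until b's data is ready
    result["y"] = result["x"] * 2

t_a = threading.Thread(target=instruction_a)
t_b = threading.Thread(target=instruction_b)
t_a.start(); t_b.start()
t_a.join(); t_b.join()
print(result["y"])            # 42, regardless of thread scheduling
```

Without the event, instruction a could read memory before b had written it, which is exactly the "values being used are not the ones requested" hazard described on the previous slide.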

27 Despite potential problems, MIMD can prove larger than life.
MIMD Successes
IBM Deep Blue: a computer beat a professional chess player. Some may not consider this a fair example, because Deep Blue was built to beat Kasparov alone; it "knew" his play style, so it could counter his projected moves. Still, Deep Blue's win marked a major victory for computing.

28 IBM's latest: a supercomputer that models nuclear explosions. IBM Poughkeepsie built the world's fastest supercomputer for the U.S. Department of Energy. Its job was to model nuclear explosions.

29 MIMD is the most complex, fastest, and most flexible parallel paradigm. It beat a world-class chess player at his own game. It models things that few people understand. It is parallel processing at its finest.

30 Multiprocessor Systems: System Topologies
The topology of a multiprocessor system refers to the pattern of connections between its processors. It is quantified by standard metrics:
- Diameter: the maximum distance between two processors in the computer system
- Bandwidth: the capacity of a communications link multiplied by the number of such links in the system (best case)
- Bisection bandwidth: the total bandwidth of the links connecting the two halves of the processors, split so that the number of links between the two halves is minimized (worst case)
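The diameter metric above is concrete enough to compute. A minimal sketch, assuming a topology given as an adjacency list (here a hypothetical 6-processor ring): diameter is the longest shortest-path distance between any pair of processors, found via breadth-first search.

```python
from collections import deque

def shortest_dists(graph, start):
    """Hop distances from `start` to every processor (BFS)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def diameter(graph):
    """Maximum over all pairs of the shortest-path distance."""
    return max(max(shortest_dists(graph, s).values()) for s in graph)

# 6-processor ring: each node links to its two neighbors.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(diameter(ring))   # 3 -- halfway around the ring
```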

32 Multiprocessor Systems: System Topologies
Six categories of system topologies: shared bus, ring, tree, mesh, hypercube, completely connected

35 Multiprocessor Systems: System Topologies
Shared bus:
- The simplest topology
- Processors communicate with each other exclusively via this bus
- Can handle only one data transmission at a time
- Can be easily expanded by connecting additional processors to the shared bus, along with the necessary bus arbitration circuitry
[Diagram: processor/memory pairs attached to a shared bus with global memory]

39 Multiprocessor Systems: System Topologies
Ring:
- Uses direct, dedicated connections between processors
- Allows all communication links to be active simultaneously
- A piece of data may have to travel through several processors to reach its final destination
- All processors must have two communication links
[Diagram: six processors connected in a ring]

40 Multiprocessor Systems: System Topologies
Tree topology:
- Uses direct connections between processors
- Each processor has three connections
- Its primary advantage is its relatively low diameter
- Example: DADO computer
[Diagram: seven processors in a binary tree]

44 Multiprocessor Systems: System Topologies
Mesh topology:
- Every processor connects to the processors above, below, left, and right
- Left-to-right and top-to-bottom wraparound connections may or may not be present
[Diagram: a 3x3 grid of processors]

47 Multiprocessor Systems: System Topologies
Hypercube:
- A multidimensional mesh
- Has n processors, each with log n connections
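The "n processors, log n connections" property has a neat encoding, sketched here as an illustration: number the n = 2^d processors in binary, and each processor's neighbors are exactly the ids that differ from its own in one bit, giving d = log2(n) links per node.

```python
# Hypercube connectivity sketch: in a d-dimensional hypercube
# (n = 2**d processors), flipping each of the d bits of a node id
# yields that node's d neighbors.
def hypercube_neighbors(node, dims):
    return [node ^ (1 << b) for b in range(dims)]

d = 3                                  # 3-D hypercube, n = 8 processors
print(hypercube_neighbors(0b000, d))   # [1, 2, 4]
print(hypercube_neighbors(0b101, d))   # [4, 7, 1]
```

A useful consequence of this numbering is that the hop distance between two processors is the number of bits in which their ids differ, so the diameter is d.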

50 Multiprocessor Systems: System Topologies
Completely connected:
- Every processor has n-1 connections, one to each of the other processors
- The complexity of the processors increases as the system grows
- Offers maximum communication capabilities
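The growth in complexity noted above can be made concrete: with n-1 links per processor, the total number of links is n(n-1)/2, which grows quadratically. A small sketch:

```python
# Why completely connected systems scale poorly: each of the n
# processors has n-1 links, and each link is shared by two
# processors, so the total link count is n*(n-1)/2.
def total_links(n):
    return n * (n - 1) // 2

for n in (4, 8, 16, 32):
    print(n, total_links(n))   # 6, 28, 120, 496 -- quadratic growth
```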

51 Architecture Details: Computers to MPPs
[Diagram: the world's simplest computer (a processor/memory pair); a standard computer (adding cache and disk); and an MPP (many processor/memory/cache/disk nodes joined by a network)]

52 A Supercomputer at $5.2 million: Virginia Tech's 1,100-node Mac G5 supercomputer

53 The Virginia Polytechnic Institute and State University has built a supercomputer comprising a cluster of 1,100 dual-processor Macintosh G5 computers. Based on preliminary benchmarks, Big Mac is capable of 8.1 teraflops. The Mac supercomputer is still being fine-tuned, and the full extent of its computing power will not be known until November. But the 8.1-teraflops figure would make Big Mac the world's fourth-fastest supercomputer.

54 Big Mac's cost relative to similar machines is as noteworthy as its performance. The Apple supercomputer was constructed for just over US$5 million, and the cluster was assembled in about four weeks. In contrast, the world's leading supercomputers cost well over $100 million to build and require several years to construct. The Earth Simulator, which clocked in at 38.5 teraflops in 2002, reportedly cost up to $250 million.

55 Srinidhi Varadarajan, Ph.D.
Dr. Srinidhi Varadarajan is an Assistant Professor of Computer Science at Virginia Tech. He was honored with the NSF Career Award in 2002 for "Weaving a Code Tapestry: A Compiler Directed Framework for Scalable Network Emulation." He has focused his research on building a distributed network emulation system that can scale to emulate hundreds of thousands of virtual nodes.
October 28, 2003, 7:30pm - 9:00pm, Santa Clara Ballroom

56 Parallel Computers
Two common types:
- Cluster
- Multiprocessor

57 Cluster Computers

58 Clusters on the Rise
Using clusters of small machines to build a supercomputer is not a new concept. Another of the world's top machines, housed at the Lawrence Livermore National Laboratory, was constructed from 2,304 Xeon processors. The machine was built by Utah-based Linux Networx.
Clustering technology has meant that traditional big-iron leaders like Cray and IBM have new competition from makers of smaller machines. Dell, among other companies, has sold high-powered computing clusters to research institutions.

59 Cluster Computers
- Each computer in a cluster is a complete computer by itself: CPU, memory, disk, etc.
- Computers communicate with each other via some interconnection bus

60 Cluster Computers
- Typically used where one computer does not have enough capacity to do the expected work, e.g. large servers
- Cheaper than building one giant computer

61 Although not new, supercomputing clustering technology is still impressive. It works by farming out chunks of data to individual machines, and it works better for some types of computing problems than others. For example, a cluster would not be ideal to compete against IBM's Deep Blue supercomputer in a chess match; in that case, all the data must be available to one processor at the same moment, so the machine operates much as the human brain handles tasks. However, a cluster would be ideal for processing seismic data for oil exploration, because that computing job can be divided into many smaller tasks.

62 Cluster Computers
- Need to break up work among the computers in the cluster
- Example: the Microsoft.com search engine
  - 6 computers running SQL Server, each with a copy of the MS Knowledge Base
  - Search requests come to one computer, which sends each request to one of the 6, attempting to keep all 6 busy
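The dispatch pattern above can be sketched in a few lines. The server names and request strings here are hypothetical, and round-robin is just one simple way to "attempt to keep all 6 busy":

```python
import itertools

# Front-end sketch: hand each incoming search request to one of six
# back-end SQL Server machines in round-robin order.
servers = [f"sql-server-{i}" for i in range(6)]
next_server = itertools.cycle(servers)

def dispatch(request):
    target = next(next_server)   # pick the next server in rotation
    return target, request

assignments = [dispatch(f"query-{n}") for n in range(8)]
print(assignments[0])   # ('sql-server-0', 'query-0')
print(assignments[6])   # ('sql-server-0', 'query-6') -- wraps around
```

Round-robin works here because every server holds a full copy of the Knowledge Base, so any server can answer any request.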

63 The Virginia Tech Mac supercomputer should be fully functional and in use by January 2004. It will be used for research into nanoscale electronics, quantum chemistry, computational chemistry, aerodynamics, molecular statics, computational acoustics and the molecular modeling of proteins.

64 Supercomputers in China

74 The previous speed leader is a computer called the Cray XT5 Jaguar, located at the Oak Ridge National Laboratory in the United States. China has invested billions in computing in recent years, and supercomputers are being pressed into service for everything from designing aeroplanes to probing the origins of the universe. They're also being used all over the world to model climate-change scenarios.

75 Tianhe-1A: China's New Supercomputer Beats the Cray XT5 Jaguar of the US

76 It contains a massive 7,000 graphics processors and 14,000 Intel chips, but it was Chinese researchers who worked out how to wire them up to create the lightning-fast data transfer and computational power. The system was designed at the National University of Defense Technology in China. This supercomputer, based in China's National Center for Supercomputing, has already started working for the local weather service and the National Offshore Oil Corporation.

77 The supercomputer set a performance record by crunching 2.507 petaflops (two-and-a-half thousand trillion operations per second), which is about 40 percent more than the Cray XT5 Jaguar's speed of 1.75 petaflops. The Tianhe-1A is twenty-nine million times more powerful than the earliest supercomputers of the 1970s.

78 Linux Operating System
Tianhe-1A runs on the Linux operating system. While the thousands of individual processors used in the supercomputer are made in America, the switches that connect those computer chips were built by Chinese scientists. The connections and the switches are a critical success factor for a supercomputer: the faster you make the interconnect, the better your overall computation will flow.
Milky Way is reported to be 47% faster than the XT5, and achieves this by uniting its thousands of Intel chips with graphics processors made by rival firm Nvidia.

79 The supercomputer consists of 20,000 smaller computers linked together, and covers more than a third of an acre (17,000 square feet). It fills more than 100 fridge-sized computer racks, which together weigh more than 155 tonnes. The Tianhe-1A is powered by 7,168 Nvidia Tesla M2050 GPUs and 14,336 Intel Xeon CPUs, and the system consumes 4.04 megawatts of electricity. Five new supercomputers are being built that are supposed to be four times more powerful than China's new machine: three are in the U.S. and two are in Japan.

80 Luckily, the Tianhe-1A can. The fact that China currently owns the world's fastest supercomputer is not really relevant to an understanding of the international league tables of computing power. It almost goes without saying that in 18 months' time even 2.507 petaflops will look like pocket-calculator stuff. The real story here is that China's unprecedented level of investment in supercomputing is producing huge numbers of software engineers in the country. It is not the Tianhe-1A that spells the future of computing dominance, but the legions of computing experts of the future.

