Slide 1: Berkeley NOW Project
David E. Culler
culler@cs.berkeley.edu
http://now.cs.berkeley.edu/
Sun Visit, May 1, 1998

Slide 2: Project Goals
Make a fundamental change in how we design and construct large-scale systems
– market reality:
  » 50%/year performance growth => cannot allow a 1–2 year engineering lag
– technological opportunity:
  » single-chip “Killer Switch” => fast, scalable communication
Highly integrated building-wide system
Explore novel system design concepts in this new “cluster” paradigm

Slide 3: Remember the “Killer Micro”
– Technology change in all markets
– At many levels: architecture, compiler, OS, application
[Chart: Linpack peak performance over time]

Slide 4: Another Technological Revolution
The “Killer Switch”
– single-chip building block for scalable networks
– high bandwidth
– low latency
– very reliable
  » if it’s not unplugged
=> System Area Networks

Slide 5: One Example: Myrinet
– 8 bidirectional ports, 160 MB/s each way
– < 500 ns routing delay
– Simple: just moves the bits
– Detects connectivity and deadlock
– Tomorrow: gigabit Ethernet?

Slide 6: Potential: Snap Together Large Systems
– incremental scalability
– time / cost to market
– independent failure => availability
[Chart: node performance in a large system vs. engineering lag time]

Slide 7: Opportunity: Rethink OS Design
Remote memory and processors are closer to you than your own disks!
– Networking stacks?
– Virtual memory?
– File system design?

Slide 8: Example: Traditional File System
[Diagram: clients with small local private file caches connected over a fast channel (HPPI) to a central server with a large global shared file cache backed by RAID disk storage]
– Expensive, complex, non-scalable, single point of failure
– Server is a bottleneck; server resources at a premium
– Client resources poorly utilized

Slide 9: Truly Distributed File System
[Diagram: a file cache and processor on every node, joined by a scalable low-latency communication network]
– VM: page to remote memory
– Network RAID striping, G = node comm BW / disk BW (worked example below)
– Local cache plus cluster caching
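
To make the ratio concrete, a worked example with one assumed number: 160 MB/s is Myrinet's per-port rate from slide 5, while the ~10 MB/s per-disk rate is an assumed late-1990s figure, not a number from this deck.

```latex
% G bounds how many remote disk stripes one node's network link can
% absorb. 160 MB/s is the Myrinet per-port rate quoted on slide 5;
% the 10 MB/s disk transfer rate is an assumed era-typical value.
G = \frac{\text{node comm BW}}{\text{disk BW}}
  \approx \frac{160~\text{MB/s}}{10~\text{MB/s}} = 16
```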

Slide 10: Fast Communication Challenge
Fast processors and fast networks; the time is spent crossing between them.
[Diagram: the “Killer Switch” joins “Killer Platform” nodes through network interface hardware and communication software, with costs spanning ns, µs, and ms]

Slide 11: Opening: Intelligent Network Interfaces
Dedicated processing power and storage embedded in the network interface
– an I/O card today; on chip tomorrow?
[Diagram: Sun Ultra 170 nodes, each with processor, memory, and a Myricom NIC (160 MB/s) on the 50 MB/s S-Bus I/O bus, attached to the Myricom network]

Slide 12: Our Attack: Active Messages
– Request/reply via small active messages (RPC-style); a code sketch follows
– Bulk transfer (store & get)
– Highly optimized communication layer on a range of hardware
[Diagram: a request message invokes a handler at the receiver, which sends a reply]
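
As an illustration of the request/reply style, here is a minimal sketch of the active-message idea in C: each message names a handler that the receiver runs directly on arrival. All names and types are hypothetical; this is not the Berkeley Active Messages API.

```c
/* A minimal sketch of the active-message idea: each message carries
 * the index of a handler that the receiver runs directly on arrival,
 * with no intermediate buffering. Illustrative only -- these names
 * and types are hypothetical, not the Berkeley Active Messages API. */
#include <stdint.h>
#include <stdio.h>

#define MAX_HANDLERS 16

typedef struct {
    uint32_t handler;      /* index of the handler to run at the receiver */
    uint32_t arg0, arg1;   /* small arguments carried in the message      */
} am_msg_t;

typedef void (*am_handler_t)(uint32_t arg0, uint32_t arg1);

static am_handler_t handler_table[MAX_HANDLERS];

/* Receiver side: poll for a message and dispatch straight into the
 * named handler; handlers are short and non-blocking by convention. */
static void am_poll(const am_msg_t *m)
{
    if (m->handler < MAX_HANDLERS && handler_table[m->handler])
        handler_table[m->handler](m->arg0, m->arg1);
}

/* Example request handler: does its small piece of work, then would
 * send a reply message naming the requester's reply handler. */
static void fetch_word_handler(uint32_t offset, uint32_t unused)
{
    (void)unused;
    printf("request: fetch word at offset %u\n", (unsigned)offset);
    /* ...reply to the sender's reply handler (not shown) */
}

int main(void)
{
    handler_table[0] = fetch_word_handler;
    am_msg_t request = { 0, 4096, 0 };   /* simulate an arriving message */
    am_poll(&request);
    return 0;
}
```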

Slide 13: NOW System Architecture
[Diagram: UNIX workstations, each with network interface hardware and communication software, joined by a fast commercial switch (Myrinet)]
– Global Layer UNIX spans the nodes: resource management, network RAM, distributed files, process migration
– Runs large sequential apps and parallel apps
– Programming interfaces: Sockets, Split-C, MPI, HPF, vSM

Slide 14: Outline
– Introduction to the NOW project
– Quick tour of the NOW lab
– Important new system design concepts
– Conclusions
– Future directions

Slide 15: First HP/FDDI Prototype
– FDDI on the HP/735 graphics bus
– First fast message layer on a non-reliable network

Slide 16: SparcStation ATM NOW
– ATM was going to take over the world.
– The original Inktomi; today: www.hotbot.com

Slide 17: 100-node Ultra/Myrinet NOW

Slide 18: Massive Cheap Storage
– Basic unit: two PCs double-ending four SCSI chains
– Currently serving fine art at http://www.thinker.org/imagebase/

Slide 19: Cluster of SMPs (CLUMPS)
Four Sun E5000s
– 8 processors
– 3 Myricom NICs
Multiprocessor, multi-NIC, multi-protocol

Slide 20: Information Servers
Basic storage unit:
– Ultra 2, 300 GB RAID, 800 GB tape stacker, ATM
– scalable backup/restore
Dedicated info servers:
– web, security, mail, …
VLANs project into departments

Slide 21: What’s Different about Clusters?
– Commodity parts?
– Communications packaging?
– Incremental scalability?
– Independent failure?
– Intelligent network interfaces?
– A complete system on every node: virtual memory, scheduler, files, …

Slide 22: Three Important System Design Aspects
– Virtual networks
– Implicit coscheduling
– Scalable file transfer

Slide 23: Communication Performance => Direct Network Access
– LogP: latency, overhead, and bandwidth (cost model sketched below)
– Active Messages: a lean layer supporting programming models
[Chart: latency and 1/BW microbenchmark results]
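
To make the LogP accounting concrete, a minimal worked form using the model's standard symbols (L = latency, o = per-message processor overhead); these are model definitions, not measured NOW numbers:

```latex
% One-way cost of a short message under LogP: send overhead, wire
% latency, receive overhead. An Active Messages request/reply pair
% pays this twice. L and o are the model's symbols; no measured NOW
% values are assumed here.
T_{\text{one-way}} = o + L + o = L + 2o,
\qquad
T_{\text{request/reply}} \approx 2(L + 2o) = 2L + 4o
```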

Slide 24: Example: NAS Parallel Benchmarks
– Better node performance than the Cray T3D
– Better scalability than the IBM SP-2

Slide 25: General-Purpose Requirements
– Many timeshared processes, each with direct, protected access
– User and system use
– Client/server; parallel clients, parallel servers: they grow, shrink, and handle node failures
– Multiple packages in a process, each possibly with its own internal communication layer
– Use communication as easily as memory

Slide 26: Virtual Networks
– An endpoint abstracts the notion of being “attached to the network” (sketched below).
– A virtual network is a collection of endpoints that can name each other.
– Many processes on a node can each have many endpoints, each with its own protection domain.
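
A minimal sketch of what an endpoint might look like, assuming a user-mapped receive queue and a protection key checked on delivery; the types and fields are illustrative assumptions, not the NOW virtual-network driver's actual interface.

```c
/* A sketch of the endpoint abstraction behind virtual networks: an
 * endpoint is a process's named, protected attachment point with its
 * own queues. Fields and checks are illustrative assumptions, not
 * the NOW virtual-network driver's actual interface. */
#include <stdint.h>

#define QDEPTH 64

typedef struct { uint32_t src_ep, handler, arg; } msg_t;

typedef struct {
    uint32_t vnet_id;          /* virtual network this endpoint belongs to */
    uint32_t ep_name;          /* name peers use to address this endpoint  */
    uint32_t prot_key;         /* protection tag checked on every delivery */
    msg_t    rxq[QDEPTH];      /* receive queue, mapped into user space so */
    uint32_t rx_head, rx_tail; /*   the process can poll without a syscall */
} endpoint_t;

/* The delivery check the NIC or driver would make: only traffic
 * carrying this endpoint's key -- i.e., from the same virtual
 * network and protection domain -- is ever enqueued. */
static int ep_deliver(endpoint_t *ep, const msg_t *m, uint32_t sender_key)
{
    if (sender_key != ep->prot_key)
        return -1;                        /* wrong protection domain   */
    uint32_t next = (ep->rx_tail + 1) % QDEPTH;
    if (next == ep->rx_head)
        return -1;                        /* queue full: back-pressure */
    ep->rxq[ep->rx_tail] = *m;
    ep->rx_tail = next;
    return 0;
}

int main(void)
{
    endpoint_t ep = { .vnet_id = 1, .ep_name = 7, .prot_key = 0xBEEF };
    msg_t m = { .src_ep = 3, .handler = 0, .arg = 42 };
    return ep_deliver(&ep, &m, 0xBEEF) == 0 ? 0 : 1;
}
```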

Slide 27: How Are They Managed?
How do you get direct hardware access for performance with a large space of logical resources?
Just like virtual memory:
– the active portion of the large logical space is bound to physical resources
[Diagram: endpoints of many processes held in host memory, with the active ones bound into network interface memory]

Slide 28: Solaris System Abstractions
– Segment driver: manages portions of an address space
– Device driver: manages an I/O device
=> Virtual network driver

Slide 29: Virtualization Is Not Expensive

Slide 30: Bursty Communication Among Many Virtual Networks
[Diagram: clients send message bursts to a server, which interleaves bursts with work]

Slide 31: Sustain High Bandwidth with Many Virtual Networks

Slide 32: Perspective on Virtual Networks
Networking abstractions are vertical stacks:
– new function => new layer
– poke through for performance
Virtual networks provide a horizontal abstraction:
– a basis for building new, fast services

Slide 33: Beyond the Personal Supercomputer
– Able to timeshare parallel programs, with fast, protected communication
– Mix with sequential and interactive jobs
– Use fast communication in OS subsystems: parallel file system, network virtual memory, …
– Nodes have a powerful local OS scheduler
– Problem: local schedulers do not know to run parallel jobs in parallel

Slide 34: Local Scheduling
– Local schedulers act independently: no global control
– A program waits while trying to communicate with peers that are not running
– 10–100x slowdowns for fine-grain programs!
=> need coordinated scheduling

Slide 35: Traditional Solution: Gang Scheduling
– Global context switch according to a precomputed schedule
– Inflexible, inefficient, fault-prone

Slide 36: Novel Solution: Implicit Coscheduling
Coordinate schedulers using only the communication in the program
– very easy to build
– potentially very robust to component failures
– inherently “service on-demand”
– scalable
– the local service component can evolve
[Diagram: local scheduling (LS) timelines converging on a gang-scheduled (GS) pattern without global control]

Slide 37: Why It Works
– Infer non-local state from local observations
– React to maintain coordination (spin/block sketch below)
observation => implication => action:
– fast response => partner scheduled => spin
– delayed response => partner not scheduled => block
[Diagram: four workstations timesharing jobs A and B; a requester spins through a fast response, sleeps through a delayed one]
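
A minimal sketch of the two-phase waiting policy the table describes: spin while a fast reply is still plausible, then give up the processor. The 50 µs threshold is an assumed value, and yielding stands in for truly blocking on an event, which a real implementation would do in the kernel.

```c
/* A sketch of two-phase waiting for implicit coscheduling: spin
 * while a fast reply is still plausible (partner scheduled), then
 * yield the processor (partner probably not scheduled). The 50 us
 * threshold is an assumed value; sched_yield() stands in for truly
 * blocking on an event, which a real system would do in the kernel. */
#include <sched.h>
#include <stdatomic.h>
#include <time.h>

#define SPIN_NS 50000L   /* assumed: a few network round-trip times */

static long now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000000000L + ts.tv_nsec;
}

/* Wait for the communication layer to set *reply_flag. */
static void wait_for_reply(atomic_int *reply_flag)
{
    long start = now_ns();
    while (!atomic_load(reply_flag)) {
        if (now_ns() - start < SPIN_NS)
            continue;      /* fast response still expected: keep spinning */
        sched_yield();     /* delayed: partner likely descheduled, yield  */
    }
}

int main(void)
{
    atomic_int done = 1;   /* pretend the reply has already arrived */
    wait_for_reply(&done);
    return 0;
}
```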

Slide 38: Example: Synthetic Programs
– Range of granularity and load imbalance
– Spin-wait: 10x slowdown

Slide 39: Implicit Coordination
Surprisingly effective:
– real programs
– a range of workloads
– simple and robust
Opens many new research questions:
– fairness
How broadly can implicit coordination be applied in the design of cluster subsystems?

Slide 40: A Look at Serious File I/O
[Diagram: traditional I/O system vs. NOW I/O system, built from processor-memory nodes with disks]
Benchmark problem: sort a large number of 100-byte records with 10-byte keys
– start on disk, end on disk
– accessible as files (use the file system)
– Datamation sort: 1 million records
– Minute sort: quantity sorted in a minute

Slide 41: World-Record Disk-to-Disk Sort
– Sustains 500 MB/s disk bandwidth and 1,000 MB/s network bandwidth

Slide 42: Key Implementation Techniques
Performance isolation: a highly tuned local disk-to-disk sort
– manage local memory
– manage disk striping
– memory-mapped I/O with madvise, buffering (see the sketch after this list)
– manage overlap with threads
Efficient communication
– completely hidden under disk I/O
– competes for I/O bus bandwidth
Self-tuning software
– probe available memory, disk bandwidth, trade-offs
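
A short sketch of the memory-mapped I/O technique named above, using the standard POSIX mmap/madvise calls; the file name is illustrative, and the 100-byte record size follows the benchmark slide.

```c
/* A sketch of the memory-mapped I/O technique: map an input run and
 * hint the VM system that access is sequential, so read-ahead
 * overlaps disk I/O with computation. Standard POSIX calls; the
 * file name "run.dat" is illustrative. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("run.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    char *buf = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* The madvise hint from the slide: declare a streaming pass. */
    madvise(buf, st.st_size, MADV_SEQUENTIAL);

    long sum = 0;
    for (off_t i = 0; i + 100 <= st.st_size; i += 100)
        sum += (unsigned char)buf[i];    /* touch each 100-byte record */

    printf("key-byte sum: %ld\n", sum);
    munmap(buf, st.st_size);
    close(fd);
    return 0;
}
```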

Slide 43: Towards a Cluster File System
– Remote disk system built on a virtual network
[Diagram: a client linked with RDlib talks to an RD server over active messages]

Slide 44: Conclusions
– Fast, simple cluster area networks are a technological breakthrough.
– A complete system on every node makes clusters a very powerful architecture.
– Extend the system globally: virtual memory systems, schedulers, file systems, …
– Efficient communication enables new solutions to classic systems challenges.

Slide 45: Millennium Computational Community
[Diagram: campus units linked by Gigabit Ethernet: SIMS, C.S., E.E., M.E., BMRC, N.E., IEOR, C.E., MSME, NERSC, Transport, Business, Chemistry, Astro, Physics, Biology, Economy, Math]

Slide 46: Millennium PC Clumps
– Inexpensive, easy-to-manage cluster
– Replicated in many departments
– Prototype for a very large PC cluster

Slide 47: Proactive Infrastructure
– Scalable servers
– Stationary desktops
– Information appliances

