OptIPuter Southern California Network Infrastructure
Philip Papadopoulos, OptIPuter Co-PI, University of California, San Diego

1 SoCal Infrastructure OptIPuter Southern California Network Infrastructure
Philip Papadopoulos, OptIPuter Co-PI, University of California, San Diego
Program Director, Grids and Clusters, San Diego Supercomputer Center
September 2003

2 SoCal Infrastructure UCSD Heavy Lifters
– Greg Hidley, School of Engineering, Director of Cal-(IT)² Technology Infrastructure
– Mason Katz, SDSC, Cluster Development Group Leader
– David Hutches, School of Engineering
– Ted O'Connell, School of Engineering
– Max Okumoto, School of Engineering

3 SoCal Infrastructure Year 1 Mod-0, UCSD

4 SoCal Infrastructure Building an Experimental Apparatus
Mod-0 OptIPuter: Ethernet (Packet) Based
– Focused as an Immediately-usable, High-bandwidth Distributed Platform
– Multiple Sites on Campus (a Few Fiber Miles)
– Next-generation, Highly-scalable Optical Chiaro Router at Center of Network
Hardware Balancing Act
– Experiments Really Require Large Data Generators and Consumers
– Science Drivers Require Significant Bandwidth to Storage
– OptIPuter Predicated on Price/Performance Curves of > 1 GE Networks
System Issues
– How Does One Build and Manage a Reconfigurable Distributed Instrument?

5 SoCal Infrastructure Raw Hardware
Center of UCSD Network is a Chiaro Internet Router
– Unique Optical Cross-connect, Scales to 6.4 Tbit/s Today
– We Have the 640 Gbit/s "Starter" System
– Has "Unlimited" Bandwidth from Our Perspective
– Programmable Network Processors
– Supports Multiple Routing Instances (Virtual Cut-through)
  – "Wild West" OptIPuter-routed (Campus)
  – High-performance Research in Metro (CalREN-HPR) and Wide-area
  – Interface to Campus Production Network with Appropriate Protections
Endpoints are Commodity Clusters
– Clustered Commodity-based CPUs Running Linux, GigE on Every Node
– Differentiated as Storage vs. Compute vs. Visualization
– > $800K of Donated Equipment from Sun and IBM
  – 128-Node (256 Gbit/s) Intel-based Cluster from Sun (Delivered 2 Weeks Ago)
  – 48-Node (96 Gbit/s), 21 TB (~300 Spindles) Storage Cluster from IBM (In Process)
  – SIO Viz Cluster Purchased by Project
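
As a quick check on the quoted cluster figures, a back-of-the-envelope calculation; it assumes each node's single GigE interface is counted full-duplex (2 Gbit/s per node), which the slide does not state:

# Back-of-the-envelope check of the cluster bandwidth figures quoted above.
# Assumption (not stated on the slide): each node's single GigE interface is
# counted full-duplex, i.e. 2 Gbit/s of aggregate capacity per node.
GBIT_PER_NODE = 2

clusters = {"Sun Intel cluster": 128, "IBM storage cluster": 48}  # node counts

for name, nodes in clusters.items():
    print(f"{name}: {nodes} nodes x {GBIT_PER_NODE} Gbit/s = {nodes * GBIT_PER_NODE} Gbit/s")

# Chiaro "starter" configuration for comparison
print("Chiaro starter system: 640 Gbit/s (scales to 6.4 Tbit/s)")

This reproduces the 256 Gbit/s and 96 Gbit/s figures on the slide and shows both clusters together still sit well under the router's starter capacity.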

6 SoCal Infrastructure Raw Campus Fiber Plant: First Find the Conduit

7 SoCal Infrastructure Storewidth Investigations: General Model
[Diagram: a viz/compute clustered endpoint and a storage cluster, each with its own local cluster interconnect and aggregation switch, connected through the Chiaro router. Every storage node runs httpd and PVFS, exporting a large virtual disk over multiple network and drive pipes: parallel pipes, large bisection, unified name space.]
Symmetric "Storage Service" baseline (1 GB file):
– 1.6 Gbit/s (200 MB/s) with 6 clients & servers (HTTP)
– 1.1 Gbit/s (140 MB/s) with 7 clients & servers (davFS/DAV)
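
The baseline numbers above come from several client/server pairs each streaming a 1 GB file in parallel and summing the throughput. A minimal sketch of that measurement pattern follows; the hostnames and file path are hypothetical, and this is not the project's actual harness:

# Minimal sketch of the parallel-fetch measurement pattern described above:
# N clients each stream a large file over HTTP from a distinct server, and the
# aggregate throughput is summed. Hostnames and the file path are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

SERVERS = [f"http://storage-{i}.example.edu/1GB.dat" for i in range(6)]

def fetch(url, chunk=1 << 20):
    # Stream one file, discard the data, return bytes transferred.
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            data = resp.read(chunk)
            if not data:
                return total
            total += len(data)

start = time.time()
with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
    nbytes = sum(pool.map(fetch, SERVERS))
elapsed = time.time() - start

print(f"aggregate: {nbytes * 8 / elapsed / 1e9:.2f} Gbit/s"
      f" ({nbytes / elapsed / 1e6:.0f} MB/s)")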

8 SoCal Infrastructure Year 2 – Mod-0, UCSD

9 SoCal Infrastructure Southern Cal Metro Extension Year 2

10 SoCal Infrastructure Aggregates
Year 1 (Network Build)
– Chiaro Router Purchased, Installed, Working (Feb)
– 5 Sites on Campus, Each with 4 GigE Uplinks to Chiaro
– Private Fiber, UCSD-only
– ~40 Individual Nodes, Most Shared with Other Projects
– Endpoint Resource Poor, Network Rich
Year 2 (Endpoint Enhancements)
– Chiaro Router: Additional Line Cards, IPv6, Starting 10 GigE Deployment
– 8 Sites on Campus
– 3 Metro Sites
– Multiple Virtual Routers for Connection to Campus, CENIC HPR, Others
– > 200 Nodes, Most Donated (Sun and IBM), Most Dedicated to OptIPuter
– InfiniBand Test Network on 16 Nodes + Direct IB Switch to GigE
– Enough Resource to Support Data-intensive Activity; Slightly Network Poor
Year 3+ (Balanced Expansion Driven by Research Requirements)
– Expand 10 GigE Deployments
– Bring Network, Endpoint, and DWDM (Mod-1) Forward Together
– Aggregate at Least a Terabit (Both Network and Endpoints) by Year 5
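
A rough tally of how the yearly builds above stack up against the Year 5 terabit goal, using only figures quoted on this slide (Year 2 per-site uplink counts are not given, so the endpoint side stands in for that year; the full-duplex GigE counting is an assumption):

# Rough tally per build year, using only figures quoted on this slide.
year1_network_gbps = 5 * 4      # 5 campus sites x 4 GigE uplinks each
year2_endpoint_gbps = 200 * 2   # >200 nodes, GigE counted full duplex (assumed)
year5_goal_gbps = 1000          # at least a terabit, network and endpoints

print(f"Year 1 campus uplinks:  {year1_network_gbps} Gbit/s")
print(f"Year 2 endpoints:      ~{year2_endpoint_gbps} Gbit/s")
print(f"Year 5 target:        >={year5_goal_gbps} Gbit/s")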

11 SoCal Infrastructure Managing a Few Hundred Endpoints
Rocks Toolkit Used on over 130 Registered Clusters, Including Several Top500 Clusters
– Descriptions Easily Express Different System Configurations
– Support IA32 and IA64; Opteron in Progress
OptIPuter is Extending the Base Software
– Integrate Experimental Protocols/Kernels/Middleware into the Stack
– Build Visualization and Storage Endpoints
– Adding Common Grid (NMI) Services through Collaboration with GEON/BIRN
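
Purely as an illustration of the description-driven idea behind the first bullet, a toy sketch of specializing one base description per appliance type; this is not Rocks' actual configuration format, and the appliance names and packages are hypothetical:

# Illustration only: a toy, description-driven configuration in the spirit of
# the slide. This is NOT Rocks' actual node/graph format; the appliance names
# and package lists below are hypothetical.
BASE = {"os": "linux", "packages": ["ssh", "ganglia"], "network": "GigE"}

APPLIANCES = {
    "compute": {"packages": ["mpi"]},
    "storage": {"packages": ["pvfs"], "disks": 6},
    "viz":     {"packages": ["opengl-drivers"], "displays": 2},
}

def describe(appliance):
    # Merge the shared base description with an appliance-specific overlay.
    cfg = dict(BASE)
    overlay = APPLIANCES[appliance]
    cfg["packages"] = BASE["packages"] + overlay.get("packages", [])
    cfg.update({k: v for k, v in overlay.items() if k != "packages"})
    return cfg

for name in APPLIANCES:
    print(name, describe(name))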

