1 Year End Report on the NSF OptIPuter ITR Project NSF ANIR Division Arlington, VA December 12, 2002 Dr. Larry Smarr Director, California Institute for Telecommunications and Information Technologies Harry E. Gruber Professor, Dept. of Computer Science and Engineering Jacobs School of Engineering, UCSD

2 The Move to Data-Intensive Science & Engineering: e-Science. Community Resources: ATLAS, Sloan Digital Sky Survey, LHC, ALMA

3 A LambdaGrid Will Be the Backbone for an e-Science Network. Diagram: a control plane coordinating clusters, dynamically allocated lightpaths, and switch fabrics, with physical monitoring, middleware, and applications layered above. Source: Joe Mambretti, NU

4 Just Like in Computing -- Different FLOPS for Different Folks. Chart of number of users vs. bandwidth consumed, with three regimes: A (DSL-class users) -> Need Full Internet Routing; B (GigE LAN-class users) -> Need VPN Services On/And Full Internet Routing; C (a few users) -> Need Very Fat Pipes, Limited to Multiple Virtual Organizations. Source: Cees de Laat

5 Response: The OptIPuter Project

6 OptIPuter NSF Proposal Partnered with National Experts and Infrastructure. Map of partners and links: Vancouver, Seattle, Portland, San Francisco, Los Angeles, San Diego (SDSC), Chicago, Atlanta, NYC, NCSA, PSC; SURFnet, CERN, CA*net4, Asia Pacific, AMPATH; TeraGrid, DTFnet, CENIC Pacific Light Rail; campuses UIC, NU, USC, UCSD, SDSU, UCI. Source: Tom DeFanti and Maxine Brown, UIC

7 The OptIPuter Is an Experimental Network Project Construction of a DWDM-Based WAN Testbed –Capability To Configure Lambdas In Real-time –Scalable Linux Clusters –Large Data Resources Distributed Visualization and Collaboration Applications for a Bandwidth-Rich Environment Leading Edge Applications Drivers, Each Requiring On-Line Visualization Of Terabytes Of Data –Neurosciences Data Analysis –Earth Sciences Data Analysis Software Research –LambdaGrid Software Stack –Rethink TCP/IP Protocols –Enhance Security Mechanisms Source: Andrew Chien, UCSD

8 The OptIPuter Frontier Advisory Board Optical Component Research –Shaya Fainman, UCSD –Sadik Esener, UCSD –Alan Willner, USC –Frank Shi, UCI –Joe Ford, UCSD Optical Networking Systems –George Papen, UCSD –Joe Mambretti, Northwestern University –Steve Wallach, Chiaro Networks, Ltd. –George Clapp, Telcordia/SAIC –Tom West, CENIC Data and Storage –Yannis Papakonstantinou, UCSD –Paul Siegel, UCSD Clusters, Grid, and Computing –Alan Benner, IBM eServer Group, Systems Architecture and Performance department –Fran Berman, SDSC director –Ian Foster, Argonne National Laboratory Generalists –Franz Birkner, FXB Ventures and San Diego Telecom Council –Forest Baskett, Venture Partner with New Enterprise Associates –Mohan Trivedi, UCSD First Meeting February 6-7, 2003

9 The First OptIPuter Workshop on Optical Switch Products. Hosted by Calit2 @ UCSD –October 25, 2002 –Organized by Maxine Brown (UIC) and Greg Hidley (UCSD) –Full Day of Open Presentations by Vendors and the OptIPuter Team. Examined a Variety of Technology Offerings: –OEOEO: TeraBurst Networks –OEO: Chiaro Networks –OOO: Glimmerglass, Calient, IMMI

10 OptIPuter Inspiration -- Node of a 2009 PetaFLOPS Supercomputer. Diagram: multiple VLIW/RISC cores (40 GFLOPS at 10 GHz each), each with an 8 MB second-level cache and a 240 GB/s (24-byte-wide) memory path, connected through a 640 GB/s crossbar to highly interleaved DRAM (the diagram labels 4 GB and 16 GB banks), with coherence maintained across cores and a 5 Terabit/s multi-lambda optical network interface. Updated From Steve Wallach, Supercomputing 2000 Keynote

11 Global Architecture of a 2009 COTS PetaFLOPS System. Diagram: 128 dies per box, 4 CPUs per die; multi-die multi-processors numbered 1-64 connect through an all-optical switch to I/O and the LAN/WAN; 10 meters = 50 nanoseconds of delay. Source: Steve Wallach, Supercomputing 2000 Keynote

12 Supercomputer Design 2010: Semiconductor & System Trends. Billions of Transistors –Multiple Processors on a Die –On-Board Cache and DRAM Memory (PIM) –Latency to Memory Scales With Clock (Same Die). System Characteristics –Speed of Light Becomes the Limiting Factor in the Latency of Large Systems –c Does Not Scale With Lithography –Systems Become Grid-Enabled. Source: Steve Wallach, Chiaro Networks

13 WAN & LAN Bandwidth Are Converging Source: Steve Wallach, Chiaro Networks

14 Convergence of Networking Fabrics Today's Computer Room –Router For External Communications (WAN) –Ethernet Switch For Internal Networking (LAN) –Fibre Channel For Internal Networked Storage (SAN) Tomorrow's Grid Room –A Unified Architecture Of LAN/WAN/SAN Switching –More Cost Effective –One Network Element vs. Many –One Sphere of Scalability –ALL Resources are GRID Enabled –Layer 3 Switching and Addressing Throughout Source: Steve Wallach, Chiaro Networks

15 The OptIPuter Philosophy. "A global economy designed to waste transistors, power, and silicon area -- and conserve bandwidth above all -- is breaking apart and reorganizing itself to waste bandwidth and conserve power, silicon area, and transistors." George Gilder, Telecosm (2000). Bandwidth is getting cheaper faster than storage. Storage is getting cheaper faster than computing. Exponentials are crossing.

16 From SuperComputers to SuperNetworks -- Changing the Grid Design Point. The TeraGrid is Optimized for Computing –1024-Node IA-64 Linux Cluster –Assume 1 GigE per Node = ~1 Terabit/s I/O –Grid Optical Connection 4x10Gig Lambdas = 40 Gigabit/s –Optical Connections Are Only ~4% of Bisection Bandwidth. The OptIPuter is Optimized for Bandwidth –32-Node IA-64 Linux Cluster –Assume 1 GigE per Node = 32 Gigabit/s I/O –Grid Optical Connection 4x10GigE = 40 Gigabit/s –Optical Connections Are Over 100% of Bisection Bandwidth
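The percentages on this slide follow from simple arithmetic: divide the optical WAN capacity by the cluster's aggregate NIC bandwidth. A minimal sketch of that calculation (the helper name and percentage framing are illustrative, not part of any project software):

```python
# Back-of-envelope bisection-bandwidth comparison from the slide.
# Node counts and link speeds are the slide's figures; the ratio
# (optical WAN capacity / aggregate cluster I/O) is the comparison shown.

def bisection_fraction(nodes: int, gbe_per_node: float, wan_gbps: float) -> float:
    """Ratio of optical WAN connection to the cluster's aggregate NIC bandwidth."""
    cluster_io_gbps = nodes * gbe_per_node
    return wan_gbps / cluster_io_gbps

# TeraGrid: 1024 nodes x 1 GigE ~= 1 Tb/s I/O; 4 x 10 Gb/s lambdas = 40 Gb/s WAN
teragrid = bisection_fraction(1024, 1.0, 40.0)   # ~0.04, i.e. ~4%

# OptIPuter: 32 nodes x 1 GigE = 32 Gb/s I/O; same 40 Gb/s WAN
optiputer = bisection_fraction(32, 1.0, 40.0)    # 1.25, i.e. over 100%

print(f"TeraGrid: {teragrid:.1%}, OptIPuter: {optiputer:.0%}")
```

The point of the comparison is that the OptIPuter's WAN links exceed the cluster's own I/O capacity, so the network, not the cluster, is the abundant resource.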

17 Data-Intensive Scientific Applications Require Experimental Optical Networks. Large Data Challenges in Neuro and Earth Sciences –Each Data Object Is 3D and Gigabytes in Size –Data Are Generated and Stored in Distributed Archives –Research Is Carried Out on a Federated Repository. Requirements –Computing: PC Clusters –Communications: Dedicated Lambdas Over Fiber –Data: Large Peer-to-Peer Lambda-Attached Storage –Visualization: Collaborative Volume Algorithms. Response –The OptIPuter Research Project

18 The Biomedical Informatics Research Network: a Multi-Scale Brain Imaging Federated Repository. BIRN Test-beds: Multiscale Mouse Models of Disease, Human Brain Morphometrics, and FIRST BIRN (10-site project for fMRIs of Schizophrenics). NIH Plans to Expand to Other Organs and Many Laboratories

19 Microscopy Imaging of Neural Tissue. Credits: Marketta Bobik; Francisco Capani & Eric Bushong. Confocal image of a sagittal section through rat cortex triple-labeled for glial fibrillary acidic protein (blue), neurofilaments (green), and actin (red). Projection of a series of optical sections through a Purkinje neuron revealing both the overall morphology (red) and the dendritic spines (green)

20 Interactive Visual Analysis of Large Datasets -- East Pacific Rise Seafloor Topography Scripps Institution of Oceanography Visualization Center

21 Tidal Wave Threat Analysis Using Lake Tahoe Bathymetry Scripps Institution of Oceanography Visualization Center Graham Kent, SIO

22 SIO Uses the Visualization Center to Teach a Wide Variety of Graduate Classes Geodesy Gravity and Geomagnetism Planetary Physics Radar and Sonar Interferometry Seismology Tectonics Time Series Analysis Multiple Interactive Views of Seismic Epicenter and Topography Databases Deborah Kilb & Frank Vernon, SIO

23 OptIPuter LambdaGrid Enabled by Chiaro Networking Router. Diagram: a Chiaro Enstara router/switch interconnecting Medical Imaging and Microscopy; Chemistry, Engineering, Arts; the San Diego Supercomputer Center; and the Scripps Institution of Oceanography, supporting Cluster-Disk, Disk-Disk, Viz-Disk, DB-Cluster, and Cluster-Cluster traffic. Image Source: Phil Papadopoulos, SDSC

24 The OptIPuter Experimental UCSD Campus Optical Network (roughly ½-mile scale). Sites: SIO (Earth Sciences), SDSC and the SDSC Annex, CRCA (Arts), Phys. Sci - Keck (Chemistry), SOM (Medicine), JSOE (Engineering), the Preuss School (High School), and 6th College (Undergrad College), with a collocation point to CENIC. Phase I, Fall 02; Phase II, 2003. Chiaro Router Installed Nov 18, 2002; Production Router Planned. Fiber Costs Roughly $0.20/Strand-Foot; UCSD New Cost Sharing of Roughly $250K in Dedicated Fiber. Source: Phil Papadopoulos, SDSC; Greg Hidley, Cal-(IT)2

25 Planned Chicago Metro Lambda Switching OptIPuter Laboratory. Diagram legend: International GE/10GE, National GE/10GE, Metro GE/10GE; 16x1 GE and 16x10 GE links. 16-Processor McKinley Cluster at the University of Illinois at Chicago; 16-Processor Montecito/Chivano Cluster at Northwestern; StarLight With 10x1 GE + 1x10GE. Nationals: Illinois, California, Wisconsin, Indiana, Abilene, FedNets, Washington, Pennsylvania… Internationals: Canada, Holland, CERN, GTRN, AmPATH, Asia… Source: Tom DeFanti, UIC

26 OptIPuter Software Research. Near-Term: Build Software to Support Advancement of Applications With Traditional Models –High-Speed IP Protocol Variations (RBUDP, SABUL, …) –Switch Control Software for DWDM Management and Dynamic Setup –Distributed Configuration Management for OptIPuter Systems. Long-Term Goals, to Develop: –A System Model Which Supports Grid, Single-System, and Multi-System Views –Architectures Which Can Harness High-Speed DWDM and Present It to Applications and Protocols –New Communication Abstractions Which Make Lambda-Based Communication Easily Usable –New Communication & Data Services Which Exploit the Underlying Communication Abstractions –Underlying Data Movement & Management Protocols Supporting These Services –Killer App Drivers and Demonstrations Which Leverage This Capability Into the Wireless Internet. Source: Andrew Chien, UCSD

27 OptIPuter System Opportunities: What's the Right View of the System? Grid View –Federation of Systems: Autonomously Managed, Separate Security, No Implied Trust Relationships, No Transitive Trust –High Overhead, Both Administrative and Performance –Web Services and Grid Services View. Single-System View –More Static Federation of Systems –A Single Trusted Administrative Control, Implied Trust Relationships, Transitive Trust Relationships –But This Is Not Quite a Closed-System Box –High Performance –Securing a Basic System and Its Capabilities –Communication, Data, and Operating System Coordination Issues. Multi-System View –Can We Create Single-System Views Out of Grid System Views? –Delivering the Performance; Boundaries on Trust. Source: Andrew Chien, UCSD

28 OptIPuter Communication Challenges. Terminating a Terabit Link in an Application System (Not a Router). Parallel Termination With Commodity Components –N 10GigE Links -> N Clustered Machines (Low Cost) –Community-Based Communication. What Are: –Efficient Protocols to Move Data in Local, Metropolitan, and Wide Areas? (High Bandwidth, Low Startup; Dedicated Channels, Shared Endpoints) –Good Parallel Abstractions for Communication? (Coordinate Management and Use of Endpoints and Channels; Convenient for Applications and Storage Systems) –Secure Models for a Single-System View? (Enabled by Lambda Private Channels; Exploit Flexible Dispersion of Data and Computation). Source: Andrew Chien, UCSD
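The "N 10GigE links -> N clustered machines" idea above amounts to striping one logical stream across parallel endpoints so no single host has to terminate the full link rate. A minimal sketch, assuming round-robin MTU-sized chunks (the function name and chunk size are illustrative, not the OptIPuter implementation):

```python
# Hedged sketch of parallel termination: stripe a byte stream across N
# cluster endpoints so each machine handles roughly 1/N of the data.

def stripe(data: bytes, n_endpoints: int) -> list[bytes]:
    """Round-robin fixed-size chunks across n_endpoints receive buffers."""
    chunk = 1500  # one Ethernet-MTU-sized chunk per datagram, for illustration
    stripes = [bytearray() for _ in range(n_endpoints)]
    for i in range(0, len(data), chunk):
        # Chunk number modulo endpoint count picks the receiving machine.
        stripes[(i // chunk) % n_endpoints] += data[i:i + chunk]
    return [bytes(s) for s in stripes]

parts = stripe(b"x" * 10_000, 4)
print([len(p) for p in parts])  # each endpoint gets roughly 1/4 of the bytes
```

A real system would also need per-stripe sequence numbers so the receiving cluster can reassemble the stream in order; that bookkeeping is omitted here.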

29 OptIPuter Storage Challenges DWDM Enables Uniform Performance View Of Storage –How To Exploit Capability? –Other Challenges Remain: Security, Coherence, Parallelism –Storage Is a Network Device Grid View: High-Level Storage Federation –GridFTP (Distributed File Sharing) –NAS – File System Protocols –Access-control and Security in Protocol –Performance? Single-System View: Low-Level Storage Federation –Secure Single System View –SAN – Block Level Disk and Controller Protocols –High Performance –Security? Access Control? Secure Distributed Storage: Threshold Cryptography Based Distribution –PASIS Style – Distributed Shared Secrets –Lambdas Minimize Performance Penalty Source: Andrew Chien, UCSD

30 Two Visits Between UIC's Jason Leigh and UCSD's NCMIR. NCMIR Provided EVL With Large Mosaics and Large-Format Tomography Data –EVL Will Prepare the Data for Visualization on Tiled Display Systems. EVL Has Provided NCMIR With: –ImmersaView (Passive Stereo Wall Software) to Use in the SOM Conference Room Passive Stereo Projection System –The System Has Been Installed and Is Working –SOM Is Investigating Use of the Quanta Memory-to-Memory (UDP-Based) Block Data Transfer Protocol for a Number of Applications. EVL and NCMIR Are: –Looking Into Adopting Concepts/Code From Utah's Transfer Function GUIs for Displaying Voxel Visualization on Display Walls –Planning to Collaborate on the Physical Design of the SOM IBM 9-Megapixel Active 3D Display. Similar Results for SIO

31 OptIPuter Is Exploring Quanta as High-Performance Middleware. Quanta is a high-performance networking toolkit / API. Reliable Blast UDP: –Assumes You Are Running Over an Over-Provisioned or Dedicated Network –Excellent for Photonic Networks; Don't Try This on the Commodity Internet –It Is Fast and Very Predictable –We Give You an Equation to Predict Performance, Which Is Useful for the Application –It Is Best Suited to Transferring Very Large Payloads –At Higher Data Rates the Processor Is 100% Loaded, So Dual Processors Are Needed for Your Application to Move Data and Do Useful Work at the Same Time. Source: Jason Leigh, UIC

32 Reliable Blast UDP (RBUDP). At iGrid 2002, all applications that were able to make the most effective use of the 10G link from Chicago to Amsterdam used UDP. RBUDP [1], SABUL [2], and Tsunami [3] are similar protocols that use UDP for bulk data transfer; all are based on NETBLT (RFC 969). RBUDP has fewer memory copies and a prediction function to let applications know what kind of performance to expect. –[1] J. Leigh, O. Yu, D. Schonfeld, R. Ansari, et al., Adaptive Networking for Tele-Immersion, Proc. Immersive Projection Technology/Eurographics Virtual Environments Workshop (IPT/EGVE), May 16-18, Stuttgart, Germany, 2001. –[2] Sivakumar Harinath, Data Management Support for Distributed Data Mining of Large Datasets over High Speed Wide Area Networks, PhD thesis, University of Illinois at Chicago, 2002. –[3] Source: Jason Leigh, UIC
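The core RBUDP cycle described above is: blast all datagrams over UDP, have the receiver report which sequence numbers are missing over a reliable back-channel (TCP in the real protocol), then re-blast only the missing ones. This can be sketched as a toy simulation; the independent-loss channel model and the function name are assumptions for illustration, not the Quanta implementation:

```python
import random

# Toy simulation of an RBUDP-style blast/tally/retransmit cycle.
# Returns the number of blast rounds needed until every datagram arrives.

def rbudp_transfer(num_packets: int, loss_rate: float, seed: int = 0) -> int:
    rng = random.Random(seed)           # deterministic "channel" for the demo
    received = [False] * num_packets
    missing = list(range(num_packets))
    rounds = 0
    while missing:
        rounds += 1
        # Blast phase: send every still-missing datagram; some are dropped.
        for seq in missing:
            if rng.random() >= loss_rate:
                received[seq] = True
        # Tally phase: receiver reports unreceived sequence numbers
        # (a bitmap over TCP in the real protocol).
        missing = [seq for seq, ok in enumerate(received) if not ok]
    return rounds

print(rbudp_transfer(10_000, loss_rate=0.05))
```

On an over-provisioned lambda the loss rate is near zero, so the transfer usually completes in one blast plus a short retransmit round, which is what makes the protocol's performance so predictable.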

33 The OptIPanel, Version I: Visualization at Near-Photographic Resolution. A 5x3 Grid of 1280x1024-Pixel LCD Panels Driven by a 16-PC Cluster. Resolution = 6400x3072 Pixels, or ~3000x1500 Pixels in Autostereo. Source: Tom DeFanti, EVL--UIC
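The pixel counts on this slide follow directly from the panel grid; a quick check (the megapixel total is derived here, not stated on the slide, and the autostereo figure is consistent with halving each dimension):

```python
# Tiled-display arithmetic for the OptIPanel: a 5 x 3 grid of 1280 x 1024 panels.
cols, rows = 5, 3
panel_w, panel_h = 1280, 1024
wall_w, wall_h = cols * panel_w, rows * panel_h

print(wall_w, wall_h)            # 6400 3072
print(wall_w * wall_h / 1e6)     # total megapixels (~19.7)
print(wall_w // 2, wall_h // 2)  # ~halved in autostereo, near the slide's ~3000x1500
```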

34 NTT Super High Definition Video (4Kx2K = 8 Megapixels) Over Internet2, From StarLight in Chicago to USC in Los Angeles. SHD = 4x HDTV = 16x DVD. Applications: Astronomy, Mathematics, Entertainment

35 The Continuum at EVL and TRECC: an OptIPuter Amplified Work Environment. Components: Passive Stereo Display, AccessGrid, Digital White Board, Tiled Display. Source: Tom DeFanti, Electronic Visualization Lab, UIC

36 GeoWall at the American Geophysical Union Dec 2003 Source: John Orcutt, SIO, President AGU

37 OptIPuter Transforms Individual Laboratory Visualization, Computation, & Analysis Facilities: The Preuss School UCSD OptIPuter Facility. GeoWall: Fast Polygon and Volume Rendering With Stereographics. 3D Applications: –Earth Science: GeoFusion GeoMatrix Toolkit –Underground Earth Science: Rob Mellors and Eric Frost, SDSU; SDSC Volume Explorer –Neuroscience: Dave Nadeau, SDSC, BIRN; SDSC Volume Explorer –Anatomy: Visible Human Project, NLM, Brooks AFB; SDSC Volume Explorer

38 I-Light: Indiana's Fiber-Optic Initiative, the First University-Owned and -Operated State Fiber Infrastructure. Connecting: –Indiana University Bloomington –Indiana University Purdue University Indianapolis (IUPUI) –Purdue University's West Lafayette Campus –Internet2 Network Operations Center. Funded With a $5.3 Million State Appropriation in 1999; the I-Light Network Launched on December 11, 2001. Currently Runs 1 and 10 GigE Lambdas

39 I-Light Campus Commodity Internet Usage Is Approaching 1 Gbps. Chart of inbound and outbound traffic for Purdue, IU, and IHETS/ITN.

40 A Representation of the Growth in Theoretical Capacity of the Connection Between IUB and IUPUI, Assuming All Fibers Are Lit Using Advanced DWDM Running Multiple 10 Gbps Lambdas on Each Fiber, Compared to Total Bandwidth as of January 2002. Owning Fiber Allows for Large Multi-Year Bandwidth Capacity Growth. Source: Indiana University
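The capacity-growth argument above multiplies three factors: fibers lit, DWDM lambdas per fiber, and rate per lambda. A hedged sketch of that arithmetic (the fiber and lambda counts below are illustrative assumptions, not Indiana's actual figures):

```python
# Theoretical owned-fiber capacity = fibers lit x lambdas per fiber x rate per lambda.

def fiber_capacity_gbps(fibers_lit: int, lambdas_per_fiber: int,
                        gbps_per_lambda: float = 10.0) -> float:
    return fibers_lit * lambdas_per_fiber * gbps_per_lambda

# e.g. 12 lit fibers, 32 DWDM lambdas each, 10 Gb/s per lambda:
print(fiber_capacity_gbps(12, 32))  # 3840.0 Gb/s of theoretical capacity
```

Because the owner can grow capacity by lighting more fibers or adding lambdas without new construction, the theoretical ceiling rises by multiplicative factors rather than the incremental steps of leased bandwidth.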

41 iGrid 2002, September 24-26, 2002, Amsterdam, The Netherlands. Fifteen Countries/Locations Proposing 28 Demonstrations: Canada, CERN, France, Germany, Greece, Italy, Japan, The Netherlands, Singapore, Spain, Sweden, Taiwan, United Kingdom, United States. Applications Demonstrated: Art, Bioinformatics, Chemistry, Cosmology, Cultural Heritage, Education, High-Definition Media Streaming, Manufacturing, Medicine, Neuroscience, Physics, Tele-Science. Grid Technologies: Grid Middleware, Data Management/Replication Grids, Visualization Grids, Computational Grids, Access Grids, Grid Portals. UIC Sponsors: HP, IBM, Cisco, Philips, Level(3), Glimmerglass, etc.

42 iGrid 2002 Was Sustaining 1-3 Gigabit/s; Total Available Bandwidth Between Chicago and Amsterdam Was 30 Gigabit/s

43 Providing a 21st-Century Internet Grid Infrastructure. Diagram: a Tightly Coupled, Optically-Connected OptIPuter Core, Surrounded by Routers, Loosely Coupled Peer-to-Peer Computing & Storage, and Wireless Sensor Nets and Personal Communicators
