OptIPuter. Larry Smarr, PI; Tom DeFanti, Jason Leigh, Mark Ellisman, Phil Papadopoulos, Co-PIs; Maxine Brown, Project Manager.


1 OptIPuter. Larry Smarr, PI; Tom DeFanti, Jason Leigh, Mark Ellisman, Phil Papadopoulos, Co-PIs; Maxine Brown, Project Manager

2 Knowing the User’s Bandwidth Requirements
[Chart: number of users vs. bandwidth consumed, spanning DSL to GigE LAN, with three user classes A, B and C]
– Class A: needs full Internet routing
– Class B: needs VPN services and/or full Internet routing
– Class C: needs very fat pipes; limited, multiple Virtual Organizations
Source: Cees de Laat, UvA

3 The OptIPuter is a Distributed “Infostructure” for Data-Intensive Scientific Research and Collaboration
“A global economy designed to waste transistors, power, and silicon area – and conserve bandwidth above all – is breaking apart and reorganizing itself to waste bandwidth and conserve power, silicon area, and transistors.” — George Gilder, Telecosm (2000)
The OptIPuter: A Philosophy
– Bandwidth is getting cheaper faster than storage.
– Storage is getting cheaper faster than computing.
– The exponentials are crossing: we are moving from a processor-centric world to one centered on optical bandwidth, where the networks will be faster than the computational resources they connect.
The OptIPuter: A Paradigm
– Supercomputers maximize computing and minimize bandwidth use. The OptIPuter maximizes bandwidth use.
– A 32-node IA-64 Linux cluster assumes 1 GigE per node (32 Gbps aggregate I/O). When such clusters are optically connected at 40 Gbps, for example, the network provides more than 100% of the cluster’s I/O throughput.
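The paradigm’s arithmetic can be checked directly. A minimal sketch using the slide’s own figures (32 nodes at 1 GigE each, a 40 Gbps optical interconnect; no values beyond the slide’s example are assumed):

```python
# Figures taken from the slide's 32-node example; illustrative, not measured.
nodes = 32
nic_gbps = 1.0                        # 1 GigE NIC per node
cluster_io_gbps = nodes * nic_gbps    # aggregate cluster I/O: 32 Gbps
link_gbps = 40.0                      # optical interconnect in the example

ratio = link_gbps / cluster_io_gbps
print(f"Network / cluster I/O: {ratio:.0%}")  # prints 125%
```

The point of the paradigm is exactly this inversion: the wide-area link (125% of aggregate host I/O) is no longer the bottleneck; the hosts are.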

4 A Useful and Usable Tool for Data-Intensive Application Drivers: BioScience and GeoScience
The OptIPuter project has two application drivers in which scientists generate multi-gigabyte 3D volumetric data objects, residing in distributed archives, that they want to correlate, analyze and visualize.
– NIH Biomedical Informatics Research Network: initially a multi-scale brain-imaging federated repository, to be expanded to other organs of the body.
– NSF EarthScope: the acquisition, processing and scientific interpretation of satellite-derived remote sensing, near-real-time environmental, and active-source data.
http://ncmir.ucsd.edu/gallery.html
siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml

5 A Radical New Architecture for Scientific Cyber “Infostructure”
The OptIPuter is a “virtual” parallel computer in which the individual “processors” are widely distributed clusters; the “memory” is in the form of large distributed data repositories; the “peripherals” are very large scientific instruments, visualization displays and/or sensor arrays; and the “motherboard” uses standard IP delivered over multiple dedicated lambdas.
Think of the OptIPuter as a giant graphics card connected to a giant disk system via a system bus that happens to be an extremely high-speed optical network.
One major design goal is to provide scientists with advanced interactive querying and visualization tools that enable them to explore massive amounts of previously uncorrelated data in near real time.
Computing ● Data ● Visualization ● Networking

6 A 21st Century Amplified Work Environment: The Continuum
Passive stereo display ● AccessGrid ● Digital white board ● Tiled display

7 Cluster Visualization

8 San Diego OptIPuter Design: LambdaGrid Enabled by Chiaro Router (Spring 2003)
[Diagram: a Chiaro Enstara router and switches interconnecting campus sites: Medical Imaging and Microscopy; Chemistry, Engineering, Arts; San Diego Supercomputer Center; Scripps Institution of Oceanography]

9

10 What is a Lambda?
– A lambda, in networking, is a fully dedicated wavelength of light in an optical network, typically carrying 1-10 Gbps today.
– Lambdas are circuit-based technology, but can carry packet-based information.
– We are currently working mostly with 1 Gb dedicated Layer 2 circuits that act like lambdas.
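A back-of-the-envelope sketch of what a dedicated wavelength buys at the rates the slide mentions (1 and 10 Gbps), assuming the full wavelength is available end to end with no protocol overhead:

```python
# Time to move one terabyte over a dedicated lambda, ignoring overhead.
TB_BITS = 1e12 * 8  # one terabyte, in bits

for gbps in (1, 10):
    seconds = TB_BITS / (gbps * 1e9)
    print(f"{gbps:>2} Gbps lambda: {seconds:.0f} s (~{seconds / 60:.0f} min)")
# 1 Gbps -> 8000 s; 10 Gbps -> 800 s
```

With the circuit fully dedicated, these figures are deterministic; on a shared, routed path the same transfer contends with all other traffic.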

11 Switching Lambdas Make the OptIPuter Possible
– Visualization is getting very good on PCs
– Networking is getting very fast on PCs (10GigE)
– Disks are getting cheaper and faster
– PCI-X bus and 64-bit architectures are available
– The cost of bandwidth is plummeting
– Optical switching is coming
– We don’t have to scale to reach everybody in their homes

12 Why Optical Switching?
– No need to look at every packet when transferring a terabyte of information:
  – ~1% the cost of routing; ~10% the cost of electronic switching
  – 64x64 at 10 Gb per port: $100,000 O-O-O switched; $1,000,000 O-E-O switched; $10,000,000 O-E-O routed
– Spend the savings on computing and collaboration systems instead!
– Replaces patch panels; allows rapid reconfiguration of 1 and 10 Gb experiments
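The slide’s price points can be normalized to cost per switched gigabit to make the gap concrete. A sketch using only the slide’s circa-2003 figures (a 64x64 fabric at 10 Gb per port):

```python
# Price points from the slide for a 64x64, 10 Gb/port fabric (circa 2003).
prices = {
    "O-O-O switched": 100_000,    # all-optical
    "O-E-O switched": 1_000_000,  # optical-electronic-optical
    "O-E-O routed":   10_000_000,
}
ports, gb_per_port = 64, 10
capacity_gbps = ports * gb_per_port  # 640 Gbps aggregate

for name, dollars in prices.items():
    print(f"{name:>15}: ${dollars / capacity_gbps:,.2f} per switched Gbps")
# All-optical works out to 1% of the routed cost, as the slide states.
```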

13 Large-Scale International Application Development
[Table columns: Guaranteed Latency ● Guaranteed Scheduling ● Guaranteed Bandwidth; the per-project marks did not survive transcription]
– Access Grid: USA, Canada, The Netherlands, UK, Italy, Germany, Russia, Australia, China, Korea, Thailand, Taiwan, Japan, Brazil
– BABAR: USA and international partners
– The D0 Experiment: USA, CERN, Germany, France, Japan and other worldwide collaborators
– GiDVN (Global Internet Digital Video Network): CCIRN DVWG, worldwide membership
– Hubble Space Telescope: USA, France and others worldwide
– SC Global: USA and international partners
– Sloan Digital Sky Survey (SDSS): USA, France and worldwide
– Virtual Room Videoconferencing System (VRVS): CERN, Switzerland; Caltech, USA; others
– vlbiGrid: USA, The Netherlands, Finland, UK and worldwide

14 Large-Scale International Middleware and Toolkit Development
[Table columns: Guaranteed Latency ● Guaranteed Scheduling ● Guaranteed Bandwidth; the per-project marks did not survive transcription]
– EU DataGrid: CERN, France, Italy, The Netherlands, UK, Czech Republic, Finland, Germany, Hungary, Spain, Sweden (in cooperation with US grid projects, notably GriPhyN, PPDG and iVDGL)
– EU DataTAG: USA and Europe
– Globally Interconnected Object Databases (GIOD): USA and CERN
– Globus: USA, Sweden, others internationally
– MONARC for LHC Experiments: CERN, Switzerland; Caltech, USA; others
– UK e-Science Programme: UK and USA

15 StarLight: Perhaps the World’s Largest 1GigE and 10GigE Exchange
Abbott Hall, Northwestern University’s downtown Chicago campus. StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications. Operational since summer 2001, StarLight is a 1GigE and 10GigE switch/router facility for high-performance access to participating networks, and is becoming a true optical switching facility for wavelengths.

16 StarLight’s Dutch and Canadian Partners
Kees Neggers, SURFnet ● Bill St. Arnaud, CA*net4

17

18 Hard Problems
– The Internet is not designed for single large-scale users; TCP is not usable over long fat networks (high bandwidth-delay product paths)
– Circuits are not “scalable”
– All intelligence has to be at the edge
– Tuning compute, data, visualization and networking across clusters to get an order-of-magnitude improvement
– Security at 10 Gb line speed
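The TCP point can be made concrete with the bandwidth-delay product (BDP): the amount of data that must be in flight to keep a pipe full. A sketch with an assumed 10 Gbps lambda and a hypothetical 60 ms round-trip time (neither figure comes from the slide):

```python
# Why stock TCP struggles on long fat networks: BDP vs. the classic window.
link_gbps = 10.0   # assumed: a dedicated 10 Gbps lambda
rtt_s = 0.060      # assumed: a cross-country round-trip time

# Bytes that must be unacknowledged in flight to saturate the link.
bdp_bytes = (link_gbps * 1e9 / 8) * rtt_s
print(f"BDP: {bdp_bytes / 2**20:.0f} MiB in flight")  # ~72 MiB

# Without the window-scale option, TCP's 16-bit window caps at 64 KiB,
# so throughput collapses to window / RTT.
max_window = 64 * 1024
throughput_mbps = max_window * 8 / rtt_s / 1e6
print(f"64 KiB window over 60 ms RTT: ~{throughput_mbps:.1f} Mbps")  # ~8.7 Mbps
```

Roughly 8.7 Mbps on a 10 Gbps circuit, i.e. under 0.1% utilization; this is why dedicated lambdas alone do not solve the transport problem.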

19 The OptIPuter Project Will
– Investigate, procure and install optical and electronic switches.
– Explore protocol-stack development (for VLAN/optical management, control and data planes).
– Make progress in removing hierarchical protocol layers, moving toward network transparency.
– Optimize network-attached computational PC clusters for large-scale data mining and visualization.
– Facilitate measuring and monitoring 1-10 Gb networks in experimental and production configurations.
– Compare strategies for grids moving large data over moderately congested links vs. dedicated links.
– Investigate the dynamic provisioning of light paths by high-performance applications on optical networks.
– Explore mechanisms to communicate and deliver Class of Service in high-speed networks.
– Debate the security advantages of switched lambdas.

20 Questions?

