Visible Human Networking Update
Thomas Hacker
June 3, 2000

Outline
– Latest News
– U-M Network Connectivity
– Existing CPC Network Topology
– Proposed CPC Network Topology
– Performance Improvement Program
– NPACI Storage Resource Broker
– Performance Goals

Network Status
In the works:
– CPC upgrade from OC-3 (155 Mbps) to OC-12 (622 Mbps)
– CPC router purchase
OC-12 line installed from Media Union to Arbor Lakes; needs some tuning to improve performance.
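Much of the tuning needed to fill the new OC-12 path comes down to sizing TCP windows to the bandwidth-delay product. The short Python sketch below is illustrative only: the 622 Mbps line rate is from the slides, while the round-trip times are hypothetical placeholders, not measured U-M numbers.

# Bandwidth-delay product: the TCP window needed to keep a path full.
# The OC-12 rate comes from the slides; the RTT values are assumed examples.

def tcp_window_bytes(rate_bps: float, rtt_seconds: float) -> float:
    """Return the bandwidth-delay product in bytes."""
    return rate_bps * rtt_seconds / 8.0

OC12_RATE_BPS = 622e6

for rtt_ms in (2, 20, 70):  # campus, regional, cross-country (assumed)
    window = tcp_window_bytes(OC12_RATE_BPS, rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:3d} ms -> window ~{window / 1024:8.0f} KB "
          f"({window / (64 * 1024):.1f}x a default 64 KB window)")

With a default 64 KB window and a 20 ms round trip, a single TCP stream tops out near 26 Mb/s regardless of the line rate, which is why host-level window tuning accompanies the circuit upgrade.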

[Diagram: University of Michigan Network Topology. CPC attaches via gigabit Ethernet and OC-12 to the U-M backbone (BIN), the ITD and CAEN FDDI rings, and the Michigan GigaPoP (MERIT/MichNet gateway), with firewalls separating the campus from external paths to the vBNS/Abilene research networks and to commercial networks via the Cleveland POP; other external connections shown include DOE NSI and DREN.]

[Diagram: Existing CPC High Performance Network Topology, 5/2000. Compute: the CPC IBM SP2 complex (48 switchless 66 MHz training nodes; 48 166 MHz production nodes with 1 GB memory; 64 + 16 166 MHz SDSC nodes arriving June 2000), an SGI Origin 2000 (16 processors, 12 GB), an SGI PowerChallenge (16 processors, 4 GB), an IBM NPACI Globus research server, and the CPC Beowulf Linux cluster (dual Pentium Pro 200 MHz nodes, 128 MB RAM each) behind a Linux cluster firewall. Servers (IBM F-50/H-50): the new ADSM/Tivoli mass storage server and robotic tape controllers, the existing ADSM mass storage server and tape controller, the NPACI AFS storage server, the multi-resident/resident AFS mass storage server, the SP2 control workstation, a network performance measurement station, and the Tivoli/ADSM backend server. Storage: an STK Timberwolf tape silo, 216 GB dual-channel RAID 1, 144 GB RAID 1, a 100 GB ADSM data volume, 200 GB backup/archive, and an IBM CD-R jukebox on SCSI. Interconnect: switched 10/100BaseT through a CISCO 5500 Ethernet switch and a CISCO 2900XL 10/100 switch with EtherChannel-bonded 100BaseT links, FDDI (100 Mb/s) via a CISCO CDDI/FDDI concentrator, and OC-3 ATM (155 Mb/s) through CISCO LightStream 1010 ATM switches at the Media Union and Chrysler buildings, with OC-12 (622 Mb/s) onward to Abilene.]

[Diagram: Proposed CPC High Performance Network Topology. A CISCO 6509 router becomes the hub, with gigabit Ethernet (1000 Mb/s) links to the CPC IBM SP2 complex, the SGI Origin 2000 and PowerChallenge servers, the CPC Beowulf Linux cluster, the UMVH production prototype Compaq cluster, the UMVH SGI Octane webserver, the Media Union SGI Onyx2, and the ADSM/Tivoli mass storage system (42 TB), which is backed by a Storage Technology Timberwolf 9710 DLT tape silo (about 41 TB, 588 DLT7000 cartridges, 6 DLT7000 drives). Uplinks run at OC-48 and OC-12 through the Media Union ATM router, then via the Engineering ATM cloud or the ITD-Cooley ATM switch to the Arbor Lakes ATM switch and on to Abilene at 622 Mb/s. IBM F-50 servers (NPACI AFS, multi-resident AFS, ADSM/Tivoli robotic tape controller, SP2 control workstation, network performance measurement station) and the IBM H-50 Tivoli/ADSM backend server connect over EtherChannel-bonded 100 Mb/s switched Ethernet or gigabit Ethernet, with a 1440 GB Sun A1000 disk array, 216 GB dual-channel RAID 1, 144 GB RAID 1, the ADSM data volume, 200 GB backup/archive, and 36 GB of AFS disk attached via SCSI.]

High Performance Networking Upgrade Program

Current applications that will immediately benefit from the upgrade program:
– NPACI Storage Resource Broker (Reagan Moore, Data Intensive Computing Environment thrust area)
– University of Texas / University of Michigan SP2 system integration collaboration: AFS file system, joint job scheduling, Engineering Materials Database sharing
– Multi-resident AFS wide-area mass storage file systems (an AFS server that uses the mass storage system)
– Job scheduling and movement of large multi-gigabyte files between NPACI and national HPC sites; Grid computing infrastructure (Globus, Legion, Linux clusters)
– NPACI thrust areas: Engineering; Education, Outreach and Training; ISR-Social Science
– Visible Human / NPACI Neurology collaboration (Brian, U-M); Human Cochlear Modeling (Ed, U-M; Mark, UCSD)

Approach, spanning the application, system, and network levels (see the tuning sketch below):
– Upgrade the physical infrastructure: move the network from 10/100 Ethernet and ATM to gigabit Ethernet, and from OC-3 to OC-12
– Optimize facilities and operations through network performance testing and tuning and host network adapter upgrades
– Train and consult with users to optimize application use of the network: application-level training and lmbench performance testing (Matt Mathis, PSC/NLANR TCP testrig)
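Host-side tuning of the kind the TCP testrig work targets largely amounts to raising socket buffer sizes so the kernel can keep a full window in flight. Below is a minimal sketch, assuming a generic Linux/Unix host; the 4 MB buffer size is an illustrative figure, not a value from the slides.

import socket

# Request large send/receive buffers before connecting, so the kernel can
# advertise a TCP window big enough for a high bandwidth-delay path.
# BUF_BYTES is an illustrative value, not a tuned U-M setting; the OS may
# silently cap it at its configured per-socket maximum.
BUF_BYTES = 4 * 1024 * 1024

def make_tuned_socket() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_BYTES)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_BYTES)
    return s

if __name__ == "__main__":
    s = make_tuned_socket()
    print("requested %d bytes, kernel granted snd=%d rcv=%d" % (
        BUF_BYTES,
        s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF),
        s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)))
    s.close()

Printing back what the kernel actually granted is the useful part: if the granted value is far below the request, the per-host maximum needs to be raised before any wide-area tuning will help.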

[Diagram: Application Data Transfer (DataTransfer.ppt), tracing the path of data from one application to another. Sending host: file system disk bandwidth (first and second file system reads, cached read bandwidth, and the file system "dirty page" policy), bcopy memory read bandwidth, virtual memory, the network socket, TCP/IP, and the network interface card. Network: routers, link characteristics (link bandwidth, link RTT, link packet loss), and Internet characteristics (bi-directional bandwidth, bi-directional latency/RTT, route stability, path length in hops). Receiving host: network interface card, TCP/IP, virtual memory write bandwidth, file system write bandwidth, and the time for the application to process the data. TCP/IP loopback network bandwidth is measured on each end host.]
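The diagram singles out loopback TCP/IP bandwidth as a per-host quantity worth measuring, since it bounds what the host's protocol stack can deliver before the network is even involved. The following is a rough, self-contained Python sketch of such a measurement; the 64 KB message size and 200 MB transfer total are arbitrary test parameters, not figures from the presentation.

import socket
import threading
import time

MSG = b"x" * 65536            # 64 KB per send (arbitrary test size)
TOTAL = 200 * 1024 * 1024     # move ~200 MB in total (arbitrary)

def sink(server: socket.socket) -> None:
    """Accept one connection and read until the sender closes it."""
    conn, _ = server.accept()
    while conn.recv(1 << 16):
        pass
    conn.close()

def loopback_bandwidth_mbps() -> float:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))          # any free port on loopback
    server.listen(1)
    reader = threading.Thread(target=sink, args=(server,))
    reader.start()

    client = socket.create_connection(server.getsockname())
    sent, start = 0, time.time()
    while sent < TOTAL:
        client.sendall(MSG)
        sent += len(MSG)
    client.close()
    reader.join()                          # wait until the receiver drains everything
    elapsed = time.time() - start
    server.close()
    return sent * 8 / elapsed / 1e6

if __name__ == "__main__":
    print("loopback TCP bandwidth: ~%.0f Mb/s" % loopback_bandwidth_mbps())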

Networking Goals
– Get the new router and a tuned OC-12 connection in place
– Move critical servers and HPC systems to gigabit Ethernet
– Develop the system, network, and Internet performance tuning toolset and expertise to optimize use of resources
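One small piece of such a toolset might be a script that records each host's kernel TCP buffer ceilings before and after tuning. A minimal sketch follows, assuming a Linux host; the /proc/sys/net/core entries are standard Linux paths, and other Unixes expose the same limits through different knobs.

from pathlib import Path

# Snapshot the kernel's per-socket buffer limits so tuning changes can be
# tracked across hosts.  These /proc entries exist on standard Linux systems.
SETTINGS = [
    "net/core/rmem_default",
    "net/core/rmem_max",
    "net/core/wmem_default",
    "net/core/wmem_max",
]

def snapshot() -> dict:
    values = {}
    for name in SETTINGS:
        path = Path("/proc/sys") / name
        try:
            values[name] = int(path.read_text().strip())
        except OSError:
            values[name] = None   # not a Linux host, or entry missing
    return values

if __name__ == "__main__":
    for name, value in snapshot().items():
        print(f"{name}: {value if value is not None else 'unavailable'}")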