CASPUR SAN News Andrei Maslennikov Orsay, April 2001.



A.Maslennikov - Orsay

Will be discussed:
- Goals
- Fabric
- Bridges
- Adding a new device to the fabric
- Our devices
- Distributed tapes
- Some plans

Goals

Distributed tape drives
- Modern tape drives are rather expensive
- To get a good ROI, drives should be shared with no loss of performance

Stay with a technology that has a future
- Speeds will grow to 200 MB/sec and then to 400 MB/sec
- Switches will become interconnectable over the WAN with ATM/STM Interworking Units

Ability to relocate devices with ease
- A Fibre Channel SAN allows for congestion-free, high-speed connectivity with guaranteed delivery
- Once a device is attached to the SAN, it can keep its physical location
- Devices may be remotely reassigned from one host to another
- NB: we only want this for our very central and institutional services

Fabric

- We use Brocade 2400 and 2800 switches; currently 2x2400 and 2x2800, 48 ports in total
- Brocade switch:
  o supports practically all FC classes of service and peripherals
  o performant: up to 233 MB/sec
  o runs a user-friendly Fabric OS with a graphical interface
  o the whole SAN can be administered from any switch
  o allows for zoning
  o allows for redundant paths
  o so widespread that device vendors consider Brocade connectivity a must
- Only weak point: OS upgrades
  o a switch reboot is required
  o way out: spare ports on other switches and scheduled shutdowns
  o but this is true for any make of switch…

Host-Based Adapters

[Diagram: two Brocade zones; Host 1 and Host 2 connect through their HBAs to a shared Tape and to Disk1 / Disk2]

- We tried the Emulex LightPulse LP8000 and the QLogic QLA2200
- Both models are fully supported on Solaris, AIX, Linux, Wxx
- We however discarded the QLA2200 at the initial stage, as we observed interference between two Solaris hosts connected to the fabric with this HBA and sharing a tape drive: a job on host 1 was running a heavy I/O test on disk 1, and a reboot of host 2 resulted in a SCSI timeout error on host 1 / disk 1
- With the LP8000 this interference was not seen, so we "jumped on the Emulex train" and have used LP8000s on all our hosts since
- We will also try the QLA2200 again soon, as its driver has since been corrected

Bridges

- We use 2 Crossroads 4200 StorageRouters to connect SCSI STK 9840 and DLT7000 tape drives to the fabric
- Features:
  o supports 2 SCSI buses (differential or single-ended) with up to 15 devices on each
  o GBIC slot
  o automatic configuration
- Problem observed:
  o we were able to smoothly share a DLT over Crossroads box 1 between 6 hosts
  o the 9840 on Crossroads box 2 was shareable by up to 3 hosts; adding a 4th host jeopardized tape access on all 4 machines (tape off-line)
  o Crossroads was blaming Emulex and vice versa
  o solution: the two boxes had different firmware; the older firmware worked, the newer did not

New device, step by step

- Make sure the fabric login occurred
- Find out the device WWNN / WWPN
- Create a device alias (nickname)
- Create/modify zones:
  o normally create one new zone for each HBA (give it an alias!)
  o populate each zone with one HBA and with disk and tape aliases
  o do not put a disk device into two different zones, unless you know what you are doing
  o no limitations for tapes - share them!
- Set up the persistent bindings in the HBA driver configuration file
- Create the new devices (varies with OS)
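The zoning rules above (one zone per HBA, a disk alias in at most one zone, tape aliases shared freely) can be sketched as a small sanity check. This is a hypothetical illustration of the policy only, not a real switch API; all zone and alias names are invented:

```python
# Sketch of the zoning policy described above (hypothetical names,
# not a Brocade API): each disk alias may appear in at most one
# zone, while tape aliases may appear in any number of zones.

def check_zoning(zones, disks, tapes):
    """zones: dict mapping zone name -> set of member aliases.
    Returns a list of policy violations (empty list = OK)."""
    errors = []
    disk_seen = {}
    for zname, members in zones.items():
        for alias in members:
            if alias in tapes:
                continue  # tapes are deliberately unconstrained - share them!
            if alias in disks:
                if alias in disk_seen:
                    errors.append(
                        f"disk {alias} in zones {disk_seen[alias]} and {zname}")
                else:
                    disk_seen[alias] = zname
    return errors

# One zone per HBA, as recommended on the slide.
zones = {
    "zone_host1": {"hba_host1", "disk1", "tape9840"},
    "zone_host2": {"hba_host2", "disk2", "tape9840"},
}
print(check_zoning(zones, disks={"disk1", "disk2"}, tapes={"tape9840"}))  # []
```

A disk alias placed in two zones would be reported as a violation, while the shared tape alias passes unflagged.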

Our current devices

Hosts
- 5 IBM AIX ML8 (native and IBM-modified LP8000s)
- 6 Solaris 7+/8+ (native LP8000s)

Tapes
- 1 STK DLT 7000 via Crossroads
- 4 LTO on native FC
- all tapes are shared

Disks
- 3 Artecon Lynx II arrays (single controller)
- 2 DotHill 4200 arrays (dual active-active), via 2 Gadzoox Bitstrip TW hubs
- 1 DotHill 7100 (dual active-active)
- 1 IBM 2102 (2 controllers)
- around 4 TB in total

Distributed Tapes

[Diagram: hosts on the SAN; the 9740 STK library receives mount commands via serial line, the 3584 library receives mount commands via FC; LTO drives on native FC; 9840 and DLT behind a 4200 bridge (SCSI); a Tape Mounter and Tape Dispatcher exchange mount requests / free-tape messages with the hosts via LAN]

Protocol:
(1) Mount request >
(2) Lock wait <
(3) Mount command <
(4) Mount rc
(5) Access tape
(6) Free tape >
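The six-step protocol above can be sketched as a dispatcher that serializes access to each shared drive. This is a minimal model under the assumption of one lock per drive; all class and drive names are hypothetical, and the real CASPUR Tape Mounter/Dispatcher are site-specific services:

```python
import threading

# Hypothetical sketch of the mount protocol above: the dispatcher
# serializes access to each shared drive with a lock ("lock wait"),
# issues the mount command to the library, returns the mount rc,
# and releases the drive when the host frees the tape.

class TapeDispatcher:
    def __init__(self, drives):
        self.locks = {d: threading.Lock() for d in drives}

    def mount(self, drive, volume):
        # (1) mount request -> (2) lock wait until the drive is free
        self.locks[drive].acquire()
        # (3) mount command to the library (serial line or FC);
        # here we simply pretend the mount succeeded.
        rc = 0
        # (4) mount rc returned to the requesting host
        return rc

    def free(self, drive):
        # (6) free tape: the drive becomes available to other hosts
        self.locks[drive].release()

d = TapeDispatcher(["LTO1"])
rc = d.mount("LTO1", "VOL001")  # (5) the host now accesses the tape
d.free("LTO1")
print(rc)  # 0
```

A second host requesting the same drive would simply block in step (2) until the first host's free-tape message releases the lock.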

Some plans

- Bring tapes to Linux DB hosts (backup)
  o try both the LP8000 and the QLA2200 on Red Hat
- Try small (logical) slices of RAID-5 and RAID-0
  o non-shared scratch areas
  o faster cache areas for AFS on front-end hosts (AIX and Tru64)
  o system disks?
- GPFS tests on SP3 with 4 DotHill 7200 systems on 4 nodes

Projects for year 2001

Control and Monitoring
- agent up and running on all Linux hosts
- being ported to other architectures (encryption)
- server: integration with the Syscontrol DB (event logs and configuration)

Syscontrol DB
- MySQL now, migration to InterBase by the end of the year
- hosts' DB and syslog event collector DB
- hooks for syscontrol applications

Problem management
- currently studying possible solutions; Razor is one of the options

Console Server
- planned for the second half of the year
- currently looking at the serial hardware

Security
- accent on host-based security
- a host security "index" is being developed, to integrate with Syscontrol