1 Washington DC Area Technical Update: OpenVMS Update
March 28, 2001
Brian Allison (brian.allison1@compaq.com)

2 Discussion Topics
FC basics
HBVS & DRM
SANs
Fibre Channel tape support
SCSI/Fibre Channel Fast Path
FC 2001 plans
FC futures

3 Fibre Channel
ANSI-standard network and storage interconnect
–OpenVMS, like most other operating systems, uses it for SCSI storage
1.06 gigabit/sec, full-duplex, serial interconnect
–2 Gb in late 2001; 10 Gb over the next several years
Long distance
–OpenVMS supports 500 m multi-mode fiber and 100 km single-mode fiber
–Longer distances with inter-switch ATM links, if DRM is used
Large scale
–Switches provide connectivity and bandwidth aggregation to support hundreds of nodes
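(Not on the original slides.) The 1.06 Gbit/s figure is the line rate; Fibre Channel uses 8B/10B encoding, so the usable payload bandwidth works out to roughly 100 MB/s in each direction of the full-duplex link:

   1.0625 Gbit/s x 8/10 = 850 Mbit/s, or about 106 MB/s per direction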

4 Topologies
Arbitrated loop, FC-AL (NT/UNIX today)
–Uses hubs (or new switch hubs)
–Maximum number of nodes is fixed at 126
–Shared bandwidth
Switched (SAN - VMS/UNIX/NT)
–Highly scalable
–Multiple concurrent communications
–Switch can connect other interconnect types

5 Fibre Channel Link Technologies
Multi-mode fiber
–62.5 micron, 200 m
–50 micron, 500 m (widely used)
Single-mode fiber for Inter-Switch Links (ISLs)
–9 micron, 100 km
DRM supports ISL gateways
–T1/E1, T3/E3 or ATM/OC3
DRM also supports ISLs with Wave Division Multiplexors (WDM) and Dense Wave Division Multiplexing (DWDM)

6 Current Configurations
Up to twenty switches (8- or 16-port) per FC fabric
AlphaServer 800, 1000A*, 1200, 4100, 4000, 8200, 8400, DS10, DS20, DS20E, ES40, GS60, GS80, GS140, GS160 & GS320
Adapters (max) per host determined by platform type: 2, 4, 8, or 26
Multipath support - no single point of failure
100 km max length
* The AS1000A does not have console support for FC.

7 Long-Distance Storage Interconnect
FC is the first long-distance storage interconnect
–New possibilities for disaster tolerance
Host-Based Volume Shadowing (HBVS)
Data Replication Manager (DRM)

8 A Multi-Site FC Cluster
[Diagram: two sites, each with an FC host, HSG controller, and FC switch; host-to-host cluster communication between the sites; 100 km max separation]

9 HBVS: Multi-Site FC Clusters (Q4 2000)
[Diagram: two sites, each with an Alpha host, HSG controller, and FC switch, joined by FC (100 km) and by GigaSwitch-based cluster interconnects (FDDI, T3, ATM, CI, DSSI, MC, Gigabit Ethernet); a host-based shadow set spans the two sites]

10 HBVS Multi-Site FC Pro and Con
Pro
–High performance, low latency
–Symmetric access
–Fast failover
Con
–ATM bridges not supported until some time in late 2001
–Full shadow copies and merges are required today
   HSG write logging, after V7.3
–More CPU overhead
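A minimal sketch (not from the slides) of how a two-site, host-based shadow set is typically built with DCL Volume Shadowing commands; the virtual unit DSA100:, the member names $1$DGA100: (site A) and $1$DGA200: (site B), and the volume label are illustrative:

   $ MOUNT/SYSTEM DSA100: /SHADOW=($1$DGA100:, $1$DGA200:) DATA1   ! form the shadow set from one member at each site
   $ SHOW DEVICE DSA100:                                           ! confirm both members are present

Adding a member triggers the full shadow copy noted above, and a merge runs after a failure; that is the overhead the HSG write-logging item is aimed at.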

11 DRM Configuration
[Diagram: two sites, each with FC hosts, HSG controllers, and FC switches; cold stand-by nodes at the remote site; host-to-host cluster communication; 100 km max]

12 DRM Configuration
[Diagram: Alpha hosts and HSG controllers at each site, connected through FC switches over FC (100 km single-mode); host-to-host communication over LAN/CI/DSSI/MC; a controller-based remote copy set replicates between the sites; cold stand-by nodes at the remote site]

13 DRM Pro and Con
Pro
–High performance, low latency
–No shadow merges
–Supported now, and enhancements are planned
Con
–Asymmetric access
–Cold standby
–Manual failover; 15 min. is typical

14 Storage Area Networks (SAN)
Fibre Channel, switches, and HSG controllers together offer SAN capabilities
–First components of Compaq’s ENSA vision
Support for non-cooperating heterogeneous and homogeneous operating systems, and multiple OS cluster instances, through:
–Switch zoning
   Controls which FC nodes can see each other
   Not required by OpenVMS
–Selective Storage Presentation (SSP)
   The HSG controls which FC hosts can access a storage unit
   Uses an HSG access ID command
More interoperability with support for transparent failover
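A rough sketch (not from the slides) of the kind of HSG80 CLI commands involved in Selective Storage Presentation; the unit name D101 and the connection names are illustrative, and the exact syntax varies with the ACS firmware version:

   HSG80> SHOW CONNECTIONS
   HSG80> SET D101 DISABLE_ACCESS_PATH=ALL
   HSG80> SET D101 ENABLE_ACCESS_PATH=(VMS_HOST1, VMS_HOST2)

SHOW CONNECTIONS lists the host connections the controller has discovered; the two SET commands first hide unit D101 from all hosts and then present it only to the named connections, which is one form of the per-unit access control the slide describes.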

15 Zoning, SSP, and Switch-to-Switch Fabrics
[Diagram: four systems (Sys1-Sys4) attached through switches to an HSG; Sys1 and Sys2 are in Zone A, Sys3 and Sys4 in Zone B; the HSG ensures that Sys1 and Sys2 get one disk, and Sys3 and Sys4 get the other]

16 Cascaded SAN (8-switch cascaded; 8x2-switch cascaded, 2 fabrics)
–Well suited for applications where the majority of data access is local (e.g. multiple departmental systems)
–Scales easily for additional connectivity
–Supports from 2 to 20 switches (~200 ports)
–Supports centralized management and backup
–Server/storage switch connectivity is optimized for higher performance
–Design can be used for centralized or distributed access, provided that traffic patterns are well understood and factored into the design
–Supports multiple fabrics for higher availability

17 Meshed SAN (8-switch meshed; 8x2-switch meshed, 2 fabrics)
–Provides higher availability, since all switches are interconnected; the topology provides multiple paths between switches in case of link failure
–Ideal for situations where data access is a mix of local and distributed requirements
–Scales easily
–Supports centralized management and backup
–Supports from 2 to 20 switches
–Supports multiple fabrics for higher availability

18 Ring SAN (8-switch ring; 8x2-switch ring, 2 fabrics)
–Provides at least two paths to any given switch
–Well suited for applications where data access is localized, yet provides the benefits of SAN integration to the whole organization
–Scaling is easy, logical, and economical
–Modular design
–Centralized management and backup
–Non-disruptive expansion
–Supports from 2 to 14 switches, and multiple fabrics

19 Skinny Tree Backbone SAN (10-switch skinny tree; 10x2-switch skinny tree, 2 fabrics)
–Highest fabric performance
–Best for “many-to-many” connectivity and evenly distributed bandwidth throughout the fabric
–Offers maximum flexibility for implementing mixed access types (local, distributed, centralized)
–Supports centralized management and backup
–Can be implemented across wide areas, with inter-switch distances up to 10 km
–Can be implemented with different availability levels, including multiple fabrics
–Can be an upgrade path from other designs
–Supports 2 to 20 switches

20 Fibre Channel Tape Support (V7.3)
Modular Data Router (FireFox)
–Fibre Channel to parallel SCSI bridge
–Connects to a single Fibre Channel port on a switch
Multi-host, but not multi-path
Can be served to the cluster via TMSCP
Supported as a native VMS tape device by COPY, BACKUP, etc.
ABS, MRU, SLS support is planned
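A minimal sketch (not from the slides) of enabling TMSCP serving on the node with the FC tape path, so other cluster members can use the drives; these lines would go in SYS$SYSTEM:MODPARAMS.DAT, followed by a run of AUTOGEN, and the allocation-class value is illustrative:

   ! Serve local tapes (including FC tapes behind the MDR) to the cluster
   TMSCP_LOAD = 1
   ! Nonzero tape allocation class so served tape names are unique cluster-wide (value illustrative)
   TAPE_ALLOCLASS = 2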

21 Fibre Channel Tape Pictorial
[Diagram: three OpenVMS Alpha hosts on an FC switch with a RAID array disk controller and an MDR (FireFox) bridging to a tape library; cluster host-to-host connection to an OpenVMS Alpha or VAX system]

22 Fibre Channel Tape Support (V7.3)
Planned device support
–DLT 35/70
–TL891
–TL895
–ESL 9326D
–SSL2020 (AIT 40/80 drives)
–New libraries with DLT8000 drives

23 Fibre Channel Tape Device Naming
A WWID uniquely identifies the device
–Obtained from SCSI INQUIRY page 83 or 80
WWID-based device name
–$2$MGAn, where n is assigned sequentially
–Remembered in SYS$DEVICES.DAT
–Coordinated cluster-wide
   Multiple system disks and SYS$DEVICES.DAT files are allowed
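A short sketch (not from the slides) of using one of these devices from DCL once it has been configured; the device name $2$MGA0:, the volume label, and the save-set name are illustrative:

   $ SHOW DEVICE MG                                  ! list Fibre Channel tape devices
   $ INITIALIZE $2$MGA0: NIGHTLY                     ! write a volume label
   $ MOUNT/FOREIGN $2$MGA0:
   $ BACKUP/IMAGE DKA100: $2$MGA0:NIGHTLY.BCK/SAVE_SET
   $ DISMOUNT $2$MGA0: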

24 SCSI/Fibre Channel “Fast Path” (V7.3)
Improves I/O scaling on SMP platforms
–Moves I/O processing off the primary CPU
–Reduces “hold time” of IOLOCK8
–Streamlines the normal I/O path
–Pre-allocated “resource bundles”
Round-robin CPU assignment of Fast Path ports
–CI, Fibre Channel (KGPSA), parallel SCSI (KZPBA)
Explicit controls available
–SET DEVICE/PREFERRED_CPU
–SYSGEN parameters FAST_PATH and FAST_PATH_PORTS
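A rough sketch (not from the slides) of exercising these controls; the port name FGA0: (a KGPSA Fibre Channel port) and the CPU number are illustrative, and the qualifier spelling follows the slide, so check the V7.3 documentation for the exact form:

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> SHOW FAST_PATH
   SYSGEN> SHOW FAST_PATH_PORTS
   SYSGEN> EXIT
   $ SET DEVICE/PREFERRED_CPU=2 FGA0:   ! move this port's Fast Path work to CPU 2

FAST_PATH enables or disables the feature system-wide, FAST_PATH_PORTS selects which port types participate, and the SET DEVICE command overrides the round-robin CPU assignment for a single port.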

25 Fibre Channel 2001 Plans
Multipath failover to served paths
–The current implementation supports failover among direct paths
–High-availability FC clusters want to fail over to a served path when FC fails
–Served-path failover planned for V7.3-1 in late 2001

26 Fibre Channel 2001 Plans
Expanded configurations
–More than 20 switches per fabric
–ATM links
–Larger DRM configurations

27 Fibre Channel 2001 Plans
HSG write logging
–Mid/late 2001
–Requires ACS 8.7

28 Fibre Channel 2001 Plans
2 Gb links
–End-to-end upgrade during 2001
–LP9002 (2 Gb PCI adapter)
–Pleadies 4 switch (16 2 Gb ports)
–HSVxxx (2 Gb storage controller)
   2 Gb links to FC drives

29 Fibre Channel 2001 Plans
HSV storage controller
–Follow-on to the HSG80/60
–Creates virtual volumes from physical storage
–~2x HSG80 performance
–248 physical FC drives (9 TB); dual-ported 15k rpm drives
–2 Gb interface to the fabric
–2 Gb interface to drives
–Early-ship program Q3 2001

30 Fibre Channel 2001 Plans
SAN Management Appliance
–NT-based web server
–Browser interface to SAN switches, HSG60/80, HSV, and all future SAN-based storage
–Host-based CLI interface also planned

31 Fibre Channel Futures?
Low-cost clusters
–Low-cost FC adapter
–FC-AL switches
–Low-end storage arrays
Native FC tapes
Cluster traffic over FC
Dynamic path balancing
Dynamic volume expansion
SMP distributed interrupts
Multipath tape support
IP over FC

32 Potential Backports
–Fibre Channel tapes
–MSCP multipath failover
No plans to backport SCSI or FC Fast Path

33 Fibre is good for you!

