1
Complete diagram set, perfect for IP storage education and training
For more information visit:
2
Storage in a Wired World
Chapter 01 Storage in a Wired World
3
1-1: Components of Optimized Storage Deployment
Optimized Storage Deployment builds on:
Utilization and Efficiency
Availability
Networked Storage Architectures
Business Continuity Practices
Storage Competence and Agility
Think Defensively AND Offensively
4
1-2: Charting a Storage Networking Roadmap
Roadmap elements: Develop Skills, Implement Resources, Realize Capabilities.
Roadmap milestones: Utilization and Efficiency; Networked Storage Availability; Business Continuity Practices; Defensive AND Offensive Strategies; Storage Competence and Agility; Roadmap Completion.
5
1-3: Target Audience for the Book
The book addresses readers with technology responsibility and readers with financial responsibility: storage and networking professionals, research and development, senior IT management, CIOs and CEOs, CFOs and financial planners, senior business management, technology investors, industry press and analysts, and "Straight to the Core" ambassadors.
6
The Storage Architectural Landscape
Chapter 02 The Storage Architectural Landscape
7
2-1: Basic Storage Building Blocks
Operating System
File System
Volume Management
Storage Devices (Disks, LUNs)
8
2-2: The SNIA shared storage model
Diagram labels: Application; File/record layer – Database (DBMS), File system (FS); Block layer – block aggregation at the Host, Network, and Device; Storage devices (disks, …); Services; Storage domain.
This is the highest-level picture of the SNIA shared storage model. Note that applications lie outside the scope of the model – they are viewed here as “clients” (in the broadest sense) of the storage domain. There are three main components within the scope (each shows up on the animation): the file/record layer (databases and file systems); the block layer (starting from the low-level storage devices and extending up to block-based aggregation); and services, including management of the other components. In what follows, each of these will be examined in more detail.
© Storage Networking Industry Association
9
2-3: SNIA shared storage model depicting direct-attach…
Four configurations mapped onto the model's layers: 1. Direct-attach (host with LVM and software RAID, host block aggregation); 2. SAN-attach (host with LVM, network block aggregation in the SAN); 3. NAS head (host connecting over the LAN to a NAS head in front of SAN storage); 4. NAS server (host connecting over the LAN to a self-contained NAS server). This picture puts together most of the previous examples; device block aggregation is provided by the disk array.
© Storage Networking Industry Association
10
2-4: Basic JBOD design: disks attached to a primary loop and a secondary loop
11
2-5: RAID Levels
RAID 0 – Striping: Data blocks written sequentially across disks. Not really RAID – no redundancy. Higher performance than single-disk access.
RAID 1 – Mirroring: Data blocks written to both disks at once. 100% redundancy, requiring 100% additional capacity. Reads can be distributed across both disks to increase performance.
RAID 3 – Striping with Byte Parity: Adds parity information to rebuild data in the event of a disk failure. High transfer rate and availability with lower capacity overhead than RAID 1. Transaction performance is low because all disks operate in lockstep.
RAID 4 – Striping with Block Parity (independently accessible disks): Data blocks written sequentially to each disk in the array. Parity still protects data from disk failure. The dedicated parity disk is a write bottleneck and leads to poor performance.
RAID 5 – Striping with Rotational Parity: Parity blocks are written per row and distributed across all disks. Distributing the parity eliminates the single write bottleneck. The overhead of parity calculation on writes is offset with parallel microprocessors or caching.
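To make the RAID 5 parity mechanics concrete, here is a minimal Python sketch (illustrative only; the three-disk stripe, 4-byte blocks, and function names are assumptions, not anything defined in the book) showing how XOR parity rotates across disks and how a lost block can be rebuilt from the survivors:

```python
from typing import List

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_stripe(row: int, block_a: bytes, block_b: bytes) -> List[bytes]:
    """Lay out one stripe row across three disks; the parity block rotates
    to a different disk on each row, which removes the dedicated-parity
    write bottleneck of RAID 4."""
    layout = [block_a, block_b, xor_blocks(block_a, block_b)]
    shift = row % 3
    return layout[-shift:] + layout[:-shift] if shift else layout

def rebuild_lost_block(remaining_1: bytes, remaining_2: bytes) -> bytes:
    """Any single lost block equals the XOR of the other blocks in its row."""
    return xor_blocks(remaining_1, remaining_2)

disk0, disk1, disk2 = raid5_stripe(0, b"AAAA", b"BBBB")
assert rebuild_lost_block(disk1, disk2) == disk0   # recover a failed disk 0
```

The same XOR relationship is also why a RAID 5 write costs more than a RAID 0 write: updating one data block means recomputing and rewriting its row's parity as well.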
12
2-6: Direct Attached Storage
Servers on the local area network connect directly to storage devices over a high-speed SCSI or Fibre Channel interconnect.
Pros: Easily configured; external storage extends server life; SCSI or Fibre Channel server-to-storage connection.
Cons: Storage expansion incurs the cost of added servers; limited scalability; no resource sharing; single point of failure.
13
2-7: Comparing SAN and NAS
SAN: Windows, Sun, and IBM hosts each run their own file system (Windows, Sun, IBM) and issue block commands to captive disks on a SAN-attached disk array.
NAS: Hosts issue file-level commands (NFS, CIFS) to a NAS filer, which runs a common file system over the NAS disks.
14
2-8: Network attached storage
NAS filers, storage devices (RAID), and a tape library attach to servers over the local area network.
Pros: Uses standard Ethernet and IP; optimized file-handling performance; improved scalability over DAS.
Cons: Doesn't alleviate LAN congestion; disruptive “block” storage expansion/maintenance; the filer “owns” the storage resource; unique OS.
15
2-9: Basics of Fibre Channel Protocol (FCP)
FCP is an approved ANSI SCSI serialization standard to transmit SCSI commands, data, and status information between a SCSI initiator and a SCSI target on a serial link.
SCSI: core command set for host and device communication (SCSI bus protocol).
FCP layer: routable packaging (SCSI serialization) for connectivity over a Fibre Channel network.
16
2-10: Fibre Channel Storage Area Networks
Servers connect through a Fibre Channel storage area network to storage devices, alongside the local area network.
Pros: Gigabit-speed performance; distance storage connectivity of ~10 km; scales to thousands of storage nodes; enables storage resource sharing.
Cons: Unresolved Fibre Channel interoperability issues; lack of customer staff trained in Fibre Channel; introduces a second network technology; requires a separate management system.
17
2-11: Basics of Internet Small Computer Systems Interface
The iSCSI protocol transmits SCSI commands, data, and status information between a SCSI initiator and a SCSI target over a TCP/IP link (serial SCSI-3 transport), so the transport can be any TCP/IP network (SAN, LAN, MAN, WAN).
SCSI: core command set for host and device communication (SCSI bus protocol).
iSCSI layer: routable packaging (SCSI serialization) for network connectivity.
IP layer: universal connectivity to the global IP infrastructure.
18
2-12: Pure iSCSI storage networks
iSCSI servers and iSCSI storage devices share a pure iSCSI SAN built on an Ethernet and IP network, extensible to campus, MAN, and WAN.
Pros: Common IP and Gigabit Ethernet network platform for both LAN and SAN; gigabit-speed performance; unified management structure; extensible to WAN and MAN; optimized service-provider architecture; extensive base of IP and Gigabit Ethernet–trained personnel; well-defined roadmap to 10 Gbps and beyond.
Cons: Relatively new products; emerging IP storage standards; replacement of end devices (storage and adapters).
19
2-13: NICs, HBAs, and IP storage adapters
Traditional IP (LAN traffic, NAS): application-layer protocols (LAN messaging, NFS, CIFS) travel in 1.5K Ethernet frames to a file server.
Traditional storage (Fibre Channel based): the Fibre Channel Protocol carries blocks in 2K frames on Fibre Channel to a block server.
IP storage (blocks on Ethernet): iSCSI carries blocks in 1.5K Ethernet frames to a block server.
Currently, TCP/IP segmentation resides in software, not on the adapter. IP storage can operate this way, but it will be inherently slow with high CPU utilization. Fibre Channel avoids the slowdown and CPU load by handling segmentation in hardware on the adapter. New IP storage adapters, such as iSCSI HBAs, will also handle segmentation in hardware on the adapter through TCP offloading and acceleration.
20
2-14: Using IP to enhance Fibre Channel
Fibre Channel access layer: end-device aggregation, direct-attach focus, fan-out of storage ports – FC servers, FC directors, FC switches/SANs, FC JBODs, FC tapes, FC loops, FC RAIDs.
IP storage distribution layer: IP enhancement of FC storage – amplifies Fibre Channel reach and capabilities and protects both the existing IP and Fibre Channel infrastructure – built from IP storage switches or gateways, Layer 3 routers, chassis-based Ethernet switches, optical Ethernet, and DWDM.
IP core: traffic engineering, routing, and the global scale of IP.
21
2-15: Mechanisms for linking FC SANs and devices with IP
Fibre Channel SANs and individual Fibre Channel devices connect to each other across an IP core, with an IP storage switch or gateway* at each Fibre Channel attachment point.
22
2-16: Mechanisms for linking FC SANs to iSCSI with IP
A Fibre Channel SAN connects to an iSCSI SAN across an IP core, with an IP storage switch or gateway* at the Fibre Channel attachment point.
23
2-17: iSCSI and iFCP/FCIP to connect two FC end nodes
Currently, most FC-2 control messages* do not have an iSCSI equivalent, and the original information cannot be reconstructed from iSCSI. iFCP and FCIP provide full transparency for control messages and may provide more resiliency for applications that rely heavily on Fibre Channel infrastructure and error-recovery mechanisms. In this configuration, the two Fibre Channel end nodes (FC/FCP/SCSI) are joined over IP by iFCP or FCIP.
*e.g., RES (Read Exchange Status Block), RSI (Request Sequence Initiative), TPRLO (Third Party Process Logout), RSS (Read Sequence Status Block)
24
2-18: Protocol Options for IP storage
FCIP: Fibre Channel devices, Fibre Channel fabric services – FC SANs tunneled to each other across an IP network.
iFCP: Fibre Channel devices, Internet Protocol fabric services – IP or FC SANs interconnected over an IP SAN.
iSCSI: iSCSI/IP devices, Internet Protocol fabric services – a native IP SAN.
25
Chapter 03 The Software Spectrum
26
3-1: Core components of storage management
27
3-2: Software elements of storage management
Infrastructure Management (platform focus), Transaction Management (application focus), and Disaster Recovery Management (availability focus), covering these software elements: SAN Management, Data Coordination, Data Protection, Resource Management, Policy Management, High Availability, and Virtualization.
28
3-3: Storage management software components in the enterprise
1. SAN Management; 2. Resource Management; 3. Data Coordination and Protection; 4. Policy-Based Management; 5. Virtualization – all applied across the storage area network.
29
3-4: Typical storage networking zone configuration
Zone 1: Windows Host A HBA, JBOD-A (Drive 0), JBOD-A (Drive 1), DLT 8000 (Tape Library).
Zone 2: Unix Host B HBA, JBOD-B (Drive 0), JBOD-B (Drive 1).
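A zone is effectively an access-control set: an initiator can reach a target only if both appear in a common zone. The sketch below models the table above in Python (the dictionary layout and function name are hypothetical, for illustration only):

```python
# Zone membership taken from the configuration above.
ZONES = {
    "Zone 1": {"Windows Host A HBA", "JBOD-A Drive 0", "JBOD-A Drive 1",
               "DLT 8000 Tape Library"},
    "Zone 2": {"Unix Host B HBA", "JBOD-B Drive 0", "JBOD-B Drive 1"},
}

def can_access(initiator: str, target: str) -> bool:
    """True only if some zone contains both the initiator and the target."""
    return any(initiator in members and target in members
               for members in ZONES.values())

assert can_access("Windows Host A HBA", "JBOD-A Drive 0")      # same zone
assert not can_access("Unix Host B HBA", "JBOD-A Drive 0")     # different zones
```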
30
3-5: Topology view of a storage network
Hosts (four Alpha servers, three HP9000s, one RS/6000) connect through Switch 1, Switch 2, and Switch 3 to storage (two IBM ESS arrays, two HP XP512 arrays, and a Compaq tape library).
31
3-6: Types of IP and Fibre Channel protocol conversion
IP storage switches and gateways bridge the FC and IP domains in four ways:
1. Multiprotocol conversion (iSCSI–FC) between FC and iSCSI servers and FC or iSCSI storage end nodes.
2. Device-to-device interconnect (iSCSI–FC) for subsystem mirroring applications.
3. SAN extension / interconnect (iSCSI–FC), joining FC fabrics (FC E_Port) across a native IP fabric/network.
4. Protocol extension with iFCP and FCIP between FC switches across an IP switch.
32
3-7: The data coordination path
Operating System
File Systems / File Services
Storage Aggregation (Volume Management)
Storage Devices (Disks, LUNs)
33
3-8: Storage policies
Security and Authentication
Capacity, Content, and Quota Management
Quality of Service for Storage and SANs
34
3-9: Storage policy cycle
Start Storage Policy Cycle
35
3-10: Types of Storage Encryption
Fabric-attached: encryption sits in the fabric between FC switches, in front of disk and tape, enabling third-party services.
Subsystem-attached: encryption sits in front of the disk subsystem and tape, supporting offsite vaulting.
Remote mirrored disk: gateway/tunnel encryption between the primary-site and secondary-site gateways protects the remote mirror.
Application-attached: encryption between the application server, FC switch, and virtualized storage.
Source: NeoScale Systems
36
3-11: Storage transport quality of service
When an Oracle OLTP database backup is initiated, it receives a guaranteed 50 MB/s of throughput; the non-critical, non-prioritized file backup uses the remaining available bandwidth. Leverage Ethernet standards for IP storage prioritization: 802.1Q VLANs to segregate traffic streams, and 802.1p/Q prioritization to guarantee delivery of mission-critical storage data.
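As a rough illustration of the 802.1p/Q mechanism, the sketch below builds a VLAN-tagged frame with a priority value using scapy (assuming scapy is installed; the VLAN ID, priority value, destination address, and use of iSCSI port 3260 are illustrative choices, not values from the book):

```python
from scapy.all import Ether, Dot1Q, IP, TCP

# An 802.1Q-tagged frame: the 3-bit 802.1p priority field lets switches
# service storage traffic ahead of lower-priority streams on the same wire.
storage_frame = (
    Ether()
    / Dot1Q(vlan=100, prio=5)      # VLAN 100 segregates the storage stream
    / IP(dst="192.0.2.10")         # RFC 5737 documentation address
    / TCP(dport=3260)              # 3260 is the well-known iSCSI target port
)
storage_frame.show()               # inspect the layered headers
```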
37
3-12: Cost-availability tradeoff
Availability (backup and recovery time) ranges from seconds to days across the options: mirroring, replication, and snapshots recover in seconds to minutes; disk backup in minutes to hours; onsite and offsite tape vaulting in hours to days – with cost falling as recovery time lengthens.
38
3-13: Consolidated tape backup across servers and NAS
Disk Disk Tape
39
3-14: Serverless-backup through 3rd party copy
Servers and NAS issue only the backup initiation commands; the data movement from disk to tape is performed by third-party copy, bypassing the servers.
40
3-15: Comparisons of media technologies: disk, optical, tape
Cost per megabyte falls and retrieval time rises moving from high-end disk to midrange disk, optical, onsite tape, and offsite tape.
41
3-16: Comparing asynchronous and synchronous replication
Asynchronous replication: 1. Write to the primary logical volume. 2. I/O completion posted to the application. 3. Write to the secondary. 4. Write complete on the secondary.
Synchronous replication: 1. Write to the primary logical volume. 2. Write to the secondary. 3. Write complete on the secondary. 4. I/O completion posted to the application.
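The difference between the two sequences is simply where the application's I/O completion is posted relative to the secondary write. A small Python sketch (the Volume class and function names are hypothetical, purely to illustrate the ordering above):

```python
class Volume:
    """A trivial stand-in for a logical volume."""
    def __init__(self, name: str):
        self.name, self.blocks = name, {}
    def write(self, lba: int, data: bytes):
        self.blocks[lba] = data

def synchronous_write(primary: Volume, secondary: Volume, lba: int, data: bytes):
    primary.write(lba, data)        # 1. write to primary
    secondary.write(lba, data)      # 2. write to secondary
                                    # 3. write complete on secondary
    return "I/O complete"           # 4. completion posted to the application

def asynchronous_write(primary: Volume, secondary: Volume,
                       pending: list, lba: int, data: bytes):
    primary.write(lba, data)        # 1. write to primary
    pending.append((lba, data))     # queued; secondary is updated later (3-4)
    return "I/O complete"           # 2. completion posted immediately
```

Synchronous replication keeps both copies identical at the cost of waiting on the secondary's latency for every write; asynchronous replication hides that latency but lets the secondary lag behind the primary.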
42
3-17: Virtualization in homogeneous and heterogeneous environments
Virtualized Storage Heterogeneous Virtualized Storage A B
43
3-18: Areas of virtualization deployment: host, fabric, subsystems
44
3-19: Types of fabric virtualization
Fabric virtualization varies by data path (in-band vs. out-of-band) and by location (appliance-based vs. switch-based), with corresponding tradeoffs in speed* and scalability.
*Speed defined as lower latency.
45
3-20: Interoperability strategies and challenges for virtualization
Host Interoperability Challenge Interoperability Strategy Fabric Subsystem
46
Storage System Control Points Intelligence in the Storage Area Network
Chapter 04 Storage System Control Points Intelligence in the Storage Area Network
47
4-1: Storage Services within the Storage Software Landscape
See slide 3-2 for the greyed areas (Infrastructure / platform focus, Transaction / application focus, Disaster Recovery / availability focus; SAN Management, Data Coordination, Data Protection, Resource Management, Policy Management, High Availability).
Storage services and sample implementations:
Manageability – e.g., simplified administration
Capacity – e.g., virtualization
Recoverability – e.g., point-in-time copies
Performance – e.g., increasing I/O beyond a single disk with RAID 0, 5
Security – e.g., access control
Availability – e.g., surviving beyond disk failure with mirroring
48
4-2: Potential Storage Services Locations
Host: servers, mainframes.
Fabric: appliances, switches, directors.
Subsystem: disks/RAIDs/controllers, filers, tape libraries.
49
4-3: Historical trends across computing and storage infrastructure design
Computing architecture: mainframes → minicomputers and clustering → client-server → distributed Web, application, and database servers.
Storage access: direct → cluster → small network → global network.
Storage nodes: < 5 → 5–10 → 10–1000 → > 1000.
Storage architecture: centralized → decentralized → split → distributed.
Storage capacity: tape, RAMAC → large disks and RAID → storage islands → virtualized storage pools.
Storage location: data center → data center and beyond → PCs and application servers → anywhere.
50
4-4: Storage services require size, speed, and distance scalability
Storage services (manageability, performance, capacity, security, recoverability, availability) require scalability in size (number of nodes), speed (throughput), and distance (reach).
51
4-5: Routers progressed from single to distributed routing tables for greater speed and performance. In the single-table design, all ports share one routing table; in the distributed design, routing tables are distributed out to the ports.
52
4-6: Using distributed storage services for optimized deployments
With a single services location, all traffic funnels from ingress through one services point to egress; with distributed services locations, services run at multiple points between ingress and egress.
53
4-7: Optimized network flow through distributed services
With distributed systems, any ingress connects directly to any egress; direct paths are enabled because each intelligent node has embedded services intelligence.
54
4-8: Optimizing Network Efficiency
Network optimization and efficiency improve as replication is placed closer to the destination nodes: pushing replication to the appropriate end nodes minimizes network traffic and maximizes network efficiency.
55
4-9: Simplifying network usage models with network-based storage services. The figure compares the network segments used to complete a mirroring operation with host-based RAID 1, target-based RAID 1, and network-based RAID 1: moving the mirroring function into the network reduces the segments traversed and removes the doubled (2x) traffic on the host link.
56
4-10: Network based services provide full access to storage attributes
Network-based storage services have direct access to all network points and end nodes; with mirroring running as a service in the network between host and target, the host may never know if a mirror is substituted for a corrupt disk.
57
4-11: Optimal locations for storage services
Manageability – attribute-based storage, application coordination; optimal location: network or host; reasons: visibility, access, knowledge, reactive capability.
Capacity – LUN creation, volume and disk aggregation; optimal location: target (or host, network, target); reasons: resizing is easiest at the host.
Recoverability – packet recovery, point-in-time copy, disk error handling; optimal location: target or network; reasons: proximity, minimizing copy traffic, recovery speed.
Performance – RAID 0, 5 and data movement; reasons: balance target throughput with the network, minimize usage.
Security – access control; reasons: the network knows the ingress and egress points.
Availability – RAID 1, dynamic multipath hosting; optimal location depends: a network location optimizes bandwidth (similar to multicasting), while host placement requires OS integration.
58
4-12: Primary computing functions
Four major components of computing architecture form the new scalability model:
Memory – high-speed, low-latency external CPU cache (consider the performance that Rambus-class memory delivers).
CPU – the core processing engine, doubling roughly every 18 months.
I/O – the on-ramp/access to the lower-speed, high-capacity data storage silo.
Network – client and peer access (on-ramp/access) at 10 Gigabit Ethernet and above.
59
4-13: The gap between computing resources
Networking and I/O have failed to keep pace with memory and the CPU. Memory is fast and essentially free, growing in both capacity and speed. The CPU is fast and essentially free; the bottlenecks are interconnect technologies such as PCI. The network is fast and essentially free. I/O and storage are fast and essentially free, but the I/O on-ramps are still struggling. For example, with Gigabit Ethernet, if network traffic ran at a full 1 Gbps, the CPU would be 100 percent utilized on network traffic alone, and the PCI bus would be busy just trying to keep up with the network.
60
4-14: I/O and networking increase speed and converge architecturally
Prior thinking treated I/O and the network as separate adjuncts to the CPU and memory; current thinking converges them. Just as we are learning to bring networking technology back into the computer, we are learning to apply networking technology to I/O. Once again, networking technology provides leadership to the rest of the system.
61
4-15: Leveling the interconnect playing field
Network Scalability (size, speed, distance) Greater Bandwidth Enables Move to I/O Space A B Network Space Absorbs I/O Ethernet Efficiencies and Capabilities Move to WAN C D
62
4-16: Balancing Multi-Vendor Configurations
Host Current Competition Future Competition Fabric Subsystem
63
Reaping Value from Storage Area Networks
Chapter 05 Reaping Value from Storage Area Networks
64
5-1: Comparing defensive and offensive storage networking strategies
Defensive strategies: risk management; short-term cost savings; focus on current assets; conventional platforms; data preservation.
Offensive strategies: operational agility and flexibility; medium- to long-term cost savings; focus on current and future assets; emerging platforms; enhancing the enterprise's market position.
65
5-2: Storage growth increases risk and total cost of ownership (TCO)
Storage growth increases both risk and TCO over time. Defensive strategies drive down risk; offensive strategies drive down TCO.
66
5-3: Data availability through redundant paths in a simple SAN
Servers Storage Area Network Storage Devices
67
5-4: High availability through director-class switches and scalable fabrics
Servers Storage Area Network Core Fibre Channel Directors OR Gigabit Ethernet Switches Storage Devices
68
5-5: Incorporating remote sites to storage networks
Servers Remote Site Storage Area Network Remote Network Storage Devices
69
5-6: Sample customer configuration with a redundant core and remote storage
Legend Server FC or iSCSI Storage FC or iSCSI Core Switch Switch Primary Site Storage Remote Network Remote Site
70
5-7: Storage networks facilitate storage management savings
With direct-attached storage, each server carries its own storage management for its own storage devices; with SAN-attached storage, a single storage management function spans the servers and the shared storage devices.
71
5-8: IP networking extends across traditional data and new IP storage applications
An IP network provides flexibility and operational agility across all installations, and centralizing on one network technology reduces management and administration cost. The same IP network serves LAN clients and servers, NAS clients and servers, iSCSI servers and storage, FC SANs with IP storage gateways, and mixed SANs with IP storage switches or routers.
72
5-9: Conventional Fibre Channel storage area network and Ethernet/IP local area network
Dual SAN and LAN/NAS fabrics cost more to install and operate: running both IP and Fibre Channel fabrics for storage requires dual management systems, extra hardware, and duplicate staff and training, and the Fibre Channel fabric has no transition path to native IP end systems. Clients reach network-attached storage over the IP network (LAN), while FC SAN access goes through the FC fabric to FC storage.
73
5-10: Introduction of IP storage fabric and iSCSI
An IP storage fabric complements or replaces the FC fabric, with functionality identical to that of Fibre Channel switches but more flexibility. The non-blocking IP storage fabric provides full support for iSCSI and FC, with wire-speed conversion between the two protocols, and supports a gradual or immediate shift to iSCSI end systems. Clients keep LAN/NAS access over the IP network, SAN access can be iSCSI or FC, and the fabric extends to the IP MAN/WAN.
74
5-11: Integrated SAN and NAS
Enhanced functionality of the integrated SAN and NAS solution: optional use of the IP storage fabric for both SAN (block-based) and NAS (file-based) traffic; IS-NICs provide access for all end systems (SAN, NAS, and LAN clients); and IP storage switches provide a wire-speed, non-blocking IP storage fabric for all IP and Fibre Channel devices – FC servers, servers with IS-NICs, network-attached storage, hybrid NAS/SAN subsystems, iSCSI storage, and FC storage – with SAN/NAS access via iSCSI, FC, NFS, and CIFS and extension to the IP MAN/WAN.
75
5-12: Segmenting the IP storage fabric from the IP network backbone
Fibre Channel Servers iSCSI Servers NAS Client IP Storage Fabric Legacy Fibre Channel Fabric IP Network Backbone Fibre Channel Servers Fibre Channel Storage Server Farm End User Access Voice Fibre Channel Storage iSCSI Storage NAS Server
76
5-13: Framework for multi-layered storage fabrics
Access layer: end-device aggregation, direct-attach focus, fan-out of storage ports – FC servers, FC switches/SANs, FC storage, FC tapes, iSCSI/IP servers and storage, and multifunction devices including NAS.
IP storage distribution layer: IP enhancement of FC storage – amplifying Fibre Channel reach and capabilities while protecting the existing IP and Fibre Channel infrastructure – built from IP storage switches or gateways, Layer 3 routers, chassis-based Ethernet switches, optical Ethernet, and DWDM.
IP core: traffic engineering, routing, and the global scale of IP.
77
5-14: Allocating the IP storage fabric among storage platforms
The IP core and IP storage distribution layer serve multiple storage platforms – FC switches/SANs, iSCSI storage devices, and multifunction devices including NAS – and can be allocated among them as needed.
78
5-15: Setting information technology business priorities
First priority: defend, support, and expand the current business. Second priority: develop new business. Third priority: enhance IT capabilities and expertise. A long-term planning cycle and feedback loop ties them together; the third priority can't be ignored, but may receive lower spending.
79
5-16: Creating strategic and operational links between IT groups and business units. Strategic links run between the CEO and financial planning group, the information technology group, and the business units; operational links carry the IT group's integrated service offering to the business units' immediate strategic requirements.
80
5-17: Storage technologies and opportunities for competitive advantage
Leading and emerging technologies (e.g., multiprotocol SANs): a warning for the present where competitive strength is weak; exploit competitive advantage where it is strong.
Competitive and developing technologies (e.g., conventional SAN/NAS): a warning for the future where weak; develop competitive advantage where strong.
Basic and commoditized technologies (e.g., RAID, tape backup): a warning for survival where weak; possibly a warning about wasted resources where strong.
81
Business Continuity for Mission Critical Applications
Chapter 06 Business Continuity for Mission Critical Applications
82
6-1: Increasing levels of availability requirements
1. Backup 2. Disk Redundancy 3. Failover 4. Point-in-Time Copy 5. Wide-Area Replication 6. Wide-Area Replication and Failover
83
6-2: Sample storage application placements against RPO and RTO
Sample placements: backup sits at days-to-weeks on both the recovery point objective (RPO) and recovery time objective (RTO) axes; asynchronous replication at minutes; local clustering and synchronous replication at seconds.
84
6-3: Single host backup method and using mirroring prior to backup
Single-host backup: the application server processes the backup operation itself, using the primary data set.
Mirror-then-backup: quiesce the application, flush cached data, take the application out of hot backup mode, and attach the mirror to the backup host; the backup server processes the backup using the mirrored data set, and the mirror is reattached to the primary upon backup completion.
85
6-4: Comparing RAID 0+1 and 1+0
RAID 0+1 (mirrored stripes): lose one disk and the entire stripe is lost; the array shifts to the surviving mirror with no room for further error.
RAID 1+0 (striped mirrors): lose one disk and only that disk is lost; the stripe continues to run, and a disk failure in another mirror pair is non-impacting.
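The difference shows up clearly if you enumerate all two-disk failure combinations in a small four-disk array (a hypothetical layout; the labels a1/a2 and b1/b2 are purely illustrative):

```python
from itertools import combinations

DISKS = ["a1", "a2", "b1", "b2"]

def raid01_survives(failed: set) -> bool:
    """RAID 0+1 (mirror of stripes): stripe A = {a1, a2}, stripe B = {b1, b2}.
    The array survives only if at least one whole stripe is intact."""
    return failed.isdisjoint({"a1", "a2"}) or failed.isdisjoint({"b1", "b2"})

def raid10_survives(failed: set) -> bool:
    """RAID 1+0 (stripe of mirrors): mirror pairs (a1, a2) and (b1, b2).
    The array survives as long as no mirror pair loses both members."""
    return not ({"a1", "a2"} <= failed or {"b1", "b2"} <= failed)

for name, survives in (("RAID 0+1", raid01_survives), ("RAID 1+0", raid10_survives)):
    ok = sum(survives(set(pair)) for pair in combinations(DISKS, 2))
    print(f"{name}: survives {ok} of 6 possible two-disk failures")
# RAID 0+1 survives 2 of 6; RAID 1+0 survives 4 of 6.
```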
86
6-5: An example point-in-time snapshot copy method
12:00pm: a snapshot is taken, holding references to the original data (A, B, C, D). 12:05pm: data block C is updated to C1; the original C is copied and retained so it remains available through the 12:00pm snapshot. 12:30pm: another snapshot is taken, referencing the current data, including C1.
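A toy copy-on-write model makes the timeline above concrete: each snapshot starts as pure references, and an original block is copied aside only when it is first overwritten (the class and method names here are hypothetical, for illustration only):

```python
class SnapshotVolume:
    def __init__(self, blocks: dict):
        self.blocks = dict(blocks)         # live data
        self.snapshots = []                # each holds only preserved originals

    def snapshot(self) -> int:
        self.snapshots.append({})          # references only; no data copied yet
        return len(self.snapshots) - 1

    def write(self, name: str, value: str):
        for snap in self.snapshots:
            snap.setdefault(name, self.blocks[name])   # copy original on first write
        self.blocks[name] = value

    def read_snapshot(self, snap_id: int, name: str) -> str:
        return self.snapshots[snap_id].get(name, self.blocks[name])

vol = SnapshotVolume({"A": "A", "B": "B", "C": "C", "D": "D"})
noon = vol.snapshot()                      # 12:00pm snapshot taken
vol.write("C", "C1")                       # 12:05pm: C becomes C1 (original C retained)
assert vol.read_snapshot(noon, "C") == "C" # the 12:00pm view still sees C
assert vol.blocks["C"] == "C1"             # the live file system sees C1
```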
87
6-6: Examples of clustering and high-availability storage networking solutions
Failover and high availability: an application instance runs alongside an application standby with standby data access.
Multiple application failover: App 1, App 2, and App 3 run on separate hosts with a shared App 1–3 standby and standby data access.
Simultaneous data access: application instance 1 and application instance 2 access the data at the same time.
88
6-7: Off-site vaulting delivers geographic disaster tolerance
Data Center 1 Data Center 2 Location A Location B (Media Warehouse)
89
6-8: Intelligence required to avoid corruption during asynchronous replication
An 8 KB database data block may be split into two 4 KB writes. If only one half reaches the standby – block 2 committed while block 1 is incomplete – the standby database can be corrupted. Potential corruption arises from acknowledging a partial write, or from failing to maintain write-order fidelity during asynchronous replication.
90
6-9: Data consistency chart
For each replication mode, confirm that no partial writes are acknowledged, writes are applied in order, and blocks are written in full. Synchronous: yes – data is consistent at the second site. Asynchronous: if yes, data is consistent at the second site; if no, a point-in-time copy may be needed at the second site.
91
6-10: Stack layers for various replication technologies
Application layer – custom applications: data integrity easily maintained at the second site, but application-specific.
Database layer – Oracle Dataguard, Sybase replication, Quest Shareplex: database-administrator-friendly, but does not replicate binaries or application components.
Host file system layer – NetApp SnapMirror, VERITAS Storage Replicator, NSI Doubletake: active-active mode accessible during replication (non-database), but not available for raw-access database applications.
Logical volume layer – VERITAS Volume Replicator, Sun SNDR: replicates all elements, but has host support limitations.
LUN / storage array layer – EMC SRDF, HDS TrueCopy, XIOtech REDI SAN Links: multihost support, but storage-device compatibility issues and may not operate across more than two arrays.
92
6-11: Sample operation of database transaction log replication
Transaction logs are applied to the standby database periodically; the effective point-in-time copy is the last transaction log copied or replicated.
93
6-12: Layers of storage redundancy and sample enterprise applications
Redundancy layers: load balancers, Web servers, application servers, database servers, the storage area network, and storage subsystems, over a redundant Ethernet core.
Sample enterprise applications: internal Web sites (tied to back-end ERP databases), end-to-end applications (e-commerce, ERP with front-end Web services), and standalone databases.
94
Options for Operational Agility
Chapter 07 Options for Operational Agility
95
7-1: Increases in complexity of solution create value and need for outsourcing
Value rises with complexity as the solution scope grows from hardware, to software and infrastructure, to people, to the full combination of hardware, software, infrastructure, people, and management software.
96
7-2: Service offerings across storage and network service providers
Full-service SSP offerings: backup/restore, primary SAN, primary NAS.
Network service provider offerings: remote tape, remote SAN, remote NAS.
Accessory services: surge capacity, mirroring, content delivery.
A la carte management options: onsite management, offsite management, tape vaulting.
97
7-3: The effect of storage management software on the deployed infrastructure
Normalized Infrastructure SLA Readiness Normalization Gap
98
7-4: The effect of provisioning capacity to meet short-term capacity requirements
Plotting capacity (GB) against time: under the original purchase plan, Purchase 1 comes early; with a purchase plan that uses terabyte rental to cover short-term needs, Purchase 2 is deferred, yielding savings (*less the terabyte rental fees).
99
7-5: A comparison of the activities between onsite and offsite storage management
Onsite Storage Management Offsite Storage Management Value HW, SW and Infrastructure, People, Management Software Value HW, SW and Infrastructure, People, Management Software People People Software and Infrastructure Software and Infrastructure Hardware Hardware Complexity Complexity
100
7-6: The Storage SLA Process
Plan: requirements assessment, develop the SLA. Execute: final design, build-out, activation. Manage (ongoing activities): NOC management, change management.
101
7-7: Description of the relationship between centralization phases and roles. The initial phase covers quality assurance, asset tracking, and reactive management and troubleshooting; subsequent phases add proactive management, change management, event management, and problem management.
102
7-8: Lack of disruption when changing the application server
Application Transition App 1 App 2 App 3 App 1 App 2 App 3 App 2 Network Layer Network Layer
103
Initiating Deployment
Chapter 08 Initiating Deployment
104
8-1: Commoditization and open systems drive fluidity of the end-to-end storage chain – network I/O, server I/O (HBA), the intelligent storage node, device I/O, and the drive – with enablers at each link: iSCSI and IP storage protocols, PCI-X and PCI-Express, x86-based devices such as blade servers, and Serial ATA and IDE.
105
8-2: Applying fluidity to the IP Storage Model
Layered model: IP core/backbone (traffic engineering, routing, the global scale of IP); IP storage distribution layer (intelligent storage nodes with distributed services; IP enhancement of storage amplifies storage reach and capabilities while protecting the existing IP and Fibre Channel infrastructure); storage access layer (end-device aggregation, direct-attach focus, fan-out of storage ports; servers).
Platform convergence drivers (NAS, SAN, object): Ethernet and IP networks, IP storage protocols; rack-mounted x86 servers; PCI-X, PCI-Express; Serial ATA, IDE.
106
8-3: Locations for storage intelligence
Host: servers, mainframes.
Fabric: appliances, switches, directors.
Subsystem: disks/RAIDs/controllers, filers, tape libraries.
107
8-4: Overlaying host, fabric, and subsystem intelligence with the IP storage model (access, storage distribution, and IP core/backbone layers). The access layer has a direct-attach focus, aggregates end devices, and fans out storage ports. Access interconnects: Fibre Channel, iSCSI, parallel SCSI, InfiniBand, ESCON/FICON. Access devices: FC switches and directors, servers, RAIDs and tape libraries, FC loops. Access applications: network-attached storage (server to filer), SAN storage consolidation (iSCSI converted to FC), remote SAN connection (FC–IP–FC).
108
8-5: Minimizing improvement costs through product consolidation
Product consolidation spans three platforms – storage area networks, network-attached storage, and object-oriented storage – at each layer: access (hosts and subsystems: FC servers and disk arrays; NAS clients and servers; clients and servers), distribution (intelligent storage nodes: SAN appliance, IP storage router, transparent index or content nodes), and core (switches and directors: FC switches, IP core). Consolidation opportunities include the conventional Fibre Channel SAN, Fibre Channel SAN extension, network-attached storage, and object-oriented storage.
109
8-8: Using IP networking for SAN to SAN remote mirroring
FC SANs at Campus A and Campus B each connect through an IP storage switch or gateway to a high-availability IP MAN or WAN, which also reaches an IP SAN at Campus C, enabling SAN-to-SAN remote mirroring over IP.
110
8-9: Using an IP core for storage consolidation
Two FC SANs, each behind an IP storage switch or gateway, consolidate with native IP devices and FC devices onto a common IP core.
111
8-10: Metropolitan area IP connectivity for outsourced storage services. Customers A, B, C, and D each run an FC SAN on premises and connect across the MAN/WAN to primary storage at the storage service provider's collocation center.
112
8-11: The transition to reference information
Source: The Enterprise Storage Group, April 2002
113
8-12: Comparing dynamic and fixed content characteristics
Dynamic content (OLTP, ERP, data warehousing, …) vs. fixed content (images, documents, …):
Throughput: block and file I/O vs. retrieval speed and metadata query processing.
Availability: higher uptime requirements vs. 99% uptime.
Total cost of ownership: component-versus-component cost vs. greater upstream and downstream cost implications (e.g., more Exchange servers required to point to the data).
114
8-14: Implementing a distributed system for email storage
Servers Servers . . Intelligent Storage Nodes Storage Storage
115
8-15: Distributed intelligent storage nodes for NAS scalability
Servers Servers . . Intelligent Storage Nodes Storage Storage
116
Assessing Network Connectivity
Chapter 09 Assessing Network Connectivity
117
9-1: Cargo and Data Transport Analogies
Empty railcars on a railroad ≈ SONET frames on fiber-optic cable (layer one, SONET mux to SONET mux).
Containers on trailers on railcars ≈ IP packets in Ethernet frames on SONET (layers one, two, and three).
Containers placed directly on railcars ≈ IP packets directly on SONET (layers one and three).
118
9-2: Ethernet Interfaces on Multiplexers Enable Direct Link from LAN Switches
Packet over SONET: Ethernet LANs (100 or 1000 Mbps) connect through switches to routers, which attach to the SONET/SDH mux network over SONET/SDH links at 155 or 622 Mbps.
Direct Ethernet links to the muxes: Ethernet interfaces on the multiplexers let the LAN switches connect directly over Ethernet links (100 or 1000 Mbps), with the SONET/SDH mux network running at 622 or 2488 Mbps and no routers in the path.
119
9-3: File-based Backup Locally, via the LAN, and Remotely, via the Existing Data Network
Ethernet LAN clients and servers (file-based) back up locally via the LAN and remotely via the existing data network: LAN routers carry the traffic across the SONET/SDH mux network, and LAN servers convert between files (IP and Ethernet) and blocks (Fibre Channel or SCSI) on the external, block-based storage.
120
9-4: Block-based backup locally, via the SAN, and remotely, via dedicated fiber. Ethernet LAN clients and servers (file-based) at each site continue to use the LAN routers and the SONET/SDH mux network, while the SANs and their external, block-based storage are joined either by a separate dedicated Fibre Channel link or by a separate Fibre Channel mux circuit.
121
9-5: Rates for SONET and SDH transmissions
SONET (North America) / SDH (rest of world) rates:
OC-3c / STM-1: 155 Mbps
OC-12c / STM-4: 622 Mbps
OC-48c / STM-16: 2488 Mbps (2.5 Gbps)
OC-192c / STM-64: 9953 Mbps (10 Gbps)
122
9-6: Block-based backup locally, via the SAN, and remotely, via the existing SONET network, using an IP storage protocol. The SAN can be Fibre Channel, iSCSI, or iFCP; if the SAN is Fibre Channel, a separate switch is used to convert to IP before the traffic crosses the LAN routers and the SONET/SDH mux network to the remote external, block-based storage.
123
9-7: IP Storage Protocol Recommendations for SANs, Metro, and Wide Area Connections
Connection types: Fibre Channel to Fibre Channel (MAN and WAN); Fibre Channel to Fibre Channel (data center SANs); Fibre Channel to IP storage; IP storage to IP storage.
FCIP (tunneling): applies to Fibre Channel to Fibre Channel over the MAN and WAN; no specifications for the other connection types.
iFCP (native IP): applies to Fibre Channel to Fibre Channel connections (MAN/WAN and data center SANs); iSCSI is recommended for connections involving IP storage end systems.
iSCSI (native IP): applies to connections involving IP storage end systems; no specifications for Fibre Channel to Fibre Channel connections.
124
9-8: Native IP SANs can be interconnected across metro and wide areas with no additional conversion equipment. With Fibre Channel SANs, a separate switch converts FC into IP format (FCIP or iFCP) before crossing the LAN routers and the SONET/SDH mux network; with IP SANs, no separate switch is needed.
125
9-9: IP SANs can be Expanded Easily, using Standard Gigabit Ethernet Core Switches
Fibre Channel or iSCSI server links and storage links attach to multiprotocol storage switches, which connect to standard Gigabit Ethernet core switches over iFCP or iSCSI links; iSCSI devices can connect directly to the core switches or through the storage switches, and the core reaches the SONET/SDH mux network through a LAN router.
126
9-10: Service Provider Connections Vary by Subscriber Density and Location
Subscribers A, B, and C have data centers in a multi-tenant office building, connected over riser cables and CLE in an underground vault by the building to the SONET/SDH mux network. Subscriber A's backup center is in a suburban office building (new cable installed to the nearest mux), Subscriber B's backup center is in Asia (reached over the international WAN), and Subscriber C's backup center is in a nearby office building.
127
9-11: Areas for IP Storage Security Solutions
Device I/O: Fibre Channel HBAs and native IP (iSCSI) HBAs.
Data center network (SAN): Fibre Channel switches, IP storage switches, or IP switches.
VPN at data center egress: the minimum requirement for IP storage security.
SAN interconnect: IP and Ethernet MANs and WANs, protected by VPNs between the SANs.
128
9-12: Sample security implementation across data center, metro, and wide area networks
Data center network (SAN): FC, Gigabit Ethernet / IP, Layer 2/3; local-area connectivity; security options: MAC-level security, IPSec at the end device, FC encryption.
Metro area network: FC, Gigabit Ethernet / IP, Layer 2/3; dedicated or provisioned bandwidth; security options: MAC-level security, IPSec at the end device or egress point, FC encryption, VPN.
Wide area network: IP over Ethernet, Layer 3; shared network; security option: VPN.
129
Long Distance Storage Networking Applications
Chapter 10 Long Distance Storage Networking Applications
130
10-1: Remote data replication and centralized backup
A primary data center in London replicates to a metro-area backup data center in Reading, and storage is consolidated across the London and Vancouver data centers.
131
10-2: IP network performance is close to light speed and packet loss is near zero
Source: AT&T Website
132
10-3: Estimating the amount of data to be mirrored
Determine the amount of data stored on the primary storage array. Estimate the portion of the primary storage that is changed on a peak day. For example, the peak day might be a certain day of the week or the last day of each fiscal quarter. If the activity on that day is concentrated mainly into, say, eight or ten hours, then an average hourly data rate for the peak day should be calculated over that amount of time rather than over 24 hours. If there is a peak hour during the peak business day, it should be used for the estimate rather than simply taking the average rate. For example, if from 1:00 to 2:00 p.m., the data rate is two or three times the average, the circuit should be sized for that rate.
133
10-4: Numerical example of peak hour data activity
The usable capacity of the primary storage array is 12 TB (terabytes). However, the array only is about one-third full, so assume that 4 TB (4000 GB) of actual data is stored. On the peak day, approximately 15 percent of the data is changed: 4000 GB x 0.15 = 600 GB During the day, most of the activity occurs between 9:00 a.m. and 7:00 p.m. (ten hours), so the average amount of data written per hour during that period is approximately 600 GB / 10 hours = 60 GB/hour The amount of increase over the average for the peak hour of the day is not known, so a peak-to-average ratio of 2.5 is used: 60 GB/hour x 2.5 = 150 GB/hour
134
10-5: Estimating the amount of traffic on the network links
Convert the peak hour data estimate from gigabytes per hour to megabytes per second. Add approximately 10 percent for network protocol framing overhead. Convert from megabytes (MB) to megabits (Mb): there are about 1.05 million (2^20) bytes in a megabyte of data and eight bits in a byte, so the conversion factor is 8.39.
135
10-6: Numerical example of a network link traffic estimate
The amount of data written to the primary storage array during the peak hour is 150 GB. That can be converted to megabytes per second (MBps) as follows: 150 GB/hour x 1000 MB/GB = 150,000 MB/hour 150,000 MB/hour x 1 hour/3600 seconds = 41.6 MBps Including the additional framing overhead, the amount of WAN traffic generated per second during the peak hour is 41.6 MBps x 1.1 = 45.8 MBps In bits per second (bps), the estimated WAN bandwidth required for this application is 45.8 MBps x 8.39 Mb/MB = 384 Mbps
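The arithmetic in figures 10-4 through 10-6 is easy to wrap in a small helper; the sketch below just re-derives the example's ~384 Mbps figure from the same assumptions (the function and parameter names are mine, and small differences versus the text come only from intermediate rounding):

```python
def peak_wan_bandwidth_mbps(data_gb_stored: float, changed_fraction: float,
                            busy_hours: float, peak_to_avg: float = 2.5,
                            framing_overhead: float = 1.10) -> float:
    changed_gb = data_gb_stored * changed_fraction        # GB changed on the peak day
    peak_gb_per_hour = changed_gb / busy_hours * peak_to_avg
    mbytes_per_sec = peak_gb_per_hour * 1000 / 3600       # GB/hour -> MB/s
    mbytes_per_sec *= framing_overhead                    # ~10% protocol framing
    return mbytes_per_sec * 8.39                          # MB/s -> Mbps (2**20 * 8 / 10**6)

# 4 TB stored, 15% changed on the peak day, concentrated in 10 busy hours:
print(round(peak_wan_bandwidth_mbps(4000, 0.15, 10)))     # ~385, vs. ~384 in the text
```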
136
10-7: Estimating the propagation and node delays
Determine the actual route distance between the endpoints of the network link. The service provider usually can furnish that information. Assume a propagation delay of one millisecond per hundred miles (round trip). Determine the number of switches and routers in the data path. The service provider also can furnish that information or at least estimate it. Assume a relatively conservative estimate of two milliseconds per switch or router. Be sure to include the number of nodes in both directions for the round trip delay.
137
10-8: Estimating the congestion queuing delay
Determine the current average utilization of the network link without the storage application (0−100 percent). Subtract that value from 100 percent to derive the proportion of bandwidth that is available for the new storage networking application. Divide the total node delay by the proportion of bandwidth available for the new storage networking application. For example, if all of the bandwidth is available for storage networking, the divisor will be 1.0, and there will be no additional congestion queuing. On the other hand, if only half of the bandwidth is available, the divisor will be 0.5, which would predict that the node delay would be doubled by the congestion queuing.
138
10-9: Numerical example of total network latency
The distance between the endpoints of the network link, which connects the primary and secondary data centers, is 200 miles. Therefore, the round-trip propagation delay is: 200 miles x 2 = 400 miles 400 miles x 1 ms/100 miles = 4 ms There are four routers in the path taken by the data, so the estimated round trip node delay is: 4 nodes x 2 x (2 ms/node) = 16 ms The current average network link utilization without the storage application is 15 percent, so the amount of bandwidth available for the new storage networking application is: 100% – 15% = 85% With congestion processing delay, the round trip node delay is increased to: 16 ms / 0.85 = 19 ms (in this case, the congestion delay adds 3 ms) The total network latency (propagation delay + node delay + congestion delay) is: 4 ms + 16 ms + 3 ms = 23 ms
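The latency estimate in figures 10-7 through 10-9 can be captured the same way; this sketch reproduces the 23 ms result from the example's inputs (200 route miles one way, four routers, 15 percent existing utilization – the function name and parameters are mine):

```python
def total_network_latency_ms(route_miles_one_way: float, nodes_one_way: int,
                             existing_utilization: float,
                             ms_per_100_miles_round_trip: float = 1.0,
                             ms_per_node: float = 2.0) -> float:
    propagation = route_miles_one_way * 2 / 100 * ms_per_100_miles_round_trip
    node_delay = nodes_one_way * 2 * ms_per_node          # both directions
    available = 1.0 - existing_utilization                # share left for storage traffic
    return propagation + node_delay / available           # queuing inflates node delay

print(round(total_network_latency_ms(200, 4, 0.15)))      # ~23 ms, as in the example
```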
139
Managing the Storage Domain
Chapter 11 Managing the Storage Domain
140
11-4: Defining simplicity
Current environment (context overload): storage infrastructure plus its management drive dollars per gigabyte up over time. Required environment (managed context): management keeps the burden contained so dollars per gigabyte fall over time.
141
11-5: Practice disciplines for managing the storage domain
Source: The Data Mobility Group
142
11-6: Business management practice objectives and implementation examples. Objective: management of the storage domain as a business entity. Implementation examples: service level agreements (SLAs), tracking and usage charge-back, quality of service (QoS), hardware and software decision making. Source: The Data Mobility Group
143
11-7: Risk management practice objectives and implementation examples
Objective: use of the storage domain to mitigate corporate risk. Implementation examples: disaster recovery and business continuity, backup and restore, security. Source: The Data Mobility Group
144
11-8: Data management practice objectives and implementation examples
Objective: effective presentation of data within the storage domain using storage-based file systems and copy functions. Implementation examples: storage-based file systems (e.g., NAS); copy functions for content distribution and performance optimization; network storage design and architecture. Source: The Data Mobility Group
145
11-9: Implementing Quality of Service Policies
Host systems – an OLTP server, a customer Web catalog server, a corporate server, and a corporate file server – connect through the storage network to a partitioned intelligent storage subsystem (partitions A–D). QoS mechanisms: IEEE 802.1p/Q, IETF DiffServ, IETF MPLS, IETF RSVP. A traffic prioritization engine reorders incoming traffic so that high-priority flows (e.g., the OLTP server) exit first and low-priority flows (e.g., the corporate file server) exit last, regardless of arrival order.
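The traffic prioritization engine amounts to a priority queue: requests leave in class order rather than arrival order. A minimal sketch (the class-to-priority mapping and names are assumptions for illustration):

```python
import heapq
from itertools import count

PRIORITY = {"OLTP": 0, "Web catalog": 1, "Corporate server": 2, "File server": 3}

def prioritize(incoming):
    """Return requests ordered by class priority, preserving FIFO within a class."""
    heap, seq = [], count()
    for traffic_class, request in incoming:
        heapq.heappush(heap, (PRIORITY[traffic_class], next(seq), request))
    return [request for _, _, request in
            (heapq.heappop(heap) for _ in range(len(heap)))]

arrivals = [("File server", "D"), ("Web catalog", "B"),
            ("Corporate server", "C"), ("OLTP", "A")]
print(prioritize(arrivals))    # ['A', 'B', 'C', 'D'] -- high priority exits first
```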
146
11-10: Storage Network Open Management Checklist
Checklist dimensions: SAN/DAS and NAS; Vendor A, Vendor B, Vendor C.
Discover: topology, logical relationships.
Monitor/report: array, SAN, NAS, and host health; array performance; SAN performance; host performance; database performance; configuration/utilization.
Provision: file systems and volume managers; zoning; LUN masking; storage devices.
Automate: alert resolution; provisioning; replication.
Source: EMC Corporation
147
Perfecting the Storage Engine
Chapter 12 Perfecting the Storage Engine
148
12-1: From Defensive to Offensive Strategies
Moving from defensive strategies to offensive strategies is a medium- to long-term planning effort – taking storage from context to core.