1 Fast Path Data Center Relocation Presented by: AFCOM & Independence Blue Cross April 8, 2011

2 Agenda
Some Background to Understand Why
Initial Planning (After Fear Subsides)
Critical Path Components
Dark Fiber DWDM Network Issue Identified
Summary of Project Results
Keys to Success
Overview of DR Network Design

3 Background Events
Completed Relocation of the Primary Data Center in September 2009
Completed Implementation & Data Center Infrastructure Build-out of the Insourced DR Capability in January 2010 at the Secondary Site (Including Cancellation of the Outsource Contract)
In Q1 2010: Learned the Location Supporting the DR Data Center (580) Could Be Phased Out in 2011
Completed Relocation of the Entire Test, QA, and Development Server Infrastructure to the Primary Data Center in Q2 2010
Q2 2010: Started a Detailed Data Center Site Alternative Study to Evaluate Options
By July 2010, Two Viable Options Had Been Identified:
– Build-out within an existing IBC real estate footprint
– Build-out within the Directlink Technologies facility
By August 2010: Learned the 580 Phase-Out Would Need to Be a Fast Path, with a Target Decommission in March 2011

4 Realization and Fear
Realization: We Have Less Than 7 Months to:
– Select a Site and Either Complete the Engineering Design or Negotiate a Deal
– Build Out the Data Center
– Re-route the Dark Fiber Network
– Plan and Execute the Relocation of the DR Data Center
– Relocate the Operations Command Center, Which Was Also Located in the DR Facility
Initial Reaction:

5 Feasibility Assessment – Fear 101
Document Key Factors
– Long Lead Purchases
– Contract Negotiations
– Fiber Path Build-out
– Data Center Build-out
– New Command Center Build-outs (2)
– Relocation Events Integrated into the Master Release Calendar
– Cycle Time for Purchase Order Approvals
Obtain Executive Support
– Agree to a Fast Path Approval Process
– Agree to Master Calendar Adjustments
Publish High-Level Timeline
Parallel Sub-Projects for Critical Path Activities

6 High-Level Timeline

7 Site Selected: Directlink Technologies
Key Decision Factors:
– Facility infrastructure design that could serve as our production data center
– Our dark fiber vendor was already in the facility and could be added as a node on our ring with minimal physical build-out
– Negotiated a deal based on our requirements, not a "one model fits all" approach
– Enabled IBC to meet the aggressive timeline before the contract was officially signed (e.g., working through the attorney "dance" time)
Building 2

8 Critical Path – Parallel Activities
1. Facility build-out, including space design and configuration changes: installation of power distribution, cooling capacity, overhead cable tray, and the NOC area
2. Fiber network build-out to support the migration plan to transition to the new data center location without an outage
3. Acquisition and implementation of additional optical equipment to support the new fiber network node
4. Acquisition and implementation of "seed" network infrastructure equipment to allow the new data center to be an active node prior to relocation events
5. Installation of "seed" server cabinets and structured cabling within the new data center space
6. Relocation strategy development and event detail planning
7. Relocation of the primary Operations Command Center to another location (1901 Market St.) prior to completing the data center relocation
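
The deck treats these as parallel streams racing a fixed March 2011 decommission target. Purely as an illustration, and with durations and dependencies that are invented rather than taken from the project, a longest-path (critical path) check over streams like these could be sketched as follows.

```python
# Hypothetical scheduling sketch; every duration and dependency below is
# invented for illustration and does not come from the deck.
from functools import lru_cache

# task: (duration in weeks, prerequisites)
TASKS = {
    "facility build-out":        (12, []),
    "fiber network build-out":   (10, []),
    "optical equipment install": (4,  ["fiber network build-out"]),
    "seed network equipment":    (3,  ["facility build-out", "optical equipment install"]),
    "seed cabinets and cabling": (4,  ["facility build-out"]),
    "relocation planning":       (8,  []),
    "command center move":       (6,  ["relocation planning"]),
    "relocation events":         (5,  ["seed network equipment",
                                       "seed cabinets and cabling",
                                       "relocation planning"]),
}

@lru_cache(maxsize=None)
def finish_week(task: str) -> int:
    """Earliest finish: own duration plus the latest-finishing prerequisite."""
    duration, prereqs = TASKS[task]
    return duration + max((finish_week(p) for p in prereqs), default=0)

critical_end = max(TASKS, key=finish_week)
print(f"Longest chain ends with '{critical_end}' at week {finish_week(critical_end)}")
```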

9 Fiber Network Issue Identified
One Inter-Data Center Fiber Path Was Much Longer
– Invalidated the Previous Inter-Data Center Fibre Channel Configuration
– Required Conversion from Native FC to FC/IP for Storage Replication and LSAN Capabilities
Resulting in (just what we needed):
– Another Critical Path, Parallel Activity
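
To see why a longer path forces this, note that light in single-mode fiber takes roughly 5 µs per kilometer each way, and a native FC link needs enough buffer-to-buffer credits to keep full-size frames in flight over the whole span. The sketch below is only illustrative; the path lengths and the 8 Gbps link speed are assumptions, not figures from the deck.

```python
# Rough illustration (not from the deck) of why a longer inter-data-center
# fiber path can break a native Fibre Channel replication design.
import math

FIBER_US_PER_KM = 5.0   # ~5 microseconds per km one way in single-mode fiber

def round_trip_us(path_km: float) -> float:
    """Round-trip propagation delay over the fiber path, in microseconds."""
    return 2 * path_km * FIBER_US_PER_KM

def bb_credits_needed(path_km: float, link_gbps: float) -> int:
    """Rule-of-thumb buffer-to-buffer credits to keep a native FC link full:
    about one credit per 2 km at 1 Gbps, scaling linearly with link speed."""
    return math.ceil(path_km * link_gbps / 2)

# Hypothetical paths: the original ring route vs. a much longer re-route.
for label, km in [("original path", 30), ("re-routed path", 120)]:
    print(f"{label}: {km} km, RTT ~{round_trip_us(km):.0f} us, "
          f"~{bb_credits_needed(km, 8)} BB credits at 8 Gbps")

# Once the added round-trip latency and credit requirements exceed what the
# switches and synchronous replication can tolerate, tunneling FC over IP
# (FC/IP) for replication and LSAN traffic is the usual workaround, which is
# the conversion this slide describes.
```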

10 Project Results Summary
Command Center Reconfiguration
– Primary Command Center Relocated from 580 to IBC Headquarters at 1901 Market Street
– Active Secondary Command Center Built Out in the Directlink Facility

11 Project Results Summary
Data Center Preparation
– Approximately two miles of fiber cable
– Approximately one mile of copper cable (reduced from over 20 miles of copper installed at the 580 site)
– 1,200 feet of ceiling-mounted cable tray
– 1,850 fiber terminations
– 500 Ethernet connections
– Temporary routing of backbone fiber to support the cutover to Reading
– Installation of more than 40 new network devices supporting the fiber backbone and core network
– Rerouting of telecommunication circuits supporting redundant connections for the MPLS WAN and Extranet, Internet ISP, Bluesnet, IBC SONET, ASP services, and remote access
– Installation of 100 equipment cabinets (some new, some reused as they were emptied)
– Custom configuration of dual utility power feeds to two separate 750 kVA UPS systems
– Installation of four 300 kVA PDUs
– Installation of new 30-ton CRAH units
– Installation of 185+ electrical circuits to provide dual A/B power to all IT equipment
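
As a rough check of what those electrical figures could support: the sketch below assumes a 0.9 power factor and an 80% design ceiling, neither of which comes from the deck.

```python
# Back-of-the-envelope sizing check (not from the deck) using the figures on
# this slide: two 750 kVA UPS systems on separate utility feeds, four 300 kVA
# PDUs, and roughly 100 equipment cabinets.

UPS_KVA = 750          # each of the two UPS systems
POWER_FACTOR = 0.9     # assumed; not stated in the deck
DESIGN_CEILING = 0.8   # assumed derating so a single UPS never runs flat out
CABINETS = 100

# With dual A/B feeds (a 2N arrangement), the whole IT load must fit on ONE
# UPS so the surviving side can carry everything if the other is lost.
usable_kw = UPS_KVA * POWER_FACTOR * DESIGN_CEILING
print(f"Usable IT load under 2N: ~{usable_kw:.0f} kW")
print(f"Average budget per cabinet: ~{usable_kw / CABINETS:.1f} kW")

# The four 300 kVA PDUs (1,200 kVA of distribution) exceed a single UPS's
# 750 kVA output, so distribution is not the limiting factor in this sketch.
```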

12 Data Center Gallery (before and after photos)

13 September 2010

16 October 2010

18 November 2010

19 December 2010

20 NOC September 2010

21 NOC November 2010

23 NOC December 2010

24 NOC February 2011

25 1901 NOC September 2010

26 1901 NOC February 2011

27 Project Results Summary
Equipment Relocation
– 111 physical servers supporting redundant production, redundant QA, and standby DR
– 126 virtual servers supporting redundant production, redundant QA, and standby DR
– 39 information security appliances and network infrastructure devices, all redundant to the Bretz configurations
– z10 disaster recovery mainframe
– Mainframe disk configurations hosting replication of data from the primary site
– Mainframe virtual tape system and back-end ATL
– Affiliate AS/400 disaster recovery configuration, including mirrored storage and tape subsystem
– Informatics Teradata configuration supporting QA/Test/Dev and DR for the production environment
– A 16-cabinet AS/400 configuration, including mirrored storage and ATL, supporting QA/Test/Dev and DR for the production environment
– Distributed systems SAN storage configuration supporting both active HA systems and production data replication
– Distributed SAN infrastructure and ATL
– Fully engaged application test teams to verify system status post-move

28 Keys to Success
Partner-Type Relationship with Directlink
– Handshake deal to start facility construction tasks
– Willingness to work through configuration changes without red tape
Confidence in the Network Strategy
– Strategy Successfully Used for the Production Data Center Relocation
– Experience with the Target Design and Foundation Technology
Experienced Team to Design & Build the Data Center Layout and Supporting Cable Plant
– Layout to Optimize the Active Cable Plant and Facilitate Future Expansion
– Diligence in Documenting, Labeling, and Pre-Testing All Connections
Leverage Inventory/Configuration Management and Relocation Experience
– Use of the Existing Inventory Management Repository Containing Key Equipment Specifications, Including Power and Connection Details
– Development of Detailed Move Event Scripts and Supporting Staffing Plans
Strong Project Management Oversight Managing All Critical Path Activities
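
The deck doesn't show the inventory repository's schema; the sketch below is a hypothetical illustration of the kind of per-device record and move-event script that such a repository and the detailed move scripts might combine. All field names and values are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DeviceRecord:
    asset_tag: str
    hostname: str
    cabinet: str            # source cabinet / grid location
    rack_units: int
    power_feeds: List[str]  # e.g. ["A-PDU1-CKT12", "B-PDU3-CKT04"]
    connections: List[str]  # structured-cabling port IDs at both ends

@dataclass
class MoveEvent:
    event_name: str
    scheduled_window: str
    devices: List[DeviceRecord] = field(default_factory=list)
    staffing: List[str] = field(default_factory=list)

    def checklist(self) -> List[str]:
        """One shut-down / relocate / re-verify line per device, in script order."""
        return [f"Power down, relocate, and re-verify {d.hostname} "
                f"({d.asset_tag}) from {d.cabinet}" for d in self.devices]

# A single, entirely hypothetical relocation event:
event = MoveEvent("DR mainframe move", "Feb 2011 weekend window",
                  devices=[DeviceRecord("IBC-0001", "z10-dr", "580-R12", 42,
                                        ["A-PDU1-CKT12", "B-PDU3-CKT04"],
                                        ["FTP-101", "FTP-102"])])
print("\n".join(event.checklist()))
```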

29 Disaster Recovery Network Configuration
Flexibility to Provide:
– The Ability to Safely Test DR Capabilities in an Isolated DR Network
– Fast, Seamless Integration into the Production Network in the Event of a Primary Site Failure
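
The deck doesn't describe how this flexibility was implemented; the sketch below only models the two operating modes conceptually, with invented address ranges, to show the property the slide is after: DR tests cannot touch production, yet a declared failover presents the production address space from the DR site.

```python
# Conceptual model only; prefixes and mode names are invented for illustration.
PROD_PREFIXES = {"10.10.0.0/16"}      # hypothetical production networks
DR_TEST_PREFIXES = {"10.210.0.0/16"}  # hypothetical isolated DR-test copies

def dr_routing_view(mode: str) -> set:
    """What the DR site's network is allowed to reach in each operating mode."""
    if mode == "isolated-test":
        # DR systems come up against test copies only; production stays
        # unreachable, so a DR exercise cannot disturb the live environment.
        return set(DR_TEST_PREFIXES)
    if mode == "failover":
        # On a declared primary-site failure, the DR network is joined to the
        # production address space and takes over the production prefixes.
        return set(PROD_PREFIXES)
    raise ValueError(f"unknown mode: {mode}")

assert dr_routing_view("isolated-test").isdisjoint(PROD_PREFIXES)
assert dr_routing_view("failover") == PROD_PREFIXES
print("isolated test view:", dr_routing_view("isolated-test"))
print("failover view:", dr_routing_view("failover"))
```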
