NDPSB Secondary Data Center Rack Design and Layout.


1 NDPSB Secondary Data Center Rack Design and Layout

2 Server Infrastructure Upgrade
- 3rd-generation Dell blade servers with redundant DRAC units for remote management
- Avocent technology to leverage out-of-band and remote management of servers and network equipment
- 16 physical servers in 10U of rack space
- VMware infrastructure to support the development environment and maintain a constant state of DR proofing and testing
- Unified storage for the current VMware and data storage infrastructure

3 [image-only slide]

4 [image-only slide]

5 Storage Infrastructure Upgrade
- Capacity increased by 10.5 TB raw at NDPSB
- Capacity increased at Municipal Hall through relocation of data to NDPSB (est. 500 GB)
- Unified and scalable solution for both data centers
- Ability to expand future services and maintain service levels through additional NetApp technologies
- Simple data mirroring that does not require 3rd-party tools or utilities
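To make the capacity figure concrete, here is a minimal Python sketch estimating usable space from the 10.5 TB raw number. The parity, filesystem-reserve, and spare overhead percentages are illustrative assumptions, not NetApp specifications for this deployment:

```python
# Rough usable-capacity estimate from the 10.5 TB raw figure.
# Overhead percentages below are illustrative assumptions, not
# vendor specifications for this build.

RAW_TB = 10.5

def usable_tb(raw_tb, parity_overhead=0.15, fs_reserve=0.10, spares=0.05):
    """Subtract assumed RAID parity, filesystem reserve, and hot-spare
    overhead from raw capacity to estimate usable space."""
    remaining = raw_tb
    for overhead in (parity_overhead, fs_reserve, spares):
        remaining *= (1 - overhead)
    return remaining

print(f"Raw: {RAW_TB} TB -> est. usable: {usable_tb(RAW_TB):.1f} TB")
```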

6 [image-only slide]

7 NDPSB Rack Layout
- Compact rack layout allowing for integrated cable management
- All server, management, network, and phone equipment stored in a single rack
- Allows for future expansion of backend storage capacity
- Allows for future network expansion: up to 10 Gbit between standard equipment, 20 Gbit to the blade array
- Allows for future IP phone equipment and patch panel integration
- Lights-out management capabilities

8 Design: Compact server layout
- Blades allow for up to 16 physical servers at this location
- Capability to restore tape backups during a DR situation
- Services: Active Directory Domain Services, DNS, and DHCP provided by a third domain controller
- ESX integrated with storage to allow quick deployment of services during a DR situation
- With the network upgrade, a redundant link to the Internet, including a FortiGate firewall and a Barracuda appliance

9 Network Design:
- Includes both the user network and the iSCSI (dev) network
- Blade array uses CX4 10GbE links to both reduce cabling and allow for high network speeds (10GbE, expandable to 20GbE per link)
- Integrated patch paneling so all equipment can be stored in the single rack
- Future expansion (after the network upgrade project) for a redundant Internet connection and Active Directory connectivity

10 Capabilities:
- Management equipment in place to allow full remote administration
- Digital KVM-over-IP to simplify cabling and allow KVM access over the web, internally or externally via Citrix
- MergePoint allowing direct access to service processors and DRAC units
- Cyclades ACS8 for direct serial-over-IP to network equipment and the storage array
- Integration allows for out-of-band management

11 Out-of-Band:
- Modem integrated into the Cyclades ACS8 allows for direct dial-in access
- Integrated Linux-based firewalling and two-factor authentication capabilities
- Used only when a network switch or the backbone is offline
Regular Management:
- Access via the DSView hub (at Municipal Hall) or the DSView spoke (at NDPSB)
- Full remote KVM, service processor, and serial administration capabilities

12 Cabling:
- Cat6 cabling between all equipment except the Dell M1000e blade array
- 10GbE CX4 cabling between the Dell M1000e blade array and the HP 5400 switch backbone
- Redundant network connectivity
- Integrated patch panel for the user network at NDPSB and the future IP phone upgrade
- Unified Ethernet cabling for all systems and management equipment, with distinctive color coding

13 UPS Backup:
- Liebert Nfinity UPS configured for 12 kVA / 8.4 kW power output
- UPS scalable to 20 kVA / 14 kW
Rack Power:
- 208/240 V power to all equipment
- Fully redundant power supplies where available
- Initial power draw will be lower than the planned capacity, allowing for future equipment growth and UPS expansion
- Power considerations for Police equipment and the phone system taken into account
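Both UPS figures on the slide imply the same 0.7 power factor (8.4 kW / 12 kVA and 14 kW / 20 kVA). A small Python sketch of that arithmetic, with a hypothetical initial load to illustrate the growth headroom:

```python
# Power-factor and headroom arithmetic from the slide's UPS figures:
# 12 kVA / 8.4 kW initially, scalable to 20 kVA / 14 kW.
# The assumed_load_kw value is a hypothetical example, not a measured draw.

def power_factor(kva, kw):
    return kw / kva

def headroom_kw(capacity_kw, load_kw):
    return capacity_kw - load_kw

initial_pf = power_factor(12, 8.4)   # 0.70
maximum_pf = power_factor(20, 14.0)  # 0.70

assumed_load_kw = 5.0  # hypothetical initial draw, below planned capacity
print(f"Power factor: {initial_pf:.2f} (initial) / {maximum_pf:.2f} (expanded)")
print(f"Headroom at 8.4 kW: {headroom_kw(8.4, assumed_load_kw):.1f} kW")
print(f"Headroom after expansion to 14 kW: {headroom_kw(14.0, assumed_load_kw):.1f} kW")
```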

14 NDPSB Secondary Data Center Systems Disaster Recovery Design

15 Types of Disasters
Major EOC disasters:
- Major fire (e.g. Burns Bog)
- Flood
- Major accident (e.g. plane crash)
- Regional crisis (e.g. earthquake, biological)
Major IS disasters:
- Hall Internet connectivity loss
- Power or major server failure
- North-South network connectivity loss
- Server data corruption or loss

16 Business Systems Priority

Business System        Business Priority
BES                    H
Citrix                 L
Class                  H
Delta Map              M
Email                  H
Maximo                 M
NetApp                 H
PeopleSoft Financial   H
PeopleSoft HR          H
Sharepoint             L
Scada                  TBD
Tempest                M/H
VMware                 H

Detailed priority scale dimensions: Business Continuity, Change Management, Data Corruption, Hardware, Infrastructure, Security, Software
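The same priorities can be expressed as data so that a restore order falls out mechanically; a minimal Python sketch, where the numeric weighting of the H/M/L rankings is an illustrative assumption:

```python
# The slide's priority table as data, sorted into a DR restore order.
# Rankings are copied from the slide; the numeric weights are an
# illustrative assumption, not part of the original design.

PRIORITY_WEIGHT = {"H": 0, "M/H": 1, "M": 2, "L": 3, "TBD": 4}

BUSINESS_SYSTEMS = {
    "BES": "H", "Citrix": "L", "Class": "H", "Delta Map": "M",
    "Email": "H", "Maximo": "M", "NetApp": "H",
    "PeopleSoft Financial": "H", "PeopleSoft HR": "H",
    "Sharepoint": "L", "Scada": "TBD", "Tempest": "M/H", "VMware": "H",
}

restore_order = sorted(BUSINESS_SYSTEMS,
                       key=lambda s: PRIORITY_WEIGHT[BUSINESS_SYSTEMS[s]])
for system in restore_order:
    print(f"{BUSINESS_SYSTEMS[system]:>4}  {system}")
```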

17 Present vs. Future
Present:
- Recovery Point of minimum 24 hours if the primary data center is lost; requires access to Firehall 1
- Recovery Time of minimum estimated 42 days if the primary data center is lost:
  - Equipment ordering and purchase (30 days)
  - Equipment setup for backup recovery (5 days)
  - Start of base system recovery and setup (7 days)
- Would result in the following systems being offline for a minimum of 42 days:
  - PeopleSoft Financial
  - PeopleSoft HRMS Payroll
  - Active Directory network
  - Exchange (mail) system and BES (BlackBerry) access
  - DeltaMap
  - Tempest and Maximo
  - The list goes on...

18 Present vs. Future
Future:
- Recovery Point of minimum 1 hour if the primary data center is lost
  - Does not require Firehall 1 access
  - Capable of future expansion to real-time recovery (0-hour RPO)
- Recovery Time of key systems estimated at 1 day if the primary data center is lost:
  - No equipment purchase required
  - No tape backup recovery required
  - Backup systems already operational, so minimal base system recovery and setup
- Would allow the following critical systems to be back online and functional within the 1-day RTO:
  - PeopleSoft Financial
  - PeopleSoft HRMS Payroll
  - Active Directory network
  - Exchange (mail) system and BES (BlackBerry) access
  - DeltaMap
  - Tempest and Maximo
- Capability to bring non-critical systems online in the virtualized DR environment for long-term business continuity
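The recovery-time arithmetic behind these two slides, as a small Python sketch; the phase durations and RPO/RTO figures are the slides' own estimates, and the timedelta framing is just for illustration:

```python
# Present vs. future recovery arithmetic, using the slides' own estimates.
from datetime import timedelta

present_rto_phases = {
    "purchase and order equipment": timedelta(days=30),
    "equipment setup for backup recovery": timedelta(days=5),
    "base system recovery and setup": timedelta(days=7),
}
present_rto = sum(present_rto_phases.values(), timedelta())  # 42 days
future_rto = timedelta(days=1)   # backup systems already operational

present_rpo = timedelta(hours=24)
future_rpo = timedelta(hours=1)  # expandable toward a 0-hour RPO

print(f"Present RTO: {present_rto.days} days, future RTO: {future_rto.days} day")
print(f"Key-system recovery improves by {present_rto / future_rto:.0f}x")
print(f"RPO improves from {present_rpo} to {future_rpo}")
```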

19 Key System Restoration
Critical services identified as Active Directory (domain, DNS, DHCP), DeltaMap, PeopleSoft HR/Finance, Exchange (e-mail and BES services), and FDM.
DR recovery proceeds in 4 phases:
- Phase 1 (active, live switchover): third Domain Controller, continuance of the Exchange secondary server, activation of the secondary Internet link to maintain WAN connectivity, and activation of the secondary GEO server and DeltaMap systems
- Phase 2: attachment of mirrored production databases to the development environment to restore PeopleSoft, Maximo, and Tempest
- Phase 3: attachment of SharePoint production to the development environment and restoration of network drive services
- Phase 4: all other services restored, including any tape restorations
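A sketch of the four phases as an ordered structure in Python; the service groupings are from the slide, while run_phase is a hypothetical hook standing in for the real failover and database-attach procedures:

```python
# The slide's four-phase restoration order as data. run_phase is a
# hypothetical placeholder, not part of the original design.

DR_PHASES = [
    ("Phase 1 (live switchover)",
     ["third Domain Controller", "Exchange secondary server",
      "secondary Internet link", "secondary GEO server / DeltaMap"]),
    ("Phase 2 (attach mirrored production DBs)",
     ["PeopleSoft", "Maximo", "Tempest"]),
    ("Phase 3 (attach SharePoint, restore shares)",
     ["SharePoint", "network drive services"]),
    ("Phase 4 (everything else)",
     ["remaining services", "tape restorations"]),
]

def run_phase(name, services):
    # Placeholder: a real runner would invoke the failover and
    # database-attach procedures and verify each service before
    # moving to the next phase.
    for svc in services:
        print(f"{name}: restoring {svc}")

for name, services in DR_PHASES:
    run_phase(name, services)
```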

20 Exchange 2007 Design
- Primary Exchange at Municipal Hall in a CCR (Cluster Continuous Replication) configuration
- Secondary Exchange at NDPSB in an SCR (Standby Continuous Replication) configuration
- Datastore design with DR priorities set (EOC, Directors, Management, and IS as priority)
- Log shipping and data mirroring resulting in a low Recovery Point Objective (RPO)
- SCR cluster configuration resulting in a low Recovery Time Objective (RTO)
- Redundancy of BES services for BlackBerry devices
- Leverages the future secondary Internet connection
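Why log shipping yields a low RPO, as a back-of-envelope Python sketch; the shipping-interval and replay-lag values are hypothetical examples, not measured figures from this deployment:

```python
# Worst-case data loss for a log-shipping replica is roughly the time
# since the last log was shipped and replayed. Interval values below
# are hypothetical examples.

def worst_case_rpo_minutes(ship_interval_min: float,
                           replay_lag_min: float = 0) -> float:
    """Worst-case data-loss window for a log-shipping replica."""
    return ship_interval_min + replay_lag_min

# e.g. logs shipped every 15 minutes with no replay lag -> 15 minute RPO
print(worst_case_rpo_minutes(15))
# a deliberate 30 minute replay lag widens the window to 45 minutes
print(worst_case_rpo_minutes(15, 30))
```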

21 Projected Exchange 2007 Design for DR coverage

22 Data Replication for key system recovery during a DR situation

23 Data Replication for key system recovery during a DR situation (continued)

