1 Return of the Large Data Center

2 Computing Trends Computing power is now cheap, power hungry, and hot. Supercomputers are within reach of all R1 universities. Data centers built in the mainframe age cannot handle new heat loads, power-distribution requirements, or floor loading.

3 History: Centralization is Returning Mainframes of the 1970s required special cooling. In the late 1970s, air-cooled computers moved into offices, and data centers were deemed "dead". In the late 1990s, network-based applications and the cost of maintenance staff prompted savings through aggregation. Today, necessary investments in power and cooling make large centers attractive. Forces are working both to support and to undermine the large data center.

4 Service Profiles The level of sophistication and cost containment will vary with the service delivered. Critical and Research services may require high-density, dedicated hardware. All Enterprise, School, and some Critical services should be virtual. On-demand, load-sensitive processor allocation should be limited to core services (Critical or Enterprise) to control costs.

5 Environmental Multi-Tiering Each data center should accommodate multiple service levels to match cost to needs. Inter-center load balancing should be limited to critical systems. Redundant power and cooling should be present to survive the loss of one system.

6 Logical Architecture Each data center should accommodate critical services, enterprise applications, school applications, and research resources. On-demand processor pooling is to be limited to enterprise and critical applications. Enterprise and school applications should run on virtual servers; critical services should do so wherever possible. Use a DMZ structure with an internal data center backbone to maximize performance and security.
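The tiering rules on slides 4–6 amount to a simple placement policy. A minimal sketch, assuming the four tier names from the deck; the function itself is hypothetical and not part of any real system described here:

```python
# Illustrative sketch of the placement policy described in the deck.
# Tier names (critical, enterprise, school, research) come from the slides;
# the function and its return keys are hypothetical.

def placement_policy(tier: str) -> dict:
    """Map a service tier to the hosting rules of the logical architecture."""
    if tier not in {"critical", "enterprise", "school", "research"}:
        raise ValueError(f"unknown tier: {tier}")
    return {
        # On-demand processor pooling is limited to enterprise and critical.
        "on_demand_pooling": tier in {"critical", "enterprise"},
        # Enterprise and school apps run on virtual servers; critical
        # services do so wherever possible; research may need dedicated HW.
        "virtualized": tier in {"critical", "enterprise", "school"},
        # Inter-center load balancing is limited to critical systems.
        "inter_center_load_balancing": tier == "critical",
    }

print(placement_policy("school"))
```

Encoding the rules as data rather than prose makes it easy to check that each tier gets exactly the cost level it needs, which is the point of environmental multi-tiering.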

7 Addressing Heat Issues

Strategy: Increase cooling capacity
- Improve AC airflow. Pros: ensure maximum value of current AC. Cons: cost to rearrange racks; yields limited improvements; cost for plenum flooring.
- Add additional AC equipment. Pros: stopgap that can postpone a larger investment. Cons: limited solution; requires more space and power itself; cost.
- Add water cooling systems. Pros: smaller footprint than AC; more efficient than AC. Cons: cost to build; cost to retrofit server racks; maintenance cost.

Strategy: Reduce number of heat sources
- Combine individual servers into virtual instances on a single server. Pros: reduce number of processors; free space for better AC cooling. Cons: cost; without proper engineering, may aggregate too much risk.
- Combine storage requirements into a pool of storage. Pros: more effective use of disk storage devices; reduce number of disk storage systems. Cons: aggregates risk; incremental cost to expand the storage system.
- Implement "On Demand" processor allocation. Pros: reduce number of processors. Cons: cost of management software and processor migration; reduces platform options; could increase processor density.

Strategy: Reduce density of heat sources
- Increase floor space in the data center. Pros: a single site is easier to manage. Cons: cost of new space, including AC, power, and increased staffing.
- Divide hardware between data centers. Pros: enhanced business continuity; cost avoidance in the existing data center.
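The "reduce number of heat sources" rows rest on a simple observation: nearly all the power a server draws is dissipated as heat the AC must remove. A back-of-envelope consolidation estimate; all wattage and consolidation figures below are illustrative assumptions, not numbers from the slides:

```python
# Back-of-envelope heat-load estimate for server consolidation.
# All wattage and ratio figures are illustrative assumptions.

def heat_load_kw(n_servers: int, watts_each: float) -> float:
    """Virtually all input power ends up as heat; return total in kW."""
    return n_servers * watts_each / 1000.0

before = heat_load_kw(100, 300)  # 100 standalone servers at ~300 W each
after = heat_load_kw(10, 800)    # consolidated 10:1 onto ~800 W hosts

print(f"before: {before:.1f} kW, after: {after:.1f} kW")
```

Fewer but hotter boxes: the total heat load drops sharply, but per-rack density rises, which is exactly the "could increase processor density" con in the table.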

8 Addressing Staff Issues

Strategy: Reduce complexity
- Standardize on hardware architecture. Pros: limited options reduce support effort and focus staff knowledge. Cons: eliminates some alternative software products reliant upon specific environments.
- Focus delivery of the operating system or application server environment.

Strategy: Increase automation
- Automate software patch management. Pros: focus staff time on improving service availability; speed new server provisioning. Cons: cost.
- Implement "On Demand" processor allocation. Cons: cost of management software and processor migration; reduces platform options; could increase processor density.

Strategy: Enable business-hours maintenance
- Divide hardware for critical services between data centers. Pros: reduce staff stress, provide better service, and deliver 7x24 availability. Cons: cost.
- Combine individual servers into virtual instances on a single server. Cons: cost; without proper engineering, may aggregate too much risk.

9 Addressing Hardware Cost Issues

Strategy: Use commodity products
- Standardize on hardware architecture. Pros: uniform hardware maximizes space utilization and minimizes incremental expansion costs. Cons: a one-vendor commitment may be risky; the initial cycle of hardware change could be costly.

Strategy: Virtualize processing
- Combine individual servers into virtual instances on a single server. Pros: reduce hardware costs. Cons: cost; without proper engineering, may aggregate too much risk.
- Implement "On Demand" processor allocation. Pros: re-use hardware across applications which have different peak demand periods. Cons: cost of the hardware pool for allocation between applications; cost of the software environment.

10 Common Approaches

Approach: Implement "On Demand" processor allocation
- Pros: Hardware cost: re-use hardware across applications which have different peak demand periods. Heat load: reduce number of processors. Staffing: focus staff time on improving service availability; speed new server provisioning.
- Cons: cost of management software and processor migration; reduces platform options; could increase processor density.

Approach: Combine individual servers into virtual instances on a single server
- Pros: Hardware cost: reduce hardware costs. Heat load: reduce number of processors; free space for better AC cooling. Staffing: permit business-hours maintenance and deliver 7x24 availability.
- Cons: cost; without proper engineering, may aggregate too much risk.

Approach: Divide hardware between data centers
- Pros: Heat load: reduced heat load avoids cost in the existing data center. Staffing: permit business-hours maintenance and deliver 7x24 availability.
- Cons: cost of new space, including AC, power, and increased staffing.

Approach: Standardize on hardware architecture
- Pros: Hardware cost: uniform hardware maximizes space utilization and minimizes incremental expansion costs. Staffing: limited options reduce support effort and focus staff knowledge.
- Cons: eliminates some alternative software products reliant upon specific environments; a one-vendor commitment may be risky; the initial cycle of hardware change could be costly.

