
1 Tackling I/O Issues
David Race, 16 March 2010
www.openfabrics.org

2 Agenda
– The Challenge Today
– Customer Needs
– Solving Customer Needs
– The Appro Approach
– The Benefits
– Solution Summary

3 The issue today …
Supercomputers based on cluster architecture combine highly scalable compute nodes, fast interconnects, an operating system, and a range of programming and software tools with massive storage for processing and visualizing very large scientific data sets. Moore's Law in processors is not translating into storage speed: there is plenty of capacity, but not enough bandwidth. The cluster processors sit in a wait state while data is written to disk, costing as much as 20% of the productivity that an application could otherwise use. This is the I/O bottleneck.
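To make that wait state concrete, here is a minimal, hypothetical sketch (not part of the presentation; the file names, checkpoint size, and amount of compute per step are arbitrary assumptions). It times a compute loop interrupted by synchronous checkpoint writes and reports how much wall time goes to I/O:

```python
# Illustrative only: measure the share of wall time a compute loop loses to
# blocking checkpoint writes. Sizes and step counts are made-up assumptions.
import os
import time

CHECKPOINT_BYTES = 64 * 1024 * 1024   # assumed 64 MB of state written per step
STEPS = 4

def compute_step():
    # Stand-in for real numerical work on the compute node.
    total = 0
    for i in range(5_000_000):
        total += i * i
    return total

compute_time = 0.0
io_time = 0.0
payload = os.urandom(CHECKPOINT_BYTES)

for step in range(STEPS):
    t0 = time.perf_counter()
    compute_step()
    compute_time += time.perf_counter() - t0

    t0 = time.perf_counter()
    with open(f"checkpoint_{step}.dat", "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())          # force data to disk, as a checkpoint would
    io_time += time.perf_counter() - t0

wall = compute_time + io_time
print(f"compute {compute_time:.1f}s, I/O wait {io_time:.1f}s "
      f"({100 * io_time / wall:.0f}% of wall time spent waiting on I/O)")
```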

4 The Reason
Reduced utilization of the processors' compute cycles impacts application and database reliability and performance.
Data performance gap (access latency by level of the hierarchy):
– Registers: 1 cycle
– Cache: 10 cycles
– Memory: 100 cycles
– Storage: 10,000 cycles
Current solutions don't leverage Moore's Law to provide ongoing bandwidth improvements.

5 User Needs
Users require massive compute and data-handling capabilities to conduct their seismic and data modeling analysis.
– Speed and accuracy in data usage are critical
– Well-recognized and acute I/O issues in many application areas
– Under budget constraints and grant limitations
– Physical space and power limitations
– Strong supporters of standards
– Huge appetite for processing power
– Use large data sets for modeling and simulation
– "Time to results" is critical
– Under data center limitations where industry-accepted hardware, software, and interface protocol standards are required

6 The Appro Approach
The goal: extract the maximum amount of available compute processing by reducing or eliminating the I/O bottleneck, significantly boosting HPC end-user application performance. Offer a standards-based, integrated HPC architecture optimized for performance, reliability, and scalability that is non-disruptive to the HPC end user.
Next-generation solution: Appro's next-generation supercomputer solution combines a robust storage file system with an I/O Engine software technology layer, providing an innovative way to solve the I/O bottleneck and deliver sustained application performance.

7 Solving Needs
Computation improvements through superior I/O
– Improved file server usage
– I/O wait-time reduction
– I/O hiding (see the sketch below)
– Improved scalability and reliability
Application programming improvements
– Huge shared memory spanning multiple I/O channels for 100+ TB of reliable cache
– Reliable memory paradigms
– Scalable ISV environments
Benefits
– Size (terabytes) and speed (GB/s) are sized based on cache and storage
– 100% data reliability
– No client modifications
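"I/O hiding" above refers to the general technique of overlapping computation with data movement. The following is a minimal sketch of that idea, assuming a single background writer thread; it illustrates the concept only and is not Appro's I/O Engine:

```python
# Sketch of I/O hiding: checkpoint writes are handed to a background thread
# so the compute loop does not block on the disk. Conceptual only; not the
# actual I/O Engine described in the slides.
import queue
import threading

write_queue = queue.Queue(maxsize=4)   # bounded so memory use stays in check

def writer():
    while True:
        item = write_queue.get()
        if item is None:               # sentinel: shut the writer down
            break
        path, data = item
        with open(path, "wb") as f:    # slow write happens off the compute path
            f.write(data)
        write_queue.task_done()

threading.Thread(target=writer, daemon=True).start()

for step in range(8):
    state = bytes(1024 * 1024)         # stand-in for one step's simulation state
    write_queue.put((f"step_{step}.dat", state))   # returns quickly
    # ... the next compute step runs while the previous state is still being written ...

write_queue.join()                     # drain any writes still in flight
write_queue.put(None)                  # stop the writer thread
```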

8 The Appro Advantage
– Leverages Moore's Law for I/O bandwidth
– Balances memory and SSD for peak performance (see the sketch below)
– Uses rotating disks for capacity
– Shares data across I/O channels
– Reduces dependence on backend bandwidth for peak performance
– Makes data available to multiple clients without data replication
Reliability
– No single point of failure
– Data includes error correction for inexpensive cache and backend storage
Usability
– Clients can use the I/O channel without modification
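The memory-plus-SSD balancing act could look roughly like the following hypothetical sketch: the hottest blocks stay in RAM, colder blocks spill to a fast local device, and anything not found in either tier falls back to the capacity tier on rotating disk. The class name, paths, and sizes are all assumptions made for illustration:

```python
# Hypothetical two-tier cache: RAM for the hottest blocks, a fast local device
# for spill-over, rotating disk (handled by the caller) for capacity.
import collections
import os

class TwoTierCache:
    def __init__(self, ram_blocks=64, ssd_dir="/tmp/ssd_tier"):
        self.ram = collections.OrderedDict()   # block_id -> bytes, kept in LRU order
        self.ram_blocks = ram_blocks
        self.ssd_dir = ssd_dir
        os.makedirs(ssd_dir, exist_ok=True)

    def _ssd_path(self, block_id):
        return os.path.join(self.ssd_dir, f"{block_id}.blk")

    def put(self, block_id, data):
        self.ram[block_id] = data
        self.ram.move_to_end(block_id)
        if len(self.ram) > self.ram_blocks:    # RAM full: demote the coldest block
            old_id, old_data = self.ram.popitem(last=False)
            with open(self._ssd_path(old_id), "wb") as f:
                f.write(old_data)

    def get(self, block_id):
        if block_id in self.ram:               # RAM hit: fastest path
            self.ram.move_to_end(block_id)
            return self.ram[block_id]
        path = self._ssd_path(block_id)
        if os.path.exists(path):               # SSD hit: slower, still far faster than disk
            with open(path, "rb") as f:
                data = f.read()
            self.put(block_id, data)           # promote back into RAM
            return data
        return None                            # miss: caller reads from the capacity tier
```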

9 The Benefits
Deliver a solution that provides dramatic computation and application programming improvements.
Computational improvements through superior I/O
– Compute server usage
– Scalability
– Reliability
– Green
– I/O wait-time reduction
Application programming improvements
– Huge shared cache
– Reliable memory paradigms
– Scalable ISV environments

10 Solution Summary
Extreme performance
– Dramatic performance improvements in high performance computing modeling and simulation applications
– Performance scales linearly with increases in data volume
– Standards-based NFS frontend; works equally well with Linux, Unix, and Windows
– Works transparently with existing applications, with no change in application management
Cost effective
– Backend file system is scaled to average performance; the cache manages the peak load (see the sizing sketch below)
Value
– Takes advantage of the massive availability and scalability of non-proprietary hardware
– Delivers higher performance than similarly configured systems, providing budget relief, increased performance per dollar, and reduced energy consumption and footprint
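The "backend scaled to average, cache manages the peak" point can be checked with simple back-of-the-envelope arithmetic. The numbers below are invented for illustration, not taken from the presentation:

```python
# Made-up sizing example: the cache must absorb whatever the backend cannot
# drain during a checkpoint burst, then drain to the backend between bursts.
peak_write_rate_gbs = 40.0    # assumed burst rate from the compute nodes, GB/s
backend_rate_gbs = 5.0        # assumed sustained rate of the backend file system, GB/s
burst_seconds = 120           # assumed length of one checkpoint burst

burst_volume_gb = peak_write_rate_gbs * burst_seconds
drained_during_burst_gb = backend_rate_gbs * burst_seconds
cache_needed_gb = burst_volume_gb - drained_during_burst_gb
drain_time_s = cache_needed_gb / backend_rate_gbs

print(f"burst volume:   {burst_volume_gb:.0f} GB")
print(f"cache required: {cache_needed_gb:.0f} GB")
print(f"drain time:     {drain_time_s:.0f} s before the next burst can start")
```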

