High Performance Molecular Dynamics in Cloud Infrastructure with SR-IOV and GPUDirect
Andrew J. Younge*, John Paul Walters+, Geoffrey C. Fox*

* School of Informatics & Computing, Indiana University, 901 E. 10th St., Bloomington, IN, U.S.A.
+ Information Sciences Institute, University of Southern California, 3811 North Fairfax Drive, Suite 200, Arlington, VA, U.S.A.

Introduction

At present we stand at the inevitable intersection of High Performance Computing (HPC) and Clouds. Platform tools such as Hadoop and MapReduce, among others, have already percolated into data-intensive computing within HPC [1]. At the same time, there are efforts to support traditional HPC-centric scientific computing applications on virtualized Cloud infrastructure. The case for supporting parallel computation on Cloud infrastructure is bounded only by the advantages of Cloud computing itself [2]: for users, these include dynamic scalability, specialized operating environments, simple management interfaces, fault tolerance, and enhanced quality of service, to name a few. The growing importance of supporting advanced scientific computing on cloud infrastructure can be seen in a variety of new efforts, including the NSF-funded XSEDE Comet resource at SDSC [3].

There remains, however, a lingering notion that the virtualization used in today's Cloud infrastructure is inherently inefficient. Historically, Cloud infrastructure has also done little to provide the advanced hardware capabilities that have become almost mandatory in supercomputers today, most notably advanced GPUs and high-speed, low-latency interconnects. These notions have hindered the use of virtualized environments for parallel computation, where performance must be paramount. With recent advances in hypervisor performance [4], coupled with the newfound availability within virtual machines of HPC hardware analogous to that of the most powerful supercomputers used today, we can see the formation of a High Performance Cloud infrastructure. While our previous advances in this area have focused on single-node improvements, it is now imperative to ensure that real-world applications can also operate at scale. Furthermore, tight integration into an open-source Cloud infrastructure framework such as OpenStack becomes a critical next step.

Focus Areas

Hypervisor Performance: Virtualization can operate with near-native performance.

I/O Virtualization: Leverage VT-d/IOMMU extensions to pass PCI-based hardware directly to a guest VM.

GPUs and Accelerators: Utilize PCI passthrough to provide GPUs to VMs. Many hypervisors are now able to support GPU passthrough; we use KVM for best performance (a configuration sketch follows below).

High Speed Interconnects: Using SR-IOV, we can create multiple Virtual Functions (VFs) from a single PCI device, each assigned directly to a VM. We use Mellanox ConnectX-3 VPI InfiniBand; QDR/FDR InfiniBand is now possible within Cloud IaaS (see the sketch below).

OpenStack Integration: Integrate these virtualization advances into the OpenStack Cloud IaaS. A prototype is available, and some features are available today (see the sketch below).

Benchmarks

Two GPU-accelerated molecular dynamics codes, LAMMPS [8] and HOOMD-Blue [9], are used as benchmarks, with virtualized runs compared against native execution. Illustrative configuration and launch sketches follow.
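The poster itself contains no configuration code. As an illustration only, the following minimal Python sketch shows one way a GPU can be assigned to a KVM guest via PCI passthrough using the libvirt bindings; the PCI address (0000:84:00.0) and the guest name ("hpc-vm0") are hypothetical placeholders rather than values from this work.

    # Sketch: PCI passthrough of a GPU to a KVM guest with python-libvirt.
    # Assumes VT-d/IOMMU is enabled on the host; managed='yes' lets libvirt
    # detach the device from its host driver before handing it to the guest.
    import libvirt

    GPU_HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("qemu:///system")      # local KVM/QEMU hypervisor
    dom = conn.lookupByName("hpc-vm0")         # guest VM that should receive the GPU
    # Persist the device in the guest definition; it is presented on next boot.
    dom.attachDeviceFlags(GPU_HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()

Once attached, the GPU appears to the guest as an ordinary PCI device, so CUDA and OpenCL stacks can be installed inside the VM much as on bare metal [5].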
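Similarly, the next sketch shows how SR-IOV Virtual Functions might be created on the host, assuming a kernel that exposes the generic sriov_numvfs sysfs interface; the physical function's PCI address and the VF count are hypothetical, and ConnectX-3 deployments may instead enable SR-IOV through firmware settings and mlx4_core module options.

    # Sketch: create SR-IOV Virtual Functions (VFs) from a physical function (PF).
    # Each resulting VF can be passed to a guest VM with the same PCI-passthrough
    # mechanism shown above.
    from pathlib import Path

    pf = Path("/sys/bus/pci/devices/0000:05:00.0")          # PF address (placeholder)

    supported = int((pf / "sriov_totalvfs").read_text())    # VFs the device supports
    requested = min(8, supported)                           # ask for up to 8 VFs

    (pf / "sriov_numvfs").write_text(str(requested))        # instantiate the VFs
    print(f"created {requested} virtual functions on {pf.name}")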
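For the OpenStack side, the sketch below shows how passthrough devices are commonly exposed to users through Nova's PCI alias mechanism and flavor extra specs; it is not the prototype described on the poster. The credentials, alias names, and flavor parameters are placeholders, and the compute nodes' nova.conf must additionally whitelist the devices and define matching PCI aliases.

    # Sketch: define a flavor whose instances request a GPU and an InfiniBand VF.
    from keystoneauth1 import loading, session
    from novaclient import client

    # Authenticate against Keystone (all credentials are placeholders).
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(auth_url="http://controller:5000/v3",
                                    username="admin", password="secret",
                                    project_name="hpc",
                                    user_domain_name="Default",
                                    project_domain_name="Default")
    nova = client.Client("2.1", session=session.Session(auth=auth))

    # Assumes nova.conf on the compute hosts defines PCI aliases named
    # "k20_gpu" and "mlx_vf" for the GPU and the InfiniBand VF, respectively.
    flavor = nova.flavors.create(name="hpc.gpu.ib", ram=65536, vcpus=16, disk=40)
    flavor.set_keys({"pci_passthrough:alias": "k20_gpu:1,mlx_vf:1"})

Instances booted with such a flavor are scheduled onto hosts with free devices of the requested types.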
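Finally, an illustrative launch of the LAMMPS Lennard-Jones benchmark across the virtual cluster using MPI and the LAMMPS GPU package [8]; the executable name, host file, rank count, GPUs per node, and input deck are placeholders rather than the exact configuration behind the reported numbers.

    # Sketch: run a GPU-accelerated LAMMPS benchmark over MPI inside the VMs.
    import subprocess

    cmd = [
        "mpirun", "-np", "32", "--hostfile", "vm_hosts",  # ranks spread across guest VMs
        "lmp",                                            # LAMMPS executable
        "-sf", "gpu",                                     # apply the gpu suffix to styles
        "-pk", "gpu", "1",                                # GPU package, one GPU per node
        "-in", "in.lj",                                   # Lennard-Jones input script
    ]
    subprocess.run(cmd, check=True)

HOOMD-Blue benchmarks are driven in a similar fashion through its GPU-enabled Python scripts.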
Results

- Virtualized performance for both applications is near native: 96.7% and 99.3% efficiency for the LAMMPS LJ 2048k and RHODO 512k simulations, and 98.5% efficiency for the HOOMD-Blue LJ 256k simulation.
- The new GPUDirect RDMA features are supported in the virtualized system.

Conclusion

- Historically, running advanced scientific applications on virtualized infrastructure has been limited by both performance and the availability of advanced hardware.
- Recent advancements allow both GPUs and the InfiniBand fabric to be leveraged directly within VMs.
- LAMMPS and HOOMD-Blue represent molecular dynamics tools commonly used on the most powerful supercomputers.
- Large-scale virtualized Cloud infrastructure can now support many of the same advanced scientific computations that commonly run on today's supercomputers.

References

[1] S. Jha, J. Qiu, A. Luckow, P. K. Mantha, and G. C. Fox, "A tale of two data-intensive paradigms: Applications, abstractions, and architectures," in Proceedings of the 3rd International Congress on Big Data (BigData 2014), 2014.
[2] M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, no. 4, pp. 50–58, Apr. 2010.
[3] M. Norman and R. Moore, "NSF awards $12 million to SDSC to deploy Comet supercomputer," Web page.
[4] A. J. Younge, R. Henschel, J. T. Brown, G. von Laszewski, J. Qiu, and G. C. Fox, "Analysis of Virtualization Technologies for High Performance Computing Environments," in Proceedings of the 4th International Conference on Cloud Computing (CLOUD 2011), Washington, DC: IEEE, July 2011.
[5] J. P. Walters, A. J. Younge, D.-I. Kang, K.-T. Yao, M. Kang, S. P. Crago, and G. C. Fox, "GPU-Passthrough Performance: A Comparison of KVM, Xen, VMWare ESXi, and LXC for CUDA and OpenCL Applications," in Proceedings of the 7th IEEE International Conference on Cloud Computing (CLOUD 2014), Anchorage, AK: IEEE, 2014.
[6] J. Jose, M. Li, X. Lu, K. C. Kandalla, M. D. Arnold, and D. K. Panda, "SR-IOV support for virtualization on InfiniBand clusters: Early experience," in Cluster Computing and the Grid, IEEE International Symposium on, 2013, pp. 385–392.
[7] M. Musleh, V. Pai, J. P. Walters, A. J. Younge, and S. P. Crago, "Bridging the Virtualization Performance Gap for HPC using SR-IOV for InfiniBand," in Proceedings of the 7th IEEE International Conference on Cloud Computing (CLOUD 2014), Anchorage, AK: IEEE, 2014.
[8] W. M. Brown, "GPU acceleration in LAMMPS."
[9] J. Anderson, A. Keys, C. Phillips, T. Dac Nguyen, and S. Glotzer, "HOOMD-Blue, general-purpose many-body dynamics on the GPU," in APS Meeting Abstracts, vol. 1, 2010.