
1 A Scalable FPGA-based Multiprocessor for Molecular Dynamics Simulation
Arun Patel¹, Christopher A. Madill²,³, Manuel Saldaña¹, Christopher Comis¹, Régis Pomès²,³, Paul Chow¹
Presented by: Arun Patel (apatel@eecg.toronto.edu)
Connections 2006, The University of Toronto ECE Graduate Symposium, Toronto, Ontario, Canada, June 9th, 2006
1: Department of Electrical and Computer Engineering, University of Toronto
2: Department of Structural Biology and Biochemistry, The Hospital for Sick Children
3: Department of Biochemistry, University of Toronto

2 Introduction
– FPGAs can accelerate many computing tasks by up to 2 or 3 orders of magnitude
– Supercomputers and computing clusters have been designed to improve computing performance
– Our work focuses on developing a computing cluster based on a scalable network of FPGAs
– The initial design will be tailored for performing Molecular Dynamics simulations

3 Molecular Dynamics
– Combines empirical force calculations with Newton's equations of motion
– Predicts the time trajectory of small atomic systems
– Computationally demanding
1. Calculate interatomic forces
2. Calculate the net force
3. Integrate Newtonian equations of motion
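The three-step loop above is the core of any MD code. Below is a minimal, hedged sketch of one timestep in plain C++; the data structures and the simple Lennard-Jones pair force are illustrative assumptions, not the implementation described in this work.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

// Hypothetical per-atom state; the real data layout used in this work is not shown.
struct Atom {
    Vec3 r;   // position
    Vec3 v;   // velocity
    Vec3 f;   // accumulated net force
    double m; // mass
};

// Illustrative pair force: Lennard-Jones with unit parameters (epsilon = sigma = 1).
Vec3 pair_force(const Atom& a, const Atom& b) {
    double dx = a.r.x - b.r.x, dy = a.r.y - b.r.y, dz = a.r.z - b.r.z;
    double r2 = dx * dx + dy * dy + dz * dz;
    double inv6 = 1.0 / (r2 * r2 * r2);
    double mag = 24.0 * (2.0 * inv6 * inv6 - inv6) / r2;  // |F|/r for U = 4(r^-12 - r^-6)
    return {mag * dx, mag * dy, mag * dz};
}

void md_step(std::vector<Atom>& atoms, double dt) {
    // 1. Calculate interatomic forces and 2. accumulate the net force on each atom.
    for (auto& a : atoms) a.f = {0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < atoms.size(); ++i) {
        for (std::size_t j = i + 1; j < atoms.size(); ++j) {
            Vec3 fij = pair_force(atoms[i], atoms[j]);
            atoms[i].f.x += fij.x; atoms[i].f.y += fij.y; atoms[i].f.z += fij.z;
            atoms[j].f.x -= fij.x; atoms[j].f.y -= fij.y; atoms[j].f.z -= fij.z; // Newton's 3rd law
        }
    }
    // 3. Integrate Newton's equations of motion (simple explicit Euler for brevity).
    for (auto& a : atoms) {
        a.v.x += dt * a.f.x / a.m; a.v.y += dt * a.f.y / a.m; a.v.z += dt * a.f.z / a.m;
        a.r.x += dt * a.v.x;       a.r.y += dt * a.v.y;       a.r.z += dt * a.v.z;
    }
}
```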

7 Molecular Dynamics
U = sum of bonded terms (bonds, angles, dihedrals) + non-bonded terms (van der Waals, electrostatics)
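Written out, a typical empirical potential with those bonded and non-bonded contributions takes the standard textbook form below; the exact functional forms and parameters used in this work are an assumption here.

```latex
U = \sum_{\text{bonds}} K_b (b - b_0)^2
  + \sum_{\text{angles}} K_\theta (\theta - \theta_0)^2
  + \sum_{\text{dihedrals}} K_\phi \bigl[ 1 + \cos(n\phi - \delta) \bigr]
  + \sum_{i<j} 4\varepsilon_{ij} \left[ \left(\tfrac{\sigma_{ij}}{r_{ij}}\right)^{12}
                                      - \left(\tfrac{\sigma_{ij}}{r_{ij}}\right)^{6} \right]
  + \sum_{i<j} \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}}
```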

8 Why Molecular Dynamics?
1. Inherently parallelizable
2. Computationally demanding (30 CPU years)

9 Motivation for Architecture
The majority of hardware accelerators achieve ~10²-10³× improvement over software by:
– Pipelining a serially-executed algorithm, or
– Performing operations in parallel
Such techniques do not address large-scale computing applications (such as MD):
– Much greater speedups are required (10⁴-10⁵×)
– Not likely with a single hardware accelerator
Ideal solution for large-scale computing?
– Scalability of modern HPC platforms
– Performance of hardware acceleration

10 The "TMD" Machine
An investigation of an FPGA-based architecture:
– Designed for applications that exhibit a high compute-to-communication ratio
– Made possible by the integration of microprocessors and high-speed communication interfaces into modern FPGAs

11 Inter-Task Communication
Based on the Message Passing Interface (MPI):
– Popular message-passing standard for distributed applications
– Implementations available for virtually every HPC platform
TMD-MPI:
– Subset of the MPI standard developed for the TMD architecture
– Software library for tasks implemented on embedded microprocessors
– Hardware Message Passing Engine (MPE) for hardware computing tasks
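Because TMD-MPI implements a subset of the MPI standard, a task written against basic point-to-point calls can, in principle, be compiled unchanged for a workstation cluster or linked against the TMD-MPI library on an embedded processor. A minimal sketch using only standard MPI calls follows; whether every call below is in the TMD-MPI subset is an assumption.

```cpp
#include <mpi.h>

// Minimal point-to-point exchange. The same source could target a workstation
// (compiled with mpiCC against a full MPI) or an embedded processor (linked
// against TMD-MPI), assuming these calls are in the TMD-MPI subset.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double payload[3] = {0.0, 0.0, 0.0};
    if (rank == 0) {
        payload[0] = 1.0;
        MPI_Send(payload, 3, MPI_DOUBLE, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(payload, 3, MPI_DOUBLE, /*source=*/0, /*tag=*/0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```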

12 MD Software Implementation
Diagram: Atom Store and Force Engine tasks (compiled with mpiCC) exchange atomic positions (r) and forces (F) over the interconnection network.
Design Flow:
– Testing and validation
– Parallel design
– Software to hardware transition
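The Atom Store / Force Engine exchange in the diagram can be pictured as a request-reply pattern over MPI: positions out, forces back. The message tags, data layout, and helper names below are hypothetical; the slides do not specify the actual protocol.

```cpp
#include <mpi.h>
#include <vector>

// Hypothetical message tags; the real Atom Store / Force Engine protocol is not given.
enum Tags { TAG_POSITIONS = 1, TAG_FORCES = 2 };

// Atom Store side of one timestep: send positions r, receive the computed forces F.
void atom_store_step(std::vector<double>& r, std::vector<double>& f, int force_engine_rank) {
    MPI_Send(r.data(), static_cast<int>(r.size()), MPI_DOUBLE,
             force_engine_rank, TAG_POSITIONS, MPI_COMM_WORLD);
    MPI_Recv(f.data(), static_cast<int>(f.size()), MPI_DOUBLE,
             force_engine_rank, TAG_FORCES, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // ...then integrate the equations of motion using f and repeat next timestep.
}

// Force Engine side: receive positions, evaluate forces, send them back.
void force_engine_step(std::vector<double>& r, std::vector<double>& f, int atom_store_rank) {
    MPI_Recv(r.data(), static_cast<int>(r.size()), MPI_DOUBLE,
             atom_store_rank, TAG_POSITIONS, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    // evaluate_forces(r, f);  // hypothetical force evaluation, e.g. the loop sketched earlier
    MPI_Send(f.data(), static_cast<int>(f.size()), MPI_DOUBLE,
             atom_store_rank, TAG_FORCES, MPI_COMM_WORLD);
}
```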

13 Current Work
Replace software processes with hardware computing engines.
Diagram: on an XC2VP100 FPGA, Force Engine and Atom Store tasks run as software on PPC-405 processors (compiled with ppc-g++ and TMD-MPI); a Force Engine is translated from C++ to HDL, paired with the TMD-MPE, and synthesized into a hardware engine.

14 Acknowledgements
SOCRN
TMD Group (including past members): David Chui, Christopher Comis, Sam Lee, Dr. Paul Chow, Andrew House, Daniel Nunes, Manuel Saldaña, Emanuel Ramalho, Dr. Régis Pomès, Christopher Madill, Arun Patel, Lesley Shannon

17 Large-Scale Computing Solutions
Class 1 Machines:
– Supercomputers or clusters of workstations
– ~10-10⁵ interconnected CPUs
Class 2 Machines:
– Hybrid network of CPU and FPGA hardware
– FPGA acts as an external co-processor to the CPU
– Programming model still evolving
Class 3 Machines:
– Network of FPGA-based computing nodes
– Recent area of academic and industrial focus

20 TMD Communication Infrastructure
Tier 1: Intra-FPGA Communication
– Point-to-point FIFOs are used as communication channels
– Asynchronous FIFOs isolate clock domains
– Application-specific network topologies can be defined
Tier 2: Inter-FPGA Communication
– Multi-gigabit serial transceivers used for inter-FPGA communication
– Fully-interconnected network topology using 2N*(N-1) pairs of traces
Tier 3: Inter-Cluster Communication
– Commercially-available switches interconnect cluster PCBs
– Built-in features for large-scale computing: fault tolerance, scalability
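As a worked example of the Tier 2 figure above: fully interconnecting N = 4 FPGAs on a board requires 2*4*(4-1) = 24 pairs of traces, while N = 8 requires 2*8*7 = 112. This quadratic growth helps explain why the full interconnect is confined to a single PCB at Tier 2, while Tier 3 scales across boards with commercial switches.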

21 TMD "Computing Tasks" (1/2)
Computing Tasks:
– Applications are defined as a collection of computing tasks
– Tasks communicate by passing messages
Task Implementation Flexibility:
– Software processes executing on embedded microprocessors
– Dedicated hardware computing engines
Diagram: a task can map to a hardware computing engine or an embedded microprocessor (Class 3), or to a processor on a CPU node (Class 1).

22 TMD "Computing Tasks" (2/2)
Computing Task Granularity:
– Tasks can vary in size and complexity
– Not restricted to one task per FPGA
Diagram: tasks A-M mapped across multiple FPGAs, with several tasks sharing a single FPGA.

26 TMD-MPI Software Implementation
Layer stack (top to bottom): Application, MPI Application Interface, Point-to-Point MPI Functions, Send/Receive Implementation, FSL Hardware Interface, Hardware.
Layer 4 (MPI Interface): all MPI functions implemented in TMD-MPI that are available to the application.
Layer 3 (Collective Operations): barrier synchronization, data gathering and message broadcasts.
Layer 2 (Communication Primitives): MPI_Send and MPI_Recv methods are used to transmit data between processes.
Layer 1 (Hardware Interface): low-level methods to communicate with FSLs for both on-chip and off-chip communication.
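To illustrate how the layers might compose, here is a hedged sketch in C++: a Layer 3 broadcast built from a Layer 2 send, which frames data and pushes words through a Layer 1 FSL write. All function names (tmd_fsl_write, tmd_send, tmd_bcast) and the framing format are illustrative assumptions, not TMD-MPI's actual API.

```cpp
#include <cstdio>

// Layer 1: hardware interface. On the real system this would be an FSL write
// instruction or register access; it is stubbed with a printf here for illustration.
void tmd_fsl_write(int channel, unsigned int word) {
    std::printf("FSL[%d] <- 0x%08x\n", channel, word);
}

// Layer 2: communication primitive. Frame a buffer (hypothetical tag + length header)
// and stream it over the FSL channel associated with the destination rank.
void tmd_send(const unsigned int* buf, int count, int dest_rank, int tag) {
    tmd_fsl_write(dest_rank, static_cast<unsigned int>(tag));
    tmd_fsl_write(dest_rank, static_cast<unsigned int>(count));
    for (int i = 0; i < count; ++i) {
        tmd_fsl_write(dest_rank, buf[i]);
    }
}

// Layer 3: collective operation. A broadcast built naively from point-to-point sends;
// non-root ranks would call a matching receive (omitted for brevity).
void tmd_bcast(const unsigned int* buf, int count, int root, int my_rank, int num_ranks) {
    if (my_rank != root) return;
    for (int r = 0; r < num_ranks; ++r) {
        if (r != root) tmd_send(buf, count, r, /*tag=*/0);
    }
}

// Layer 4: the MPI-style interface seen by the application (MPI_Send, MPI_Bcast, ...)
// would be thin wrappers that validate arguments and delegate to the layers above.
```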

30 TMD Application Design Flow
Step 1: Application Prototyping
– Software prototype of the application is developed
– Profiling identifies compute-intensive routines
Step 2: Application Refinement
– Partitioning into tasks communicating using MPI
– Each task emulates a computing engine
– Communication patterns analyzed to determine the network topology
Step 3: TMD Prototyping
– Tasks are ported to soft-processors on the TMD
– Software refined to utilize the TMD-MPI library
– On-chip communication network verified
Step 4: TMD Optimization
– Compute-intensive tasks replaced with hardware engines
– MPE handles communication for hardware engines

31 Future Work – Phase 2
TMD Version 2 Prototype

32 Future Work – Phase 3
The final TMD architecture will contain a hierarchical network of FPGA chips.

