The Protein Folding Problem David van der Spoel Dept. of Cell & Mol. Biology Uppsala, Sweden

Image: U.S. Department of Energy Human Genome Program

The protein folding problem

Molecular simulation
● Given the atomic coordinates of a set of molecules, compute the energy and forces according to a classical Hamiltonian.
● Integrate Newton's equations of motion with a time step of 1-2 fs.
● Repeat for ~10⁹ steps.
● Analyse the results.
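The loop above can be sketched in a few lines. This is a minimal illustration only (velocity-Verlet integration of a Lennard-Jones system with an all-pairs O(N²) force evaluation); production codes such as GROMACS use full force fields, neighbor lists, and cutoffs instead:

```python
import numpy as np

def lj_forces(x, eps=1.0, sigma=1.0):
    """Lennard-Jones forces and potential energy for atom positions x (N x 3).

    All-pairs evaluation for clarity; real MD codes use neighbor lists.
    """
    n = len(x)
    f = np.zeros_like(x)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = x[i] - x[j]
            r2 = np.dot(r, r)
            sr6 = (sigma**2 / r2) ** 3
            e += 4 * eps * (sr6**2 - sr6)
            # force on i from j: -dV/dr along the pair vector
            fij = 24 * eps * (2 * sr6**2 - sr6) / r2 * r
            f[i] += fij
            f[j] -= fij
    return f, e

def velocity_verlet(x, v, m, dt, nsteps):
    """Integrate Newton's equations of motion with velocity Verlet."""
    f, _ = lj_forces(x)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m      # half kick
        x += dt * v                # drift
        f, _ = lj_forces(x)        # recompute forces at new positions
        v += 0.5 * dt * f / m      # second half kick
    return x, v
```

With two atoms placed at the Lennard-Jones minimum separation (r = 2^(1/6) σ) the forces vanish and the pair energy is -ε, which is a convenient sanity check.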

Cost of folding simulations
● ~10⁵ atoms
● 100 interactions per atom
● 50 floating point operations (flops) per interaction
● 10⁹ time steps of 1-4 fs (yields 1-4 μs)
● 5 × 10¹⁷ flops in total
● On 50 processors at 2 GHz this means 5 × 10⁶ s ≈ two months
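The estimate works out as follows. The atom count and processor count are assumptions reconstructed to be consistent with the slide's stated result of 5 × 10⁶ s ≈ two months:

```python
# Back-of-the-envelope cost of a microsecond folding simulation.
# atoms and cpus are assumed values consistent with the 5e6 s result.
atoms = 1e5
interactions_per_atom = 100
flops_per_interaction = 50
steps = 1e9                 # 10^9 steps of 1-4 fs -> 1-4 microseconds

total_flops = atoms * interactions_per_atom * flops_per_interaction * steps
# 1e5 * 100 * 50 * 1e9 = 5e17 flops

cpus = 50
flops_per_cpu = 2e9         # assume ~1 flop per cycle at 2 GHz
seconds = total_flops / (cpus * flops_per_cpu)
print(f"{total_flops:.0e} flops, {seconds:.0e} s, {seconds / 86400:.0f} days")
```

At 5 × 10⁶ seconds, the run takes roughly 58 days, i.e. about two months of wall-clock time.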

Monolith / Linköping
● 400 Intel 2.2 GHz processors
● Fast network
● 1 GB RAM and 10 GB disk per CPU

GROMACS scaling benchmarks

Conclusions
● 32-bit processors are sufficient (Xeon/Opteron)
● Fast network (Scali/Myrinet)
● 512+ MB of memory
● Large disk (TB)
● Fast front-end
● Many processors