Simulation of Anomalous Diffusion in Parallel
Karin Leiderman
Final Project – December 3, 2003
Math 471, Dr. Tim Warburton

Motivation 1: FRAP
Monitoring the fluorescence before photobleaching.
Monitoring the fluorescence after photobleaching.
Monitoring the recovery of fluorescence after photobleaching.
This is a measurement of the "diffusional mobility", usually called lateral mobility since this experiment is almost always done in a planar lipid bilayer.

Motivation 2: Single Particle Tracking

Decide on a probability distribution function:
– The probability of a large jump is low and the probability of a small jump is high.
– Create a function that returns the jump size when given a random number between 0 and 1.
x = jump size:
P(x = 0.25) =
P(x = 0.125) =
P(x = ) =
P(x = ) =
P(x = ) = 0.125
P(x = ) = 0.25
P(x = ) = 0.5
Sum = 1
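One way to implement such a function is inverse-transform sampling: partition [0,1) into intervals whose widths are the probabilities, and return the jump size of the interval the random number lands in. A minimal C++ sketch, using a hypothetical dyadic distribution in place of the table above (the concrete sizes and probabilities here are placeholders, not the project's values):

#include <cstdlib>

// Inverse-transform sampling: walk the cumulative probabilities and
// return the jump size whose interval contains u.
// These sizes/probabilities are illustrative placeholders that sum to 1.
double jumpSize(double u){
  static const double prob[5] = {0.5, 0.25, 0.125, 0.0625, 0.0625};
  static const double size[5] = {0.015625, 0.03125, 0.0625, 0.125, 0.25};
  double cumulative = 0.0;
  for(int i = 0; i < 5; ++i){
    cumulative += prob[i];
    if(u < cumulative) return size[i];  // small jumps are the most likely
  }
  return size[4];                       // guard against rounding when u is near 1
}

// Example: double dx = jumpSize(rand()/(RAND_MAX + 1.0));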

Pseudo Code:
Create a class 'Particle'
– Holds the x and y coordinates, status, type, and label
– Coordinates: x in [0,2n], y in [0,n], n = sqrt(Nprocs)
– Status = 0 if the particle will stay on its processor, 1 if the particle will leave
– Type = 0 if the particle originated outside the photobleaching region, 1 if it originated inside it
– Label = particle number
Generate all the particles – an equal number on each processor
For each time step {
– Run the boundaryCheck function: how many particles are within the maximum jump size of the boundary? In other words, what is the maximum number of particles that can leave at the next time step?
– Generate the new locations
– Check how many particles are in the photobleaching region
– Run the escapeCheck function: how many particles will actually leave the processor?
– Now we can calculate how many particles will come and go from each processor
}
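A minimal C++ sketch of the boundaryCheck step, assuming each processor owns a rectangular piece of the domain with limits xlo/xhi/ylo/yhi and knows the maximum jump size (the parameter names and the exact test are assumptions; the Particle class matches the later slide):

#include <cstddef>
#include <vector>

class Particle{
public:
  double position_x;
  double position_y;
  int type;
  int label;
  int status;
};

// boundaryCheck: count the particles lying within max_jump of this
// processor's subdomain boundary, i.e. the maximum number of particles
// that could leave at the next time step.
int boundaryCheck(const std::vector<Particle>& p, double max_jump,
                  double xlo, double xhi, double ylo, double yhi){
  int count = 0;
  for(std::size_t r = 0; r < p.size(); ++r){
    if(p[r].position_x - xlo < max_jump || xhi - p[r].position_x < max_jump ||
       p[r].position_y - ylo < max_jump || yhi - p[r].position_y < max_jump)
      ++count;
  }
  return count;
}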

Pseudo Code continued…
For each processor (except self):
– As in the sparse matrix-vector multiplication scheme, we need to tell each processor how many particles it will receive:
    MPI_Isend this number
    MPI_Wait for the send to complete
– Now, send the particles… but this is more difficult than it seems.
– Each vector of particles is of type Particle, so in order to send it we must create our own MPI data type:

class Particle{
  public:
    double position_x;
    double position_y;
    int type;
    int label;
    int status;
};

MPI_Datatype Particles;
MPI_Datatype Type[5] = {MPI_DOUBLE, MPI_DOUBLE, MPI_INT, MPI_INT, MPI_INT};
int blocklen[5] = {1, 1, 1, 1, 1};
MPI_Aint disp[5];
MPI_Aint base;

MPI_Address(&escapes[0][0].position_x, disp);
MPI_Address(&escapes[0][0].position_y, disp+1);
MPI_Address(&escapes[0][0].type,       disp+2);
MPI_Address(&escapes[0][0].label,      disp+3);
MPI_Address(&escapes[0][0].status,     disp+4);

base = disp[0];
for(int i = 0; i < 5; i++){
  disp[i] -= base;
}

MPI_Type_struct(5, blocklen, disp, Type, &Particles);
MPI_Type_commit(&Particles);
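With the Particles type committed, the exchange itself might look like the sketch below; escapes[p] (the outgoing particles destined for processor p), sendCount[p], Nprocs, and myrank are assumed bookkeeping names, and the tags are arbitrary:

// Send the count first, then the particle data, to every other rank.
// escapes, sendCount, Nprocs, myrank are assumed to exist as described above.
MPI_Request creq, preq;
for(int p = 0; p < Nprocs; ++p){
  if(p == myrank) continue;
  MPI_Isend(&sendCount[p], 1, MPI_INT, p, 100, MPI_COMM_WORLD, &creq);
  MPI_Wait(&creq, MPI_STATUS_IGNORE);
  MPI_Isend(escapes[p], sendCount[p], Particles, p, 200, MPI_COMM_WORLD, &preq);
  MPI_Wait(&preq, MPI_STATUS_IGNORE);
}

Waiting immediately after each Isend mirrors the slide's Isend/Wait description; collecting the requests and finishing them with one MPI_Waitall would let the sends overlap.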

Pseudo Code continued…
For each processor {
– MPI_Irecv the number of particles to arrive
– Now figure out the new size of the vector of particles:
    new size = old number of particles + incoming – outgoing
– Reallocate space for the new vector, bigger or smaller.
    (Diagram: the particle vector with status-0 and status-1 entries interleaved before compaction.)
    1. Set each staying particle to a temp
    2. Re-fill the vector with the status-0 particles packed at the front
    3. Then realloc

new_num = 0;
for(r = 0; r < num_particles; r++){
  if(particle[r].status == 0){          // keep only the particles that stay
    temp_particle = particle[r];
    particle[new_num] = temp_particle;  // pack them to the front of the vector
    new_num++;
  }
}
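For step 3, a minimal realloc sketch, assuming the particle array was allocated with malloc and that incoming holds the count received above (both are assumptions; the project's bookkeeping names are not shown):

// Grow or shrink the buffer so the incoming particles can be appended
// right after the new_num staying particles. Requires <cstdlib> for realloc.
int new_size = new_num + incoming;
Particle *tmp = (Particle*)realloc(particle, new_size*sizeof(Particle));
if(tmp != NULL){
  particle = tmp;
  num_particles = new_size;
}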

Pseudo Code continued…
For each processor {
– MPI_Recv the new particles
Meanwhile…
– Each processor is outputting the coordinate information to a separate file
– Processor 0 holds the photobleaching region and outputs the number of particles inside it at each time step
(Area of photobleaching region / total area) × total number of particles should give a good estimate of the number of particles in the region at any given time. This estimate is ~980 particles, but from the output we can see that this is not the case. I will play around with the PDF to see what kind of numbers I can get…
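A matching receive of the particle data, sketched under the same assumed bookkeeping (recvCount[p] is the count obtained from the earlier MPI_Irecv; the tag matches the send sketch above):

// Receive each sender's particles directly into the tail of the
// reallocated vector, using the committed Particles datatype.
MPI_Status st;
int offset = new_num;
for(int p = 0; p < Nprocs; ++p){
  if(p == myrank || recvCount[p] == 0) continue;
  MPI_Recv(&particle[offset], recvCount[p], Particles, p, 200, MPI_COMM_WORLD, &st);
  offset += recvCount[p];
}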

Profile: 4 processors, 40,000 particles, 400 time steps
(Plot over the time steps.)

Profile: 16 processors, 40,000 particles, 40 time steps
(Plot over the time steps.)
Table: number of particles on each processor, one output file per processor (proc_0.txt through proc_15.txt), plus the total.
Smaller load on these processors; can we see this from upshot?

At each time step each processor has a different number of particles, so the workload is not perfectly balanced, but the imbalance does not seem to have a significant effect on overall performance.

400 particles (gold-labeled proteins), 40 time steps

Just for fun… not an area-preserving map

Future Directions and Acknowledgements:
I will be working on this code for the next few months, as I hope to develop it into a master's thesis.
As Dr. Warburton pointed out, having each processor control each particle and physically send it to another processor is not really necessary for the simulation I have just shown you, although it will be very important as I extend this code to include clusters and work in 3 dimensions… so all my work is not lost or without purpose!
Thanks to Ken Jacobsen's group at UNC, Chapel Hill, for the movie.
Thanks to STMC at UNM Health Sciences Center for the images and data.