Infinite Mixture Model-Based Clustering of DNA Microarray Data Using OpenMP

DNA microarrays

Clustering MA data (not computers): both the genes (rows) and the conditions/samples (columns) of the expression matrix can be clustered.

Why do I need so much computing power?
Goal: determine all posterior pairwise probabilities of two genes/samples belonging to the same cluster.
- Infinite mixture models (IMMs) cannot be solved analytically
- A sampling method is used to approximate the posterior probabilities
- Typically 10,000 iterations
- Several thousand genes, 10 to 200 samples
- O(genes² × samples²) probabilities are computed per iteration
- Plus some overhead for cluster reassignment and the other model parameters
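The pairwise probabilities above are typically approximated by counting, over the kept sampler iterations, how often two genes receive the same cluster label. A minimal C sketch of that counting step, with hypothetical names (`labels`, `pairwise_coclustering`) and toy sizes rather than the talk's actual code; the O(genes²) pair loop parallelizes cleanly because each (i, j) entry is independent:

```c
#define G 4   /* genes (toy size; real runs have several thousand) */
#define T 3   /* kept sampler iterations (really ~10,000) */

/* labels[t][g] = cluster label of gene g in iteration t.
   prob[i][j] <- fraction of iterations in which i and j co-cluster. */
void pairwise_coclustering(int labels[T][G], double prob[G][G]) {
    int i, j, t;
    /* every (i, j) pair is independent, so the outer loop is parallelized */
    #pragma omp parallel for private(j, t)
    for (i = 0; i < G; i++) {
        for (j = 0; j < G; j++) {
            int hits = 0;
            for (t = 0; t < T; t++)
                if (labels[t][i] == labels[t][j])
                    hits++;
            prob[i][j] = (double)hits / T;
        }
    }
}
```

Without `-fopenmp` the pragma is ignored and the function runs sequentially with the same result.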

Open Multi-Processing (OpenMP)
- Facilitates parallelization of C/C++ and Fortran code for shared-memory environments, e.g. multi-processor machines
- A set of compiler directives, environment variables, and library functions
- Platform-independent; website: http://www.openmp.org/
- Sequential code is parallelized by adding compiler directives: relatively small programming effort, reduced risk of programming errors
- Uses shared memory, which reduces the communication overhead needed to synchronize multiple threads, but threads cannot run across multiple nodes

“Hello World” in OpenMP

#include <stdio.h>
#include <omp.h>

int main(int argc, char *argv[]) {
    int id, nthreads;
    #pragma omp parallel private(id)
    {
        id = omp_get_thread_num();
        printf("Hello World from thread %d\n", id);
        #pragma omp barrier
        if (id == 0) {
            nthreads = omp_get_num_threads();
            printf("There are %d threads\n", nthreads);
        }
    }
    return 0;
}

OpenMP examples

Header:
    #include <omp.h>

Library functions:
    omp_set_num_threads(4);
    printf("number of threads %d\n", omp_get_num_threads());

Compiler directives:
    #pragma omp for schedule(dynamic, 1)
    for (j = 0; j <= Q; j++) {
        clusterProbabilities[j] = getProbCsMissing2(i, j, Contexts);
    }

OpenMP examples: more compiler directives

    #pragma omp for
    for (j = 0; j <= Q; j++) {
        ...
        #pragma omp critical
        sigmas[i][j] = 1.0 / gengam(beta[i] * v[i] / 2.0, beta[i] / 2.0);
    }
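The `critical` directive above serializes the update to the shared `sigmas` array so that only one thread writes at a time. A self-contained sketch of the same pattern, a shared histogram updated inside a parallel loop (names are illustrative, not from the talk):

```c
#define BINS 4

/* Count vals[] into BINS buckets. Several threads may hit the same
   bucket at once, so the increment is protected by a critical section. */
void histogram(const int *vals, int n, int counts[BINS]) {
    int i;
    for (i = 0; i < BINS; i++)
        counts[i] = 0;
    #pragma omp parallel for
    for (i = 0; i < n; i++) {
        int b = vals[i] % BINS;
        /* without this, concurrent counts[b]++ would be a data race */
        #pragma omp critical
        counts[b]++;
    }
}
```

Without `-fopenmp` the pragmas are ignored and the loop simply runs sequentially.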

OpenMP examples: more compiler directives

    int i;
    #pragma omp parallel for private(i, pos)
    for (i = 0; i < T; i++) {
        ...
    }

… is the same as …

    #pragma omp parallel for private(pos)
    for (int i = 0; i < T; i++) {
        ...
    }
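The two forms are equivalent because the index variable of a parallel `for` loop is made private to each thread automatically. Other scratch variables declared outside the loop, like `pos` above, must be listed in `private` explicitly; a small illustrative sketch (hypothetical function, not from the talk):

```c
/* Compute x[i] = a*x[i] + b. The scratch variable tmp lives outside
   the loop, so each thread needs its own copy via private(tmp);
   otherwise all threads would race on one shared tmp. */
void scale_shift(double *x, int n, double a, double b) {
    int i;
    double tmp;
    #pragma omp parallel for private(tmp)
    for (i = 0; i < n; i++) {
        tmp = a * x[i];
        x[i] = tmp + b;
    }
}
```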

Some more compiler directives

Reduction (sums the shared variable “sum” across threads; note that in C the worksharing directive is `for`, not the Fortran-style `do`):
    #pragma omp for reduction(+:sum)

Parallel region:
    #pragma omp parallel
    {
        ...
    }

Sections (each section runs in its own thread):
    #pragma omp sections
    {
        #pragma omp section
        /* code block 1 */
        #pragma omp section
        /* code block 2 */
    }
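As a concrete instance of the `reduction` clause, a sum of squares where each thread accumulates a private partial sum that OpenMP combines into the shared `sum` at the end of the loop (illustrative sketch, not from the talk):

```c
/* reduction(+:sum): each thread gets a private copy of sum
   initialized to 0; the copies are added together at loop end */
double sum_of_squares(const double *x, int n) {
    double sum = 0.0;
    int i;
    #pragma omp parallel for reduction(+:sum)
    for (i = 0; i < n; i++)
        sum += x[i] * x[i];
    return sum;
}
```

Compared with wrapping `sum += ...` in a `critical` section, the reduction avoids serializing every iteration and scales much better.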

The making-of

Start an interactive session:
    jfreuden@fructose:~> qsub -I -l nodes=1:opteron

Intel compiler:
    jfreuden@bmi-opt2-01:~> module load openmpi-intel
    jfreuden@bmi-opt2-01:~> icpc -w -openmp

g++ compiler:
    jfreuden@bmi-opt2-01:~> module load gcc-4.2.3
    jfreuden@bmi-opt2-01:~> g++ -fopenmp

The batch file

    #PBS -S /bin/csh
    #PBS -l nodes=1:opteron:ppn=2
    #PBS -l walltime=18:00:00
    #PBS -e /users/jfreuden/runGimm/stderr.txt
    #PBS -o /users/jfreuden/runGimm/stdout.txt

    setenv OMP_NUM_THREADS `cat $PBS_NODEFILE | grep $HOST | wc -l`
    module load intel
    cd /users/jfreuden/runGimm/
    R CMD BATCH runGimm.R

Simulation study: Non-informative samples
- 4 gene clusters of sizes 20, 20, 80, and 80
- 3 sample clusters of size 5
- Additional non-informative samples: m+ = 5, 10, 20, 50, 100, with no change in expression and the same noise level
- 100 repeats for each level

Simulation study: Non-informative samples (results figures)

Questions? Comments?

Additional Slides

Clustering (figure from D’haeseleer, 2005)

Example for Gibbs Sampling: BUGS

Simulation study: Simple Case

Simulation study: Simple Case

Simulation study: ‘Time course 1’

Simulation study: ‘Time course 2’

Simulation study: Non-informative samples

Simulation study: Non-informative samples