Parallelization of Classification Algorithms for Medical Imaging on a Cluster Computing System. Advisor: 梁廷宇. Department: 碩光通一甲. Name: 吳秉謙. Student ID: 1095319127.



Introduction A Single Program Multiple Data (SPMD) parallel implementation of image classification algorithms on a cluster of personal computers. 2 different Application Programming Interfaces (APIs). 3 image classification algorithms.

The current trend in parallel processing is toward low-cost, highly available cluster computers built by networking individual systems. Such small-scale clusters offer many advantages:
1. inexpensive to assemble
2. easy to maintain
3. extremely fault-tolerant
4. surprisingly powerful and relatively easy to program

Image Classification System Goals
1. Process a continuous stream of image data
2. Data re-configurable at run time for varying image sizes and image sets
3. Process re-configurable so that individual processes and processor nodes can be added, deleted, or modified to reflect changes in the processing environment
4. Process re-configurable so that different classification operations can be performed
5. Conducive to software modification, extension, and rapid application development

Data Scheduling Each Worker process is given an approximately equal-size portion of the image to operate on. A Worker's image partition is distributed by the Manager to that Worker row by row, through individual messages.
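The Manager's equal-size partitioning can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the function name `partition_rows` and the half-open row ranges are assumptions.

```python
def partition_rows(num_rows, num_workers):
    """Split image rows into near-equal contiguous blocks, one per Worker.

    Mirrors the scheduling described on the slide: each Worker receives an
    approximately equal share of the image, later delivered row by row.
    """
    base, extra = divmod(num_rows, num_workers)
    ranges = []
    start = 0
    for w in range(num_workers):
        count = base + (1 if w < extra else 0)  # spread the remainder
        ranges.append((start, start + count))   # [start, end) row block
        start += count
    return ranges

# A 480-row image split across 7 Workers: the first 4 get 69 rows,
# the remaining 3 get 68, and every row is assigned exactly once.
print(partition_rows(480, 7))
```

In an MPI version each `(start, end)` block would then be sent to its Worker one row per message, matching the row-by-row delivery the slide describes.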

Object-oriented design

For MPI and Paradise: 8 PC platforms (Pentium 166 MHz systems) with 32 MB of RAM, running Microsoft Windows NT, connected by a 25 Mb/s ATM switch and 10 Mb/s Ethernet.

Context-Independent Image Classifier Performance Nearest Mean Maximum Likelihood K Nearest Neighbors

Nearest Mean
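The slide names the classifier but gives no detail, so here is a minimal per-pixel sketch of the standard Nearest Mean rule: assign each pixel to the class whose mean feature vector is closest in Euclidean distance. The class names and intensity values are hypothetical.

```python
import math

def nearest_mean(pixel, class_means):
    """Assign a pixel (feature vector) to the class whose mean is nearest.

    class_means maps class label -> mean feature vector. Euclidean
    distance is assumed; the slides do not specify the metric.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(class_means, key=lambda c: dist(pixel, class_means[c]))

# Illustrative 1-D grey-level means for a medical image.
means = {"bone": (200.0,), "tissue": (120.0,), "background": (10.0,)}
print(nearest_mean((190.0,), means))  # → bone
```

Because every pixel is classified independently, the rule parallelizes trivially over the row blocks each Worker receives.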

Maximum Likelihood
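Again the slide is a bare title, so this is a hedged 1-D sketch of the general Maximum Likelihood rule: model each class as a Gaussian and pick the class that maximizes the (log-)likelihood of the pixel value. The paper would use multivariate Gaussians; the univariate form and the parameter values here are illustrative only.

```python
import math

def max_likelihood(pixel, class_params):
    """Pick the class maximizing a univariate Gaussian log-likelihood.

    class_params maps class label -> (mean, std). Dropping the constant
    term, log L = -log(sigma) - (x - mu)^2 / (2 * sigma^2).
    """
    def log_lik(x, mu, sigma):
        return -math.log(sigma) - (x - mu) ** 2 / (2 * sigma ** 2)
    return max(class_params, key=lambda c: log_lik(pixel, *class_params[c]))

# A wide "tissue" class can win even when "bone" has the nearer mean.
params = {"bone": (200.0, 15.0), "tissue": (120.0, 30.0)}
print(max_likelihood(150.0, params))  # → tissue
```

The usage example shows how Maximum Likelihood differs from Nearest Mean: class variance matters, not just distance to the mean.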

K Nearest Neighbors
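For the third classifier, a minimal sketch of the standard K Nearest Neighbors rule: classify each pixel by majority vote among the k closest training samples. The 1-D features, training data, and k = 3 default are assumptions; the slides give no implementation detail.

```python
from collections import Counter

def knn(pixel, training, k=3):
    """Classify a pixel by majority vote among its k nearest samples.

    training is a list of (feature, label) pairs; squared distance is
    used since only the ordering of neighbors matters.
    """
    neighbors = sorted(training, key=lambda s: (s[0] - pixel) ** 2)[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [(10, "bg"), (15, "bg"), (118, "tissue"), (125, "tissue"),
         (198, "bone"), (205, "bone")]
print(knn(122, train))  # → tissue
```

Of the three classifiers, KNN has the heaviest per-pixel cost (a scan of the training set), which is exactly the kind of computational burden the Conclusions say is needed to amortize the cluster's communication overhead.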

Sample Images

Conclusions We have taken 3 algorithms (Nearest Mean; Maximum Likelihood; K Nearest Neighbors) and parallelized them using both message passing (MPI) and virtual shared memory (Paradise) on a small-scale cluster. Both MPI and Paradise introduce overhead, but can still be used effectively if the algorithm's computational burden is large enough to compensate for the additional communication cost. Paradise far outperformed MPI for the classifiers developed, under the hardware and software suite utilized. The small-scale cluster also proved to be a reliable and highly effective environment.

Thanks for your attention