Slide 1: CS-726 Parallel Computing
By Rajeev Wankar, wankarcs@uohyd.ernet.in
University of Hyderabad
Slide 2: For Whom
Elective for M.Tech. and MCA students.
Slide 3: Objective
By the end of the semester, students should be able to:
- Understand parallel algorithm paradigms and design efficient parallel algorithms.
- Given a practical application, identify the design issues and write algorithms for the target machine.
- Write and modify parallel libraries.
Slide 4: Prerequisites
Knowledge of introductory algorithms, networks, Java/C/C++, and Unix/Linux as an operating system; familiarity with socket programming is a plus.
Slide 5: Course Outline
Here is a preliminary, non-exhaustive list of topics we will or may cover. It is subject to change, with advance notice, partly based on how well the students are following the material.
Slide 6: Unit 1
Introduction to parallel computing: why parallel computing and its scope, the control and data approaches, models of parallel computation, and design paradigms of parallel algorithms.
Slide 7: Unit 2
Classification and taxonomies: MPP, SMP, CC-NUMA, and clusters (dedicated high-performance (HP), high-throughput (HT), and data-intensive computing); interconnection networks; Flynn's taxonomy.
Slide 8: Unit 3
An overview of practical parallel programming paradigms: programmability issues; programming models such as message passing, client-server, peer-to-peer, and MapReduce.
Slide 9: Unit 4
Clustering of computers, the Beowulf supercomputer, and the use of MPI in cluster computing; debugging, evaluating, and tuning cluster programs.
Slide 10: Unit 5
Message-passing standards: PVM (Parallel Virtual Machine) and MPI (Message Passing Interface); MPI and its routines.
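As orientation for this unit, here is a minimal sketch (illustrative, not taken from the slides) of the kind of MPI routines it introduces: MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Send, MPI_Recv, and MPI_Finalize. It assumes a standard MPI installation; compile with mpicc and launch with, e.g., mpirun -np 4.

  /* Minimal MPI sketch: every process reports its rank,
     and rank 1 sends one integer to rank 0. */
  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv) {
      int rank, size;
      MPI_Init(&argc, &argv);                 /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes   */
      printf("Hello from rank %d of %d\n", rank, size);

      if (size > 1) {
          int value = 42;
          if (rank == 1)
              MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
          else if (rank == 0) {
              MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              printf("Rank 0 received %d from rank 1\n", value);
          }
      }
      MPI_Finalize();                         /* shut down the runtime */
      return 0;
  }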
Slide 11: Unit 6
Performance metrics and speedup: types of performance requirements, basic performance metrics, workload and speed metrics; performance of parallel computers: parallelism and interaction overheads.
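For reference (these definitions are standard, not quoted from the slide), the basic quantities the unit works with are:
  Speedup:      S(p) = T(1) / T(p), the ratio of the serial execution time to the time on p processors.
  Efficiency:   E(p) = S(p) / p.
  Amdahl's law: S(p) <= 1 / (f + (1 - f) / p), where f is the inherently serial fraction of the work, so speedup is bounded by 1/f no matter how large p grows.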
Slide 12: Unit 6
Overview of programming with shared memory: OpenMP (history, overview, programming model, OpenMP constructs, performance issues and examples, explicit parallelism and advanced features of OpenMP); distributed shared memory programming using Jackal; introduction to multi-core programming through software multi-threading.
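To give a flavour of the OpenMP constructs listed above, here is a minimal sketch (illustrative only) of a parallel loop with a reduction; it assumes a compiler with OpenMP support, e.g. gcc -fopenmp.

  /* Minimal OpenMP sketch: a parallel for-loop with a reduction. */
  #include <stdio.h>
  #include <omp.h>

  int main(void) {
      const int N = 1000000;
      double sum = 0.0;

      /* Iterations are divided among the threads; the reduction clause
         combines the per-thread partial sums safely. */
      #pragma omp parallel for reduction(+:sum)
      for (int i = 1; i <= N; i++)
          sum += 1.0 / i;

      /* Each thread can query its own id and the team size. */
      #pragma omp parallel
      {
          if (omp_get_thread_num() == 0)
              printf("ran with %d threads\n", omp_get_num_threads());
      }
      printf("harmonic sum of %d terms: %f\n", N, sum);
      return 0;
  }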
Slide 13: Unit 7
Reconfigurable computing: what is it, why, how, and where to do it; algorithms for reconfigurable systems.
Slide 14: Unit 8 (Applications)
- Building a cluster using Rocks
- Cluster-based algorithms and applications
- Shared-memory programming
- Writing a subset of a parallel library using socket programming in C or Java (see the sketch after this list).
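As a hint of what "a subset of a parallel library over sockets" might look like, here is a small, purely illustrative C sketch: a length-prefixed msg_send/msg_recv pair (both names hypothetical, not from the course) exchanged over a local socketpair between a parent and a forked child. A real assignment would use TCP sockets between cluster nodes.

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/wait.h>

  /* Send one length-prefixed message over a connected socket. */
  static int msg_send(int fd, const void *buf, size_t len) {
      uint32_t n = (uint32_t)len;
      if (write(fd, &n, sizeof n) != sizeof n) return -1;
      if (write(fd, buf, len) != (ssize_t)len) return -1;
      return 0;
  }

  /* Receive one length-prefixed message; returns its length or -1. */
  static ssize_t msg_recv(int fd, void *buf, size_t maxlen) {
      uint32_t n;
      size_t got = 0;
      if (read(fd, &n, sizeof n) != sizeof n || n > maxlen) return -1;
      while (got < n) {
          ssize_t r = read(fd, (char *)buf + got, n - got);
          if (r <= 0) return -1;
          got += (size_t)r;
      }
      return (ssize_t)n;
  }

  int main(void) {
      int fds[2];
      if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) < 0) return 1;

      if (fork() == 0) {                        /* child acts as a worker */
          char buf[64];
          ssize_t n = msg_recv(fds[1], buf, sizeof buf - 1);
          if (n >= 0) { buf[n] = '\0'; printf("worker got: %s\n", buf); }
          _exit(0);
      }
      const char *msg = "hello from master";    /* parent acts as master  */
      msg_send(fds[0], msg, strlen(msg));
      wait(NULL);
      return 0;
  }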
Slide 15: Assessment
Internal: 40 marks
- Three internal tests of 10 marks each, best two of three counted: 20 marks
- Lab assignments: 10 marks
- One group assignment: 5 marks
- Seminar: 5 marks
External: end-of-semester examination, 60 marks.
Slide 16: References
- Quinn, M. J., Parallel Computing: Theory and Practice, McGraw-Hill.
- Barry Wilkinson and Michael Allen, Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers, Prentice Hall, 1999.
- R. Buyya (ed.), High Performance Cluster Computing: Programming and Applications, Prentice Hall, 1999.
- William Gropp and Rusty Lusk, Tuning MPI Applications for Peak Performance, Pittsburgh, 1996.
- Kai Hwang and Zhiwei Xu, Scalable Parallel Computing: Technology, Architecture, Programming, McGraw-Hill, New York, 2004.
- W. Gropp, E. Lusk, N. Doss, and A. Skjellum, "A high-performance, portable implementation of the MPI message passing interface standard", Parallel Computing 22(6), Sep 1996.
- A. Gibbons and W. Rytter, Efficient Parallel Algorithms, Cambridge University Press.
- V. Kumar et al., Introduction to Parallel Computing: Design and Analysis of Parallel Algorithms, Benjamin/Cummings, 1994.
- Shameem Akhter and Jason Roberts, Multi-Core Programming, Intel Press, 2006.
Slide 17: Questions