Using Remote HPC Resources to Teach Local Courses


Using Remote HPC Resources to Teach Local Courses
Oklahoma Supercomputing Symposium, 10/06/10
Larry F. Sells, Oklahoma City University
Clay B. Carley III, East Central University
Chao (Charlie) Zhao, Cameron University

The Impact of OSCER on Software Engineering at Oklahoma City University
Larry F. Sells, Department of Computer Science, Oklahoma City University

Software Engineering at OCU
- Same instructor for many years
- SE concepts and a team/project course
- For the last 4 semesters, the project focus has been on MPI and OpenMP

Instructor's Training
- OU Supercomputing Center for Education and Research (OSCER) resources
- National Computational Science Institute (NCSI) / SC07-09 HPC summer Parallel Computing workshops: 2005, 2007, 2009, 2010
- The NCSI summer 2010 Intermediate Parallel Computing workshop was important in pulling many things together

Course Objectives
- Engage students in a first course in software engineering (Roger Pressman text)
- Help students work in a UNIX, C, MPI environment
- Help student teams create MPI project code along with SE documentation (requirements, design, test plan, user manual, final source code, executables, and report)

Software Engineering Fall 2010 - Prerequisites
- 2 years of experience in C, C++, or Java
- Knowledge of data structures
- Basic background in Linux (UNIX) helpful
- No previous study of parallel programming, HPC, or MPI
- No previous knowledge of cryptology

Software Engineering Fall Project
- Inspired by Simon Singh's "Cipher Challenge"
- Ciphers include: homophonic, Vigenère, Playfair, ADFGVX, DES, and RSA
- Goal is to decipher Singh's ciphertexts using MPI and C or C++, and to develop appropriate SE documentation
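
For the MPI side of such a project, a natural pattern is to split a cipher's key space across ranks. The following is a minimal, hypothetical sketch, not the students' actual code: score_key() is a placeholder for a real fitness test (e.g., letter-frequency analysis of a trial decryption), and the key-space size is made up.

    /* Hedged sketch: static division of a cipher key space across MPI ranks. */
    #include <mpi.h>
    #include <stdio.h>

    static double score_key(long key) {
        /* Placeholder: decrypt with `key` and rate how English-like
           the result looks.  Dummy value so the sketch runs. */
        return -(double)key;
    }

    int main(int argc, char **argv) {
        const long nkeys = 1000000L;   /* assumed key-space size */
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank scans an interleaved slice of the key space. */
        double best = -1e300;
        long best_key = -1;
        for (long k = rank; k < nkeys; k += size) {
            double s = score_key(k);
            if (s > best) { best = s; best_key = k; }
        }

        /* Find the globally best score and which rank holds it. */
        struct { double score; int rank; } in = { best, rank }, out;
        MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC,
                      MPI_COMM_WORLD);
        if (rank == out.rank)
            printf("best key %ld (score %g) found by rank %d\n",
                   best_key, best, rank);
        MPI_Finalize();
        return 0;
    }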

Dr. Henry Neeman, OSCER, and Sooner
- The OSCER operations team created Sooner accounts for the SE students
- Henry Neeman and Josh Alexander came to OCU to run an introductory Sooner lab
- Neeman's 11 SiPE (Supercomputing in Plain English) presentations were important, especially #5 and #6
- We are working to set up an OU Sooner tour, which gives a gut understanding of a cluster

Relevant OSCER 2010 Workshop Ideas
- Client/server, data parallelism, task parallelism, and pipeline parallel strategies
- Comparing MPI output on the Sooner and Earlham clusters
- MPI debugging
- Introduction to CUDA
- Introduction to hybrid HPC: CUDA and MPI

Sooner is Better
Clay B. Carley III, Department of Computer Science, East Central University

Parallel Programming: The Future
- High performance computing
- The cloud
- Multicore architectures
- Together, these mean more pressure on future graduates to understand parallel programming

Parallel Programming Spring 2010 - Prerequisites
- Experience in C
- Linux (UNIX) experience
- No previous study of parallel programming, HPC (High Performance Computing), or MPI (Message Passing Interface)
- No experience with batch processing

Course Objectives
- Engage students in a first course in parallel programming ("Parallel Programming with MPI," Pacheco)
- Help students work in a C, MPI, batch environment
- Help students understand the different parallel computing architectures

OSCER Resources
- Priming the pump with Dr. Henry Neeman's "Supercomputing in Plain English" slides
- Hardware and memory issues
- Workshop links
- MPI examples available in C and Fortran

Priming the Pump
- Starting from scratch
- Getting an instructor's account on Sooner
- Online resources:
  - Workshops
  - PowerPoint slides
  - Exercises and code examples

Course Kickoff
- Visit by Dr. Neeman and Josh Alexander
- Connecting to Sooner for the first time
- Guidance for first exercises
- Q&A about supercomputing and supercomputers

The Sooner Linux Cluster (sooner.oscer.ou.edu)
- 1,072 Intel Xeon CPU chips / 4,288 cores
- 8,768 GB RAM
- ~105 TB globally accessible disk
- QLogic InfiniBand
- Force10 Networks Gigabit Ethernet
- Red Hat Enterprise Linux 5
- Peak speed: 34.45 TFLOPs (TFLOPs: trillion calculations per second)

Sooner Benefits for the Course
- A "real supercomputer environment"
- Ability to explore different options to see their impact on performance (see the timing sketch below):
  - Increasing/decreasing the number of cores
  - Increasing/decreasing the number of processes
  - Increasing/decreasing the granularity of the problem
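
One way to explore those options is to time the same fixed amount of work at different process counts and compare. A minimal sketch (not from the course materials) using MPI_Wtime; the workload here is an arbitrary numeric loop chosen only so the sketch runs:

    /* Hedged sketch: time a fixed total workload as the process count
       varies.  Run with mpirun -np 2, then -np 4, -np 8, and compare. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long total = 100000000L;            /* fixed total work */
        double t0 = MPI_Wtime();
        double sum = 0.0;
        for (long i = rank; i < total; i += size) /* each rank's share */
            sum += 1.0 / (i + 1.0);
        double global;
        MPI_Reduce(&sum, &global, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("%d processes: %.3f s (sum = %f)\n", size, t1 - t0,
                   global);
        MPI_Finalize();
        return 0;
    }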

Methods for Teaching Some Basic Concepts of Parallel Computing to Undergraduate CS Students at Cameron University
Chao Zhao, Associate Professor, Computing and Technology Department, Cameron University

Parallel Computing at Cameron
- Cameron is a five-year regional public university
- A BS in Computer Science is offered by the Computing and Technology Department
- CS 3813 Parallel Computing is a required course in the CS curriculum (ACM 2000)
- MPI is used as the message-passing library
- OSCER has been a significant teaching resource

Instructor's Training and Cooperation with OSCER
- OU HPC Summer Workshops (2006, 2007, 2008, 2009)
- Inviting a supercomputing expert to speak to students (Dr. Neeman: basic parallel concepts and logic)
- Visiting the OSCER supercomputing center
- Using OSCER's supercomputer to run students' parallel programs:
  - Dr. Neeman and Josh Alexander campus visits
  - A Sooner account for each student

Why Parallel Computing?
- Take advantage of multicore machines
- A parallel approach may improve computing efficiency:
  - Speedup: Sp(n) = Ts / Tp
  - Efficiency: Ep = Ts / (Tp · n), i.e., Ep = Sp / n
- Solve some problems that cannot practically be solved by a sequential approach
- No speed limit in theory
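
Worked example: if a job takes Ts = 100 s sequentially and Tp = 12.5 s on n = 10 cores, the speedup is Sp = 100 / 12.5 = 8 and the efficiency is Ep = 8 / 10 = 0.8, i.e., each core spends 80% of its time doing useful work.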

Parallel Program Logical Structure
    main function {
        common part (variable declarations and initialization);
        if (myrank equals master) {
            code that will be executed by the master process;
        } else {
            code that will be executed by the slave processes;
        }
        program termination part;
    }
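
In C with MPI, that logical structure looks roughly like the following minimal sketch (not the course's actual code; the master and slave work is left as comments):

    #include <mpi.h>

    #define MASTER 0

    int main(int argc, char **argv) {
        int myrank;
        /* common part: variable declarations and initialization */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

        if (myrank == MASTER) {
            /* code that will be executed by the master process */
        } else {
            /* code that will be executed by the slave processes */
        }

        /* program termination part */
        MPI_Finalize();
        return 0;
    }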

Teaching Methods: Job Balance (Matrix Multiplication)
A(m, n) * B(n, p) = C(m, p)
The master process does the following, in order:
  a. Broadcast matrix B to all slave processes.
  b. Send a row of matrix A to each slave process.
  c. Receive a row of matrix C from a slave process.
  d. Copy the received row into matrix C.
  e. If the number of sent rows is less than the number of rows in matrix A, send a row to the now-idle process that completed its task.
  f. Repeat c, d, and e until the job is done.
A slave process does:
  a. Receive matrix B.
  b. Receive a row r of matrix A.
  c. Multiply row r by matrix B to produce a row of matrix C.
  d. Send the resulting row back to the master process.
  e. Repeat b, c, and d until the completion notice is received.
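
A hedged sketch of that self-scheduling pattern in C with MPI, not the actual classroom code: the message tag carries the row index, a tag equal to M (one past the last row) serves as the completion notice, and the matrix sizes and contents are placeholders (A and B are left zero-filled just so the sketch runs).

    #include <mpi.h>
    #include <string.h>

    #define M 8   /* rows of A and C */
    #define N 8   /* columns of A, rows of B */
    #define P 8   /* columns of B and C */

    int main(int argc, char **argv) {
        static double A[M][N], B[N][P], C[M][P];   /* placeholder data */
        double row[N], crow[P];
        int rank, size;
        MPI_Status st;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* a. Broadcast B to all processes. */
        MPI_Bcast(B, N * P, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        if (rank == 0) {                          /* master */
            int sent = 0, received = 0;
            /* b. One initial row per slave; extra slaves get the stop tag. */
            for (int s = 1; s < size; s++) {
                if (sent < M) {
                    MPI_Send(A[sent], N, MPI_DOUBLE, s, sent,
                             MPI_COMM_WORLD);
                    sent++;
                } else {
                    MPI_Send(NULL, 0, MPI_DOUBLE, s, M, MPI_COMM_WORLD);
                }
            }
            /* c-f. Collect rows; feed whichever slave just finished. */
            while (received < sent) {
                MPI_Recv(crow, P, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                memcpy(C[st.MPI_TAG], crow, P * sizeof(double));  /* d */
                received++;
                if (sent < M) {                                   /* e */
                    MPI_Send(A[sent], N, MPI_DOUBLE, st.MPI_SOURCE, sent,
                             MPI_COMM_WORLD);
                    sent++;
                } else {
                    MPI_Send(NULL, 0, MPI_DOUBLE, st.MPI_SOURCE, M,
                             MPI_COMM_WORLD);
                }
            }
        } else {                                  /* slave */
            for (;;) {
                /* b. Receive a row of A; tag M is the completion notice. */
                MPI_Recv(row, N, MPI_DOUBLE, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == M) break;
                /* c. Multiply the row by B to get one row of C. */
                for (int j = 0; j < P; j++) {
                    crow[j] = 0.0;
                    for (int k = 0; k < N; k++)
                        crow[j] += row[k] * B[k][j];
                }
                /* d. Return the result, tagged with its row index. */
                MPI_Send(crow, P, MPI_DOUBLE, 0, st.MPI_TAG,
                         MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }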

Teaching Methods (continued): Communicator Creation
- Monte Carlo method to compute π
- The master process repeatedly generates sets of random numbers until it is notified to terminate
- The slave processes use the random numbers to generate points
- The master process and the slave processes belong to different communicators
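
A hedged sketch of the idea, not the course's actual code: MPI_Comm_split puts the master (a random-number server) and the point-generating slaves in separate communicators, and for simplicity the master here serves a fixed number of requests rather than waiting for an explicit termination notice. CHUNK and ROUNDS are made-up parameters.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 1000   /* random numbers per request */
    #define ROUNDS 100   /* requests per slave */

    int main(int argc, char **argv) {
        int world_rank, world_size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* color 0 = master, color 1 = slaves: two disjoint communicators */
        MPI_Comm sub;
        MPI_Comm_split(MPI_COMM_WORLD, world_rank == 0 ? 0 : 1,
                       world_rank, &sub);

        if (world_rank == 0) {        /* master: serve random numbers */
            double buf[CHUNK];
            int requests = (world_size - 1) * ROUNDS;
            for (int r = 0; r < requests; r++) {
                MPI_Status st;
                MPI_Recv(NULL, 0, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &st);
                for (int i = 0; i < CHUNK; i++)
                    buf[i] = (double)rand() / RAND_MAX;
                MPI_Send(buf, CHUNK, MPI_DOUBLE, st.MPI_SOURCE, 0,
                         MPI_COMM_WORLD);
            }
        } else {                      /* slaves: turn numbers into points */
            double buf[CHUNK];
            long in = 0, total = 0;
            for (int r = 0; r < ROUNDS; r++) {
                MPI_Send(NULL, 0, MPI_INT, 0, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, CHUNK, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                for (int i = 0; i + 1 < CHUNK; i += 2) { /* (x, y) pairs */
                    if (buf[i]*buf[i] + buf[i+1]*buf[i+1] <= 1.0) in++;
                    total++;
                }
            }
            /* combine counts inside the slaves-only communicator */
            long gin, gtotal;
            MPI_Reduce(&in, &gin, 1, MPI_LONG, MPI_SUM, 0, sub);
            MPI_Reduce(&total, &gtotal, 1, MPI_LONG, MPI_SUM, 0, sub);
            int sub_rank;
            MPI_Comm_rank(sub, &sub_rank);
            if (sub_rank == 0)
                printf("pi ~= %f\n", 4.0 * gin / gtotal);
        }
        MPI_Comm_free(&sub);
        MPI_Finalize();
        return 0;
    }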

Conclusions
- Instructor training is essential to offering sound teaching in parallel computing and software engineering
- OSCER is a very useful resource for improving teaching and learning quality
- Proper teaching methods give instructors an efficient way to deliver their material
- HPC has much to offer the CS curriculum
- Thanks to OSCER and its excellent staff!