Thinking in Parallel – Pipelining New Mexico Supercomputing Challenge in partnership with Intel Corp. and NM EPSCoR.


Copyrights and Acknowledgments Intel and the Intel logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States or other countries. New Mexico EPSCoR Program is funded in part by the National Science Foundation award # and the State of New Mexico. Any opinions, findings, conclusions, or recommendations expressed in the material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. For questions about the Supercomputing Challenge, a 501(c)3 organization, contact us at: challenge.nm.org

Agenda
– Review of the definition of pipelining
– Steps involved in pipelining
– Conceptual example of pipelining
– Hands-on parallel pipelining activities

Methodology
– Domain decomposition: used when the same operation is performed on a large number of similar data items.
– Task decomposition: used when different operations can be performed at the same time.
– Pipelining: used when there are sequential operations on a large amount of data.

Pipelining
– A special kind of task decomposition
– “Assembly line” parallelism
– Example: 3D rendering in computer graphics
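The assembly-line idea can be sketched with one thread per stage and queues as the conveyor belts between them. This is a minimal illustration, not code from the slides; the three stage functions are hypothetical stand-ins for, say, the transform, shade, and output stages of a renderer:

```python
import queue
import threading

def stage(worker, inbox, outbox):
    # Each stage runs in its own thread: take an item from the inbox,
    # apply the stage's operation, and pass the result downstream.
    def run():
        while True:
            item = inbox.get()
            if item is None:          # sentinel: no more data, shut down
                outbox.put(None)      # propagate the sentinel downstream
                return
            outbox.put(worker(item))
    return threading.Thread(target=run)

# Hypothetical 3-stage pipeline: "transform" -> "shade" -> "output".
q0, q1, q2, q3 = (queue.Queue() for _ in range(4))
stages = [
    stage(lambda x: x * 2, q0, q1),          # "transform"
    stage(lambda x: x + 1, q1, q2),          # "shade"
    stage(lambda x: f"frame:{x}", q2, q3),   # "output"
]
for t in stages:
    t.start()

for frame in range(5):   # feed 5 data sets into the front of the pipe
    q0.put(frame)
q0.put(None)

results = []
while (item := q3.get()) is not None:
    results.append(item)
for t in stages:
    t.join()
print(results)   # frames emerge in order, each having passed through all stages
```

Because each queue is FIFO with a single consumer, results come out in the order the frames went in, even though all three stages work concurrently.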

Processing Data Sets Without and With Pipelining (slide sequence)

Processing 1 data set: after the step definitions, one data set passes through the 4 steps one at a time, taking 4 steps in total.

Processing 2 data sets: using pipelining, the second data set enters the pipeline one step behind the first, so both complete in 5 steps.

Processing 5 data sets: the pipeline fills up, each data set one step behind the previous one; all 5 complete in 8 steps.
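The step counts in the slide sequences follow one simple formula, sketched here as a check (the formula itself is implied by the walk-throughs, not stated on the slides):

```python
def pipeline_steps(stages, items):
    # Once the first data set has made it through all the stages,
    # a new data set finishes on every subsequent step.
    return stages + items - 1

# Matches the slide walk-throughs, which use 4 stages per data set:
assert pipeline_steps(4, 1) == 4   # 1 data set:  4 steps
assert pipeline_steps(4, 2) == 5   # 2 data sets: 5 steps
assert pipeline_steps(4, 5) == 8   # 5 data sets: 8 steps

# Without pipelining, the same work would take stages * items steps:
assert 4 * 5 == 20                 # 5 data sets serially: 20 steps
```

So the more data sets you push through, the closer pipelining gets to finishing one data set per step, instead of one per 4 steps.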

Activity 1 – Domain Decomposition Your team will explore how to use domain decomposition to accomplish the job of folding, stuffing, sealing, addressing, stamping, and mailing multiple envelopes. *See handout for detailed instructions

Activity 1 – Setup
1. Cut up the sample address table, giving each processor a portion of the addresses.
2. Divide up the envelopes, stamps, and address labels so that each processor has the materials needed for its assigned sample addresses.
3. Give a pen or pencil to each processor.
4. Designate an area of the room to be the mailbox.

Activity 1 – Execution
1. After all of the processors are given the instructions, start the timer.
2. Each processor should perform these steps as quickly as possible:
   a. Fold the scrap paper.
   b. Stuff the paper into an envelope.
   c. Pretend to seal the envelope (so we can re-use them later).
   d. Affix a stamp to the envelope.
   e. Complete an address label for the next sample address.
   f. Affix the address label to the envelope.
   g. Place the envelope in the mailbox.
   h. Repeat steps a–g for all sample addresses.
3. Stop the timer.

Activity 1 – Debrief
1. Which task(s) took the longest time?
2. Which task(s) took the shortest time?
3. Were any processors idle for periods of time?

Activity 2 – Task Decomposition and Pipelining Your team will explore how to use a pipelined task decomposition to accomplish the job of folding, stuffing, sealing, addressing, stamping, and mailing multiple envelopes.

Activity 2 – Setup
1. Number the four team members as processors 1, 2, 3, and 4.
2. Give 16 pieces of scrap paper and 16 envelopes to processor 1.
3. Give 16 stamps to processor 2.
4. Give processor 3 the sample addresses, 16 address labels, and a pen or pencil.

Activity 2 – Execution
1. After all of the processors are given the instructions, start the timer.
2. Each processor should perform its required steps as quickly as possible, forwarding completed work to the next processor and repeating until all envelopes are stuffed, stamped, addressed, and mailed:
   – Processor 1: Fold the scrap paper. Stuff the paper into an envelope. Pretend to seal the envelope (so we can re-use them later).
   – Processor 2: Affix a stamp to the envelope.
   – Processor 3: Complete an address label for the next sample address. Affix the address label to the envelope.
   – Processor 4: Place the envelope in the mailbox.
3. Stop the timer.

Activity 2 – Debrief
1. Which task(s) took the longest time?
2. Which task(s) took the shortest time?
3. Were any processors idle for periods of time?
4. Was this approach faster than Activity 1?
   a. If so, discuss reasons why.
   b. If not, discuss reasons why.
5. Do you think it would be possible to speed up this approach? How?
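Whether the pipeline beats domain decomposition depends on how well-balanced the stages are. A rough timing model, with invented per-stage times (all numbers here are hypothetical, chosen only for illustration), shows how an unbalanced pipeline can lose:

```python
# Hypothetical stage times in seconds for the four processors:
# fold+stuff+seal, stamp, address+label, mail.
stage_times = [6, 2, 8, 1]
envelopes = 16
processors = 4

# Activity 1 (domain decomposition): each processor does every step on
# its own share of 4 envelopes, all four working at once.
per_envelope = sum(stage_times)                       # 17 s of work per envelope
activity1 = (envelopes // processors) * per_envelope  # 4 envelopes each: 68 s

# Activity 2 (pipeline): the slowest stage sets the rhythm. The first
# envelope takes the full 17 s; after that, one finishes per 8 s cycle.
activity2 = sum(stage_times) + (envelopes - 1) * max(stage_times)  # 137 s

print(activity1, activity2)   # here the unbalanced pipeline is slower
```

With these made-up times, addressing dominates, so processors 1, 2, and 4 sit idle most of the time, exactly the kind of observation the debrief questions are after.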

Optional Activity – Task Decomposition and Pipelining If time permits, have teams redistribute the tasks among the processors to decrease idle time and the overall time required for completion. For example:
– Multiple processors assigned to the same task.
– Multiple tasks assigned to one processor.
Note: To ensure pipeline processing, no single processor may be assigned every task.
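The effect of redistributing tasks can be sketched with a small model. The stage times below are hypothetical (not from the slides); the point is that the slowest stage sets the pipeline's cycle time, so balancing the stages shrinks the total:

```python
def pipeline_time(stage_times, copies, items):
    # copies[i] = number of processors working stage i in parallel; a stage
    # with k copies can accept a new item every stage_times[i] / k seconds.
    cycle = max(t / k for t, k in zip(stage_times, copies))
    return sum(stage_times) + (items - 1) * cycle

# Hypothetical stage times (s): fold+stuff+seal, stamp, address+label, mail.
unbalanced = pipeline_time([6, 2, 8, 1], [1, 1, 1, 1], 16)

# Same 4 processors, redistributed: stamping and mailing merged onto one
# processor (2 s + 1 s = 3 s), freeing a processor to double up on the
# 8 s addressing bottleneck (effective cycle 4 s).
rebalanced = pipeline_time([6, 3, 8], [1, 1, 2], 16)

print(unbalanced, rebalanced)   # folding becomes the new 6 s bottleneck
```

Note that no processor is assigned every task, so this is still a pipeline, just one whose stages are closer to the same speed.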