Data Shackling: Locality Enhancement of Dense Numerical Linear Algebra Codes

Presentation transcript:

Data Shackling
- Locality enhancement of dense numerical linear algebra codes
- Traversals along co-ordinate axes
- A data-centric reference for each statement
- No array copying

Workloads
- Basic Linear Algebra Subroutines (BLAS)
  - Dot product: x^T y
  - Saxpy: α*x + y
  - Matrix-vector product: y = A*x
  - Triangular solve: Lx = b
  - Matrix multiplication: C = C + A*B
- Matrix factorizations
  - Cholesky
  - LU
  - QR
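For concreteness, here is a minimal C sketch of two of these kernels written as straight-line loop nests; the function names and the row-major layout are my own choices, not part of the slides.

```c
#include <stddef.h>

/* Saxpy (BLAS level 1): y = alpha*x + y */
void saxpy(size_t n, double alpha, const double *x, double *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = alpha * x[i] + y[i];
}

/* Matrix multiplication (BLAS level 3): C = C + A*B, row-major n x n */
void matmul(size_t n, const double *A, const double *B, double *C) {
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            for (size_t k = 0; k < n; k++)
                C[i*n + j] += A[i*n + k] * B[k*n + j];
}
```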

Why is the BLAS approach not good enough?
- What is BLAS? Machine-specific code for each BLAS routine
- Matrix factorizations are blocked
- They expose a BLAS-like interface
- The BLAS is not portable, but the matrix codes built on it are
- Automating the production of blocked codes is not easy
- Too difficult an approach for compilers

What is a data shackle?
- One array in the program is divided into blocks using parallel, equally spaced cutting planes
- An order is fixed for visiting the blocks of data
- One reference to that array is selected for each statement
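As a concrete (if simplified) picture of the cutting planes and the block visit order, here is a minimal C sketch; the block size, the visit callback, and the left-to-right, top-to-bottom order are my own illustrative choices.

```c
#include <stddef.h>

/* Sketch: divide an n x n array into BLOCK x BLOCK tiles with equally
 * spaced horizontal and vertical cutting planes, and visit the tiles
 * left-to-right, top-to-bottom.  The callback stands in for "run the
 * iterations mapped to this block". */
#define BLOCK 25

void visit_blocks(size_t n, void (*visit)(size_t row0, size_t col0)) {
    for (size_t row0 = 0; row0 < n; row0 += BLOCK)      /* horizontal cutting planes */
        for (size_t col0 = 0; col0 < n; col0 += BLOCK)  /* vertical cutting planes   */
            visit(row0, col0);   /* block whose top-left corner is (row0, col0) */
}
```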

Intuitively… a data shackle
- Specifies the order in which blocks of the array are touched
- Through the data-centric reference, it determines which iterations of each statement are performed when a given block is touched
- Code is generated to group those iterations together

Example – Matrix Multiplication
- Place the data shackle on C
- Divide C into 25 x 25 blocks with vertical and horizontal cutting planes
- Blocks are visited left-to-right, top-to-bottom
- C(i,j) is the data-centric reference
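A sketch of the code such a shackle would produce, assuming 100 x 100 matrices so the 25 x 25 blocks tile C evenly; the array sizes and function name are mine, and the loop structure is only an approximation of what the compiler would generate.

```c
/* Shackle on C(i,j) for C = C + A*B with 25 x 25 blocks of C.
 * For each block of C, all iterations (i,j,k) whose data-centric
 * reference C[i][j] falls in that block are executed together;
 * k is unconstrained because C[i][j] does not depend on k. */
enum { N = 100, BS = 25 };

void matmul_shackled_on_C(double C[N][N], const double A[N][N],
                          const double B[N][N]) {
    for (int bi = 0; bi < N; bi += BS)           /* blocks of C, top to bottom */
        for (int bj = 0; bj < N; bj += BS)       /* blocks of C, left to right */
            for (int i = bi; i < bi + BS; i++)
                for (int j = bj; j < bj + BS; j++)
                    for (int k = 0; k < N; k++)  /* k ranges over the full axis */
                        C[i][j] += A[i][k] * B[k][j];
}
```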

Example (continued)

Discussion of the example
- More optimized code can be produced by folding the loop bounds
- Within a block, the original order of instructions is preserved
- The same cannot be said for the program as a whole
- This is not the real blocked code: locality for A and B is still low
- Composing shackles addresses this (covered later; see the sketch below)
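The slides defer composition, but as a rough indication of where it leads, here is my own sketch under the assumption that a second shackle is placed on A(i,k): composing it with the shackle on C(i,j) also blocks the k loop, giving the familiar three-way blocked multiplication with reuse for A and B as well. This is an illustration, not the paper's generated code.

```c
/* Sketch of composed shackles on C(i,j) and A(i,k): all three loops
 * are blocked, so each BS x BS tile of A, B, and C is reused while
 * it is hot in cache.  Block size and names are illustrative. */
enum { N = 100, BS = 25 };

void matmul_composed_shackles(double C[N][N], const double A[N][N],
                              const double B[N][N]) {
    for (int bi = 0; bi < N; bi += BS)
        for (int bj = 0; bj < N; bj += BS)
            for (int bk = 0; bk < N; bk += BS)   /* new: blocks along k */
                for (int i = bi; i < bi + BS; i++)
                    for (int j = bj; j < bj + BS; j++)
                        for (int k = bk; k < bk + BS; k++)
                            C[i][j] += A[i][k] * B[k][j];
}
```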

Are all shackles legal?
- Definitely not
- Shackling reorders the program's instructions, so the new order must respect the program's dependences
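One way to see why legality is not automatic (my own illustration, not from the slides): in forward substitution for Lx = b, each x[i] reads every earlier x[j], so grouping iterations by blocks of x in an order that runs against this dependence chain would change the program's meaning.

```c
/* Forward substitution for Lx = b (L lower triangular).
 * x[i] uses every x[j] with j < i, so the iterations of the i loop
 * cannot be regrouped arbitrarily: a shackle whose block visit order
 * runs counter to this dependence chain is illegal. */
enum { N = 100 };

void lower_tri_solve(const double L[N][N], const double b[N], double x[N]) {
    for (int i = 0; i < N; i++) {
        double s = b[i];
        for (int j = 0; j < i; j++)
            s -= L[i][j] * x[j];   /* reads earlier results x[j] */
        x[i] = s / L[i][i];
    }
}
```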