Parallelization of Search Algorithms for Modeling QTES Processes

Joshua Kramer and Santokh Singh
Rutgers University, Faculty of Management, Dept. of MSIS, 94 Rockafeller Rd., Piscataway, NJ

Benjamin Melamed
Rutgers University, Faculty of Management, Dept. of MSIS, 94 Rockafeller Rd., Piscataway, NJ

DARPA/ITO BAA AON F316
October 1998

MODELING OPTIMIZATION PROBLEM

Minimize, over the space of probability vectors p, the weighted autocorrelation-matching objective

    f(p) = Σ_{τ=1}^{T} w_τ [ ρ_QTES(τ; p) − ρ̂(τ) ]²

where
- T : the maximal autocorrelation lag
- w_τ : the fixed weighting coefficients
- ρ_QTES(τ; p) : QTES model autocorrelation function
- ρ̂(τ) : empirical autocorrelation function
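A minimal sketch of this objective in Python, under the reconstruction above; the array names rho_model, rho_emp, and weights are illustrative, not from the slides:

```python
import numpy as np

def objective(rho_model, rho_emp, weights):
    """Weighted sum of squared autocorrelation errors over lags 1..T."""
    return float(np.sum(weights * (rho_model - rho_emp) ** 2))

# Illustrative values for T = 3 lags.
rho_emp = np.array([0.80, 0.50, 0.30])    # empirical autocorrelations
rho_model = np.array([0.75, 0.55, 0.25])  # QTES model autocorrelations at some p
weights = np.ones(3)                      # fixed weighting coefficients
print(objective(rho_model, rho_emp, weights))  # 0.0075
```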

COMPUTATIONAL SPEEDUP

Parallelize the computation by partitioning the vector space into subspaces. Assume m processors, with a designated processor to initiate the computation:
- distribute the minimization task to the m processors
- compute the minimization within each subspace in parallel
- combine the results at the designated processor

ALGORITHM PARALLELIZATION

(slide diagram) The space H is partitioned into subspaces H_1, H_2, H_3, H_4. Each processor j computes argmin f(p) over p є H_j; the four local results are then combined into the global argmin f(p) over p є H.
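A minimal master/worker sketch of this scheme with Python's multiprocessing, assuming a picklable stand-in objective f and explicit subspace lists (both illustrative):

```python
from multiprocessing import Pool

def f(p):
    """Stand-in objective; in the real problem this is the weighted
    autocorrelation error of the QTES model at probability vector p."""
    return sum((x - 0.25) ** 2 for x in p)

def minimize_subspace(subspace):
    """Worker: brute-force minimum of f over one subspace H_j."""
    return min(subspace, key=f)

if __name__ == "__main__":
    # Four illustrative subspaces of candidate probability vectors.
    H = [[(1.0, 0.0), (0.0, 1.0)], [(0.5, 0.5), (0.25, 0.75)],
         [(0.75, 0.25)], [(0.1, 0.9)]]
    with Pool(processes=4) as pool:
        local_minima = pool.map(minimize_subspace, H)  # per-subspace argmins in parallel
    print(min(local_minima, key=f))  # designated process combines the m results
```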

MATHEMATICAL PROBLEM FORMULATION

Restrict the objective function f to quantized probability vectors of the form

    p = (m_1/k, m_2/k, …, m_n/k),  with m_1 + m_2 + … + m_n = k

where
- k : quantization level
- 1/k : quantization factor
- m_i : non-negative integer in position i
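A sketch of the quantized space under this formulation: the recursive generator below produces every length-n non-negative integer vector summing to k (the function name quantized_vectors is illustrative):

```python
def quantized_vectors(n, k):
    """Recursively enumerate all length-n non-negative integer vectors
    summing to k; dividing by k gives the quantized probability vectors."""
    if n == 1:
        yield (k,)
        return
    for first in range(k + 1):
        for rest in quantized_vectors(n - 1, k - first):
            yield (first,) + rest

vecs = list(quantized_vectors(3, 4))
print(len(vecs))                                    # 15 = C(4+3-1, 3-1)
print([tuple(m / 4 for m in v) for v in vecs[:2]])  # first two probability vectors
```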

WORK PARTITIONING METHODS

Partition the quantized vector space H, of cardinality |H| = C(n+k−1, n−1), into subspaces H_1, …, H_m such that the cardinality of H_j is approximately |H| / m for all j.

Two partitioning methods:
- interleaving method
- segmentation method

OVERVIEW OF INTERLEAVING METHOD

Facts:
- it is known how to enumerate recursively all vectors in H
- it is, however, hard to map an index i directly to the associated vector

Solution (interleaving): let I be the index set of H; define I_j = { i є I : i mod m = j } and assign the vectors with indices in I_j to processor j. Example: m = 2 splits H into the even-indexed and odd-indexed vectors.

Features:
- very simple to implement by skipping
- skipping is wasted work that slows down the algorithm
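A sketch of the interleaving method, with a stand-in enumeration of H; the skip-by-m loop makes the wasted work explicit:

```python
from itertools import product

def all_vectors(n, k):
    """Stand-in for the recursive enumeration of H: length-n non-negative
    integer vectors summing to k, produced in a fixed order."""
    return (v for v in product(range(k + 1), repeat=n) if sum(v) == k)

def minimize_interleaved(f, n, k, m, j):
    """Processor j walks the full enumeration but evaluates f only at
    indices i with i % m == j; the skipped indices are the wasted work."""
    best = None
    for i, v in enumerate(all_vectors(n, k)):
        if i % m != j:
            continue  # skipping: still enumerated, but assigned to another processor
        if best is None or f(v) < f(best):
            best = v
    return best

f = lambda v: sum((x - 1) ** 2 for x in v)
print([minimize_interleaved(f, 3, 4, 2, j) for j in range(2)])  # m = 2 processors
```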

OVERVIEW OF SEGMENTATION METHOD

Notify each processor j of the starting vector for its subspace H_j (subspaces H_1, H_2, …, H_m). A mapping connects each vector index to the corresponding vector, for enumeration and computation of optimal vectors. This converts operations in the vector domain to operations in the integer domain:
- the indices of the first and last vectors in each subspace are easy to find
- vectors can be readily enumerated via their indices
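The slides realize this index-to-vector mapping through similarity classes and circulants; the sketch below uses a simpler lexicographic unranking instead (not the authors' mapping) to illustrate the same idea, namely that integer arithmetic on indices can replace vector-domain enumeration:

```python
from math import comb

def count(k, n):
    """Number of length-n non-negative integer vectors summing to k."""
    return comb(k + n - 1, n - 1)

def unrank(i, n, k):
    """Map index i (0-based, lexicographic order) directly to the i-th
    vector, without enumerating its predecessors."""
    vec, remaining = [], k
    for slots_left in range(n - 1, 0, -1):  # positions remaining after this one
        value = 0
        while True:
            block = count(remaining - value, slots_left)  # completions if we pick `value`
            if i < block:
                break
            i -= block
            value += 1
        vec.append(value)
        remaining -= value
    vec.append(remaining)
    return tuple(vec)

# Starting vector of each of m = 4 subspaces of H for n = 3, k = 4 (|H| = 15),
# computed from the subspace's first index alone.
total, m = count(4, 3), 4
print([unrank(j * total // m, 3, 4) for j in range(m)])
```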

DIGRESSION: CIRCULANT MATRICES

Definition: a circulant matrix is one composed of vectors in which each row is obtained by a circular right shift of the previous row.

Properties:
- columns can also be formed by circular shifts
- every row can be generated from the first row or first column

Example: the basic circulant (the cyclic-shift permutation matrix), here of order 4:

    0 1 0 0
    0 0 1 0
    0 0 0 1
    1 0 0 0
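A small sketch of this definition; circulant builds a matrix from its first row by repeated circular right shifts (the helper name is illustrative, and numpy is assumed for brevity):

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix: row r is the first row circularly
    right-shifted r times, so every row (and column) is a cyclic shift."""
    n = len(first_row)
    return np.array([np.roll(first_row, r) for r in range(n)])

print(circulant([0, 1, 0, 0]))  # the 4x4 basic circulant (cyclic-shift matrix)
print(circulant([3, 1, 1, 0]))  # a circulant built from a quantized vector (n = 4, k = 5)
```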

DIGRESSION: SIMILARITY CLASSES

Definition: a similarity class is a set of circulant matrices which have the same elements, irrespective of their positions.

Example: n = 5, k = 5 (the slide shows two classes, S_1 and S_2; the matrices did not survive extraction).

SEGMENTATION ALGORITHM

(slide diagram) The algorithm proceeds in three steps:
- mapping: vector index i is mapped to its similarity class S_t and circulant C_s
- enumeration: the vector v_i associated with circulant C_s is enumerated
- assignment: vectors v_i are assigned to subspace H_j

MAPPING INDICES TO CIRCULANTS

The number of vectors in a similarity class is calculated via the multinomial formula

    |S_t| = n! / (m_1! m_2! … m_d!)

where
- m_j : multiplicity of the element j in the vector
- d : number of distinct elements in the vector

Example: for n = 5, k = 5 and, say, the vector (3, 2, 0, 0, 0): |S_t| = 5! / (1! · 1! · 3!) = 20 (the slide's own example vector did not survive extraction).
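A sketch of this count, assuming the multinomial reconstruction above:

```python
from math import factorial
from collections import Counter

def class_size(vector):
    """|S_t| = n! / (m_1! m_2! ... m_d!), with m_j the multiplicity of each
    distinct element of the vector (zeros included)."""
    size = factorial(len(vector))
    for multiplicity in Counter(vector).values():
        size //= factorial(multiplicity)
    return size

print(class_size((3, 2, 0, 0, 0)))  # 5! / (1! * 1! * 3!) = 20
```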

MAPPING INDICES TO CIRCULANTS (Cont.)

By the mapping, a vector v_i is classified into similarity class S_t if its index satisfies

    |S_1| + … + |S_{t−1}| < i ≤ |S_1| + … + |S_t|

and, within S_t, it is identified as belonging to a circulant matrix C_s.

VECTOR ENUMERATION WITHIN CIRCULANTS

Let d_t be a decomposition vector at quantization level k, such that its elements sum to k. Example: k = 5:

    d_0 = {5}; d_1 = {4, 1}; d_2 = {3, 2}; d_3 = {3, 1, 1}; d_4 = {2, 2, 1}; d_5 = {2, 1, 1, 1}; d_6 = {1, 1, 1, 1, 1}
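A sketch of this enumeration; decompositions(k) yields the integer partitions of k in the slide's order (the function name is illustrative):

```python
def decompositions(k, max_part=None):
    """Enumerate the decomposition vectors at quantization level k:
    non-increasing tuples of positive integers summing to k."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for part in range(min(k, max_part), 0, -1):
        for rest in decompositions(k - part, part):
            yield (part,) + rest

print(list(decompositions(5)))
# [(5,), (4, 1), (3, 2), (3, 1, 1), (2, 2, 1), (2, 1, 1, 1), (1, 1, 1, 1, 1)]
```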

VECTOR ENUMERATION WITHIN CIRCULANTS (Cont.)

The j-th element of the vector is defined by a formula (not recoverable from the transcript) in terms of:
- c(t) : number of elements in the t-th decomposition vector
- σ : sum of the nonzero off-diagonal elements
- δ_{ij} : Kronecker's delta (= 1 if i = j; = 0 otherwise)
- d_{t,u} : u-th element of the t-th decomposition vector
- b_{i,j} : (i, j)-th element of the basic circulant matrix

SUMMARY OF SEGMENTATION METHOD

(slide diagram) The method chains four constructions:
- decomposition for parameters n and k
- construction of circulants C_s
- construction of similarity classes S_t
- formation of subspaces H_j

In the diagram, similarity classes S_1, …, S_7 are grouped into subspaces H_1, …, H_4.

FEATURES OF SEGMENTATION ALGORITHM

Versatile:
- arithmetic on vector indices i permits fast enumeration of the vectors in each subspace
- the method may also be used effectively as an alternative to the interleaving method

Fast:
- the algorithm can quickly enumerate the first and all subsequent vectors belonging to each subspace, directly from their vector indices

Space saving:
- since a vector belonging to a subspace can be enumerated as and when needed, no storage is required