Matrix Representations of Knot and Link Groups
Jess, May 2006


Knot Theory

Knot theory is the study of mathematical knots. A knot is a simple closed polygonal curve in R^3. Usually we think of a knot as smooth, imagining that there are so many edges that we can no longer see the vertices. A link is a finite union of disjoint knots; thus a knot is a link with only one component. Two knots or links are equivalent if one can be deformed into the other by a continuous deformation of R^3, so a knot or link is really an equivalence class of such curves.

A major problem in knot theory is distinguishing between different knots and links. We can perform the deformations described above, but it is not obvious whether two links are equivalent without a lot of work. We therefore look for invariants that are independent of the diagram (our method of drawing links): an invariant takes the same value on every diagram of the same link. Such invariants are useful for showing that two links are different, though they usually cannot show that two links are the same.

The Problem

Two well-known and frequently used link invariants are the fundamental group of the link and the Alexander polynomial. George de Rham found that, for knots, the representations of the fundamental group into 2 x 2 upper triangular complex matrices with determinant one are described exactly by the roots of the Alexander polynomial. I set out to extend this result to links and the multivariable Alexander polynomial.

Acknowledgments

My advisor, Professor Hoste, has been invaluable: he suggested an amazing project and has put up with me throughout. Thanks also to my reader, Professor Su, for being patient with me.

Fundamental Group

To find the Dehn presentation of the fundamental group of a link, we first label each region with r_i, as in the Alexander polynomial construction.
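The relation-reading procedure above can be sketched in a few lines of Python. The encoding of a crossing as a tuple of region labels (k, l, p, q) is my own assumption, chosen so that two hypothetical crossings reproduce the Hopf link relations worked out in the example later in this document; the slide's actual diagrams are not reproduced here.

```python
def dehn_relation(k, l, p, q):
    """Dehn relation at a crossing: the left region times the inverse of
    the right region agrees across the under-crossing strand, giving
    r_k r_q^{-1} = r_l r_p^{-1}."""
    return f"r{k} r{q}^-1 = r{l} r{p}^-1"

# Hypothetical region labels for the two crossings of a Hopf link
# diagram (r0 = outer region); labels chosen to match the example
# relations in the text.
hopf_crossings = [(0, 1, 2, 3), (0, 3, 2, 1)]

relations = [dehn_relation(*c) for c in hopf_crossings]
for rel in relations:
    print(rel)
# r0 r3^-1 = r1 r2^-1
# r0 r1^-1 = r3 r2^-1
```

From here one would set r0 equal to the identity, as the convention below prescribes, and simplify the remaining relations by hand.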
Then, looking at each crossing individually, we find the relation by marking the left side of the under-crossing strand and setting the left region times the inverse of the right region equal across the crossing. For example, at the ith crossing (depicted in the Alexander polynomial section) we get the relation r_k r_q^{-1} = r_l r_p^{-1}. Thus the regions are the generators, and the relations correspond to the crossings. In this presentation we use the convention that the outside region, r_0, is the identity element. We also find that at least one of the remaining generators is always redundant.

Representation

Our representation takes elements of the fundamental group to a matrix group G: the group of 2 x 2 upper triangular complex matrices of determinant one. So every element has the form

    [ m   a  ]
    [ 0  1/m ]

with m and a complex numbers and m nonzero. First, for a region r_i of degree d(r_i) = (d_1, ..., d_m), we define t^{d(r_i)} to be the product t_1^{d_1} · · · t_m^{d_m}: one complex number t_j for each component, raised to the degree associated with that component. All nonabelian representations of the fundamental group of a link into G must satisfy certain constraints, where the entries are taken to be complex numbers.

My Theorem

Theorem: The nonabelian representations ρ of a link group into G correspond to the roots of the multivariable Alexander polynomial Δ(t_i).

Proof (sketch): First consider the Alexander polynomial. Any crossing i gives the ith row of the Alexander matrix. Similarly, in the fundamental group of the link we can once again look at the ith crossing. Every image ρ(r_j) must take the form

    [ t^{d(r_j)}    R_j     ]
    [ 0         t^{-d(r_j)} ]

with j indicating the region, as discussed above. At the ith crossing we get the relation r_k r_l^{-1} = r_p r_q^{-1}. Applying ρ to this relation, the diagonal entries always agree; so the representation works precisely when the upper right entries are equal.
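The claim that G really is a group (closed under products and inverses, always determinant one) can be checked symbolically. A minimal sympy sketch, assuming only the displayed matrix shape:

```python
import sympy as sp

m1, a1, m2, a2 = sp.symbols('m1 a1 m2 a2', nonzero=True)

def G_element(m, a):
    # General element of G: 2 x 2 upper triangular, determinant one.
    return sp.Matrix([[m, a], [0, 1/m]])

A = G_element(m1, a1)
B = G_element(m2, a2)

# Determinant one, as required.
assert sp.simplify(A.det()) == 1

# Closure: the product is again upper triangular with determinant one.
P = sp.simplify(A * B)
assert P[1, 0] == 0 and sp.simplify(P.det()) == 1

# Inverses stay in G as well: (m, a) inverts to (1/m, -a).
assert sp.simplify(A.inv() - G_element(1/m1, -a1)) == sp.zeros(2, 2)
```

The inverse formula (1/m, -a) is what makes the crossing relations below reduce to linear conditions on the upper right entries.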
So

    R_k t_v − R_p t_v + R_q − R_l = 0.

Looking closely at this reveals that every crossing results in a row of the Alexander matrix, so the relations assemble into a matrix equation on the vector of the R_i. So, to find a nonabelian representation we need values of the t_i and R_i with not all R_i = 0. This is the matrix that gives us the Alexander polynomial, but here we have created it by applying our representation to the fundamental group of the link. To get a nontrivial solution to this matrix equation we must be able to eliminate two columns, matching the construction of the multivariable Alexander polynomial, which gives us linear independence. We know by the construction of the Dehn presentation that R_0 always equals 0, because r_0 maps to the identity matrix; this eliminates one column. The second column can be eliminated by conjugation. So nontrivial solutions are roots of the determinant of this matrix, which is the Alexander polynomial, and the t_i must be nonzero roots of the multivariable Alexander polynomial. Furthermore, each R_i is determined by the kernel of the matrix, so we can fully describe our representation.

Alexander Polynomial

To construct the Alexander polynomial, label the regions of the diagram with r_i, starting with r_0 in the region surrounding the knot. Then arbitrarily label the crossings 1 through n and the components K_1 through K_m. Each crossing gives a relation, and together the relations produce a matrix whose rows correspond to crossings and whose columns correspond to regions: an n × (n + 2) matrix. The relation from the ith crossing is

    t_j r_k − t_j r_l + r_p − r_q,

where the under-crossing strand is part of the jth component. Note that the regions r_k and r_l always lie to the left of the under-crossing strand, with r_l situated counterclockwise from r_k. We can then delete any two columns that represent regions separated by one strand, h.
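The key step above, that the crossing relation becomes a linear condition on the upper right entries, can be verified symbolically. In this hedged sketch I abbreviate t^{d(r_i)} as T_i, so ρ(r_i) = [[T_i, R_i], [0, 1/T_i]]; reducing the resulting condition to the exact row R_k t_v − R_p t_v + R_q − R_l uses the degree relations among the four regions at a crossing, which I do not encode here.

```python
import sympy as sp

Tk, Tl, Tp, Tq = sp.symbols('T_k T_l T_p T_q', nonzero=True)
Rk, Rl, Rp, Rq = sp.symbols('R_k R_l R_p R_q')

def rho(T, R):
    # Image of a region generator: diagonal t^{d(r_i)}, upper right R_i.
    return sp.Matrix([[T, R], [0, 1/T]])

# Apply the representation to both sides of r_k r_l^{-1} = r_p r_q^{-1}.
lhs = sp.simplify(rho(Tk, Rk) * rho(Tl, Rl).inv())
rhs = sp.simplify(rho(Tp, Rp) * rho(Tq, Rq).inv())

# The off-diagonal condition is linear in the R_i.
condition = sp.expand(lhs[0, 1] - rhs[0, 1])
assert condition == sp.expand(Rk*Tl - Rl*Tk - Rp*Tq + Rq*Tp)

# The diagonal entries agree exactly when T_k/T_l = T_p/T_q, which the
# degree rules at a crossing guarantee.
assert sp.simplify(lhs[0, 0] - Tk/Tl) == 0
```

Collecting one such linear condition per crossing is exactly how the Alexander matrix reappears from the representation.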
The Alexander polynomial is the determinant of our new matrix divided by (t_h − 1). This process produces the multivariable Alexander polynomial, denoted Δ(t_1, ..., t_m). The multivariable polynomial is invariant up to multiplication by ±t_j^{±1}, depending on the choice of columns to delete and the choice of diagram.

Example: For the Hopf link we initially get two relations and four generators:

    r_0 r_3^{-1} = r_1 r_2^{-1}
    r_0 r_1^{-1} = r_3 r_2^{-1}.

Setting r_0 = 1 gives

    r_3^{-1} = r_1 r_2^{-1}
    r_1^{-1} = r_3 r_2^{-1},

and the first relation gives us r_1 = r_3^{-1} r_2. The second gives r_1 = r_2 r_3^{-1}, so equating the two expressions for r_1 we present the group as ⟨ r_2, r_3 | r_3^{-1} r_2 = r_2 r_3^{-1} ⟩; that is, r_2 and r_3 commute.

Degree

We can describe the regions of a link diagram as having a degree, notated d(r_i) = (d_1, ..., d_m): an m-tuple, where m is the total number of components, defined by the number of times each component is crossed. We define the outer region, r_0, to have degree 0 in all components. To find the degree of another region: each time we pass from the right of a strand to its left, we add one to that component's degree; if we pass from left to right, we subtract one.

For the three-component link in the slide's diagram, the degrees of the regions are:

    d(r_0) = (0, 0, 0);   d(r_1) = (−1, 0, 0);
    d(r_2) = (0, −1, 0);  d(r_3) = (0, 0, −1);
    d(r_4) = (−1, −1, 0); d(r_5) = (−1, 0, −1);
    d(r_6) = (0, −1, −1); d(r_7) = (−1, −1, −1).
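The degree bookkeeping can be phrased as a tiny helper. This is a sketch under my own encoding (components indexed from 0; the particular strand-crossing sequence is an assumption, since the slide's figure is not reproduced here):

```python
def cross_strand(deg, component, right_to_left):
    """Update a region degree tuple on passing across one strand:
    right-to-left adds 1 to that component's degree, left-to-right
    subtracts 1 (the convention in the text)."""
    d = list(deg)
    d[component] += 1 if right_to_left else -1
    return tuple(d)

# Outer region of a 3-component link has degree (0, 0, 0).
r0 = (0, 0, 0)

# Passing left-to-right across a strand of the first component (index 0)
# gives (-1, 0, 0), consistent with d(r_1) in the example above; crossing
# the second component the same way gives (-1, -1, 0), matching d(r_4).
r1 = cross_strand(r0, 0, right_to_left=False)
r4 = cross_strand(r1, 1, right_to_left=False)
print(r1, r4)  # (-1, 0, 0) (-1, -1, 0)
```

Because every closed loop in the diagram crosses each strand left-to-right as often as right-to-left, the degree of a region is independent of the path chosen from r_0.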