1
Upsampling through Transposed Convolution and Max Unpooling
2
Convolution Overview
3
Convolution example: No Padding
i = 4, k = 3, s = 1, padding p = 0
Output size: o = (i - k) + 1
In this example: o = (4 - 3) + 1 = 2
[Figure: a 3x3 kernel sliding over a 4x4 input, producing a 2x2 output]
4
Convolution example: Half Padding
i = 5, k = 3, s = 1, padding p = 1
Output size: o = (i - k) + 2p + 1
In this example: o = (5 - 3) + 2 + 1 = 5, the same as the input size, which is why it is also called "same" padding
[Figure: a 3x3 kernel sliding over a 5x5 input padded with one ring of zeros]
5
Convolution example: Full Padding
i = 5, k = 3, s = 1, padding p = 2
Output size: o = (i - k) + 2p + 1
In this example: o = (5 - 3) + 4 + 1 = 7
[Figure: a 3x3 kernel sliding over a 5x5 input padded with two rings of zeros]
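The three output-size formulas above can be checked with a small NumPy sketch. The `conv2d` helper below is a naive stride-1 implementation written only for this illustration, not a library function:

```python
import numpy as np

def conv2d(x, k, pad=0):
    """Naive stride-1 2D convolution (cross-correlation) with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = (x[r:r+kh, c:c+kw] * k).sum()
    return out

k = np.ones((3, 3))
print(conv2d(np.ones((4, 4)), k, pad=0).shape)  # (2, 2): no padding
print(conv2d(np.ones((5, 5)), k, pad=1).shape)  # (5, 5): half/same padding
print(conv2d(np.ones((5, 5)), k, pad=2).shape)  # (7, 7): full padding
```

Each shape matches o = (i - k) + 2p + 1 for the corresponding slide.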
6
Convolution as a matrix operation
Convolution can be expressed as a matrix operation. The convolution matrix "C" is constructed from the kernel. The output is obtained by multiplying "C" with the flattened input (a column vector) and then reshaping the result.
7
Flattening Input
8
Obtaining "C"
The 3x3 kernel:
w_{0,0} w_{0,1} w_{0,2}
w_{1,0} w_{1,1} w_{1,2}
w_{2,0} w_{2,1} w_{2,2}
Slide it over the input: each position of the kernel over the input gives one row of "C".
9
Convolution as matrix multiplication
Output = C . x, where x is the flattened input feature map.
From the previous two slides, for the convolution with i = 4, k = 3, s = 1:
C is a 4x16 matrix and x is a 16x1 column vector, so the output is a 4x1 vector.
Reshape the output to 2x2.
10
Reshaping output
[Figure: the 4x1 output vector (1, 2, 3, 4) reshaped into a 2x2 feature map]
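The whole pipeline on slides 7-10 can be sketched in NumPy. This is a minimal illustration, assuming a random 4x4 input and 3x3 kernel; the construction of C (one row per output position) follows the "slide it over the input" description above:

```python
import numpy as np

i, ks = 4, 3
o = i - ks + 1                        # output side = 2
rng = np.random.default_rng(0)
kernel = rng.standard_normal((ks, ks))
x = rng.standard_normal((i, i))

# One row of C per output position: kernel weights placed at the
# input positions that the kernel covers, zeros everywhere else.
C = np.zeros((o * o, i * i))
for r in range(o):
    for c in range(o):
        patch = np.zeros((i, i))
        patch[r:r+ks, c:c+ks] = kernel
        C[r * o + c] = patch.ravel()

out = (C @ x.ravel()).reshape(o, o)   # 4x1 result reshaped to 2x2

# Direct sliding-window convolution for comparison
direct = np.zeros((o, o))
for r in range(o):
    for c in range(o):
        direct[r, c] = (x[r:r+ks, c:c+ks] * kernel).sum()

print(np.allclose(out, direct))  # True
```

The matrix form and the sliding-window form compute exactly the same values; C is just a sparse rearrangement of the kernel weights.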
11
Pooling
12
Pooling
Pooling is a down-sampling operation.
It summarizes subregions, losing information, so it is not invertible.
The most common pooling functions are max-pooling and average-pooling.
13
Pooling Examples: Max-Pooling
[Figure: max-pooling over a feature map, keeping the maximum of each subregion]
14
Pooling Examples: Average-Pooling
[Figure: average-pooling over a feature map, averaging the values of each subregion]
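Both pooling variants can be sketched with a single NumPy helper. `pool2d` is illustrative only; it assumes non-overlapping windows (stride equal to the window size), which is the common case shown on these slides:

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling with window `size` (stride == size)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    blocks = x[:h*size, :w*size].reshape(h, size, w, size)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))

x = np.array([[ 1.,  2.,  3.,  4.],
              [ 5.,  6.,  7.,  8.],
              [ 9., 10., 11., 12.],
              [13., 14., 15., 16.]])
print(pool2d(x, 2, "max"))   # [[ 6.  8.] [14. 16.]]
print(pool2d(x, 2, "mean"))  # [[ 3.5  5.5] [11.5 13.5]]
```

Note how each 2x2 subregion collapses to a single number: the original values inside the block cannot be recovered, which is the non-invertibility mentioned above.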
15
Transposed Convolution
16
Transposed Convolution
It is the process of going in the opposite direction of a normal convolution.
It is also called deconvolution or fractionally strided convolution.
Mathematically, it is expressed with the transpose of the convolution matrix, "C^T".
Note that transposed convolution is NOT guaranteed to recover the original image; it returns a feature map that has the same dimensions as the original input.
17
Example
Consider the convolution matrix "C" on slide 7 and the flattened output of the convolution on slide 9.
Output = C^T . o', where C^T is a 16x4 matrix and o' is a 4x1 column vector.
The output is a 16x1 vector; reshape it to the shape of the input (4x4).
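This example can be sketched directly: build the same 4x16 matrix C as before, then multiply its transpose by a flattened 2x2 feature map. The random values are illustrative; only the shapes matter here:

```python
import numpy as np

i, ks = 4, 3
o = i - ks + 1                        # 2
rng = np.random.default_rng(1)
kernel = rng.standard_normal((ks, ks))

# Convolution matrix C (4x16), built as on the earlier slides
C = np.zeros((o * o, i * i))
for r in range(o):
    for c in range(o):
        patch = np.zeros((i, i))
        patch[r:r+ks, c:c+ks] = kernel
        C[r * o + c] = patch.ravel()

y = rng.standard_normal((o * o, 1))   # a flattened 2x2 feature map (4x1)
up = C.T @ y                          # (16x4) @ (4x1) -> 16x1
print(up.reshape(i, i).shape)         # (4, 4): same shape as the original input
```

The result has the input's dimensions, but it is not the original input: C^T is the transpose, not the inverse, of C.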
18
It is always possible to emulate a transposed convolution with a direct convolution using the same kernel, given the padding, kernel size, and stride of the original convolution. To do this, we manipulate the input of the transposed convolution (which is the output of the original convolution). This emulation is only for understanding: implementing transposed convolution via the matrix C^T is inefficient because the matrix is sparse. In practice, it is implemented using the backward pass of the original convolution.
19
Transposed convolution as direct convolution
20
No padding, s = 1, transposed
A convolution described by s = 1, p = 0 and k has an associated transposed convolution described by s' = s, k' = k and p' = k - 1, and its output size is o' = i' + (k - 1)
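The relationship above can be verified numerically: a 4x4 input convolved with a 3x3 kernel (no padding) gives i' = 2, and the transposed convolution is emulated by fully padding that 2x2 map with p' = k - 1 = 2 and convolving directly. The `conv2d` helper is again a naive sketch, not a library call:

```python
import numpy as np

def conv2d(x, k, pad=0):
    """Naive stride-1 2D convolution with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    out = np.zeros((x.shape[0]-kh+1, x.shape[1]-kw+1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (x[r:r+kh, c:c+kw] * k).sum()
    return out

k = 3
i_prime = 2                               # output size of the original 4x4 conv
y = np.ones((i_prime, i_prime))
up = conv2d(y, np.ones((k, k)), pad=k-1)  # p' = k - 1 = 2 (full padding)
print(up.shape)                           # (4, 4) = i' + (k - 1)
```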
21
Example: the transposed convolution expressed as a direct convolution
[Figure: a 3x3 kernel convolving a 2x2 input padded with two rings of zeros (full padding), producing a 4x4 output]
22
Zero padding, s = 1, transposed
A convolution described by s = 1, p and k has an associated transposed convolution described by s' = s, k' = k and p' = k - p - 1, and its output size is o' = i' + (k - 1) - 2p
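This covers the half- and full-padding examples on the next two slides; both can be checked with the same naive helper (illustrative code, not a library function):

```python
import numpy as np

def conv2d(x, k, pad=0):
    """Naive stride-1 2D convolution with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    out = np.zeros((x.shape[0]-kh+1, x.shape[1]-kw+1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (x[r:r+kh, c:c+kw] * k).sum()
    return out

k, kern = 3, np.ones((3, 3))
# Half padding (p = 1): the original conv maps 5x5 -> 5x5, so i' = 5
up_half = conv2d(np.ones((5, 5)), kern, pad=k-1-1)  # p' = k - p - 1 = 1
# Full padding (p = 2): the original conv maps 5x5 -> 7x7, so i' = 7
up_full = conv2d(np.ones((7, 7)), kern, pad=k-2-1)  # p' = k - p - 1 = 0
print(up_half.shape, up_full.shape)  # (5, 5) (5, 5): both recover the input shape
```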
23
Example: Half padding, transposed
The transposed convolution expressed as a direct convolution:
[Figure: a 3x3 kernel convolving a 5x5 input padded with p' = k - p - 1 = 1 ring of zeros, producing a 5x5 output]
24
Example: Full padding, transposed
The transposed convolution expressed as a direct convolution:
[Figure: a 3x3 kernel convolving a 7x7 input with p' = k - p - 1 = 0 padding, producing a 5x5 output]
25
Padded input, non-unit strides, transposed
A convolution described by s, p and k, whose input size i is such that i + 2p - k is a multiple of s, has an associated transposed convolution described by ĩ', k' = k, s' = 1 and p' = k - p - 1, where ĩ' is the stretched input obtained by adding s - 1 zeros between each input unit, and its output size is o' = s(i' - 1) + k - 2p
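A numeric sketch of this relationship, using i = 5, k = 3, s = 2, p = 1 as an assumed example (so the original convolution outputs i' = 3). Both helpers are written for this illustration only:

```python
import numpy as np

def conv2d(x, k, pad=0):
    """Naive stride-1 2D convolution with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    out = np.zeros((x.shape[0]-kh+1, x.shape[1]-kw+1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (x[r:r+kh, c:c+kw] * k).sum()
    return out

def stretch(x, s):
    """Insert s - 1 zeros between the units of x."""
    n = x.shape[0]
    out = np.zeros((s*(n-1)+1, s*(n-1)+1))
    out[::s, ::s] = x
    return out

i, k, s, p = 5, 3, 2, 1
o = (i + 2*p - k)//s + 1              # original conv output: 3
y = np.ones((o, o))
up = conv2d(stretch(y, s), np.ones((k, k)), pad=k-p-1)
print(up.shape)                       # (5, 5) = s*(o - 1) + k - 2*p
```

Stretching the 3x3 map gives a 5x5 ĩ'; convolving it with p' = 1 and stride 1 recovers the original 5x5 input shape.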
26
Example: Padded input with non-unit strides
The transposed convolution expressed as a direct convolution:
[Figure: a 3x3 kernel convolving a stretched 3x3 input (zeros inserted between units) padded with one ring of zeros, producing a 5x5 output]
27
Max-unpooling
28
Max-Unpooling
Max-unpooling is an upsampling procedure.
Max-pooling is a non-invertible operation, so we can only obtain an approximation of its input.
In order to retrieve this approximation, the locations of the maximum values are stored during the max-pooling operation.
The approximation is retrieved by placing each input value of the max-unpooling back at its stored location; all neighboring values are set to 0.
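The store-the-indices procedure above can be sketched as follows (illustrative NumPy code; deep-learning frameworks expose equivalent building blocks, but these helpers are written from scratch here):

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """Max-pool with non-overlapping windows, remembering each max's flat index."""
    h, w = x.shape[0]//size, x.shape[1]//size
    pooled = np.zeros((h, w))
    idx = np.zeros((h, w), dtype=int)
    for r in range(h):
        for c in range(w):
            block = x[r*size:(r+1)*size, c*size:(c+1)*size]
            br, bc = np.unravel_index(block.argmax(), block.shape)
            pooled[r, c] = block[br, bc]
            idx[r, c] = (r*size+br) * x.shape[1] + (c*size+bc)
    return pooled, idx

def max_unpool(pooled, idx, shape):
    """Place each value back at its stored location; everything else is 0."""
    out = np.zeros(shape).ravel()
    out[idx.ravel()] = pooled.ravel()
    return out.reshape(shape)

x = np.array([[1., 9., 2., 3.],
              [4., 5., 6., 8.],
              [0., 7., 1., 2.],
              [3., 1., 4., 0.]])
pooled, idx = max_pool_with_indices(x)
restored = max_unpool(pooled, idx, x.shape)
print(restored)  # maxima back in place, zeros elsewhere
```

Only the four maxima survive the round trip; the other twelve values are lost, which is exactly the approximation described above.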
29
Max-unpooling example
30
Additional Materials
31
Padded input, non-unit strides (odd), transposed
A convolution described by s, p and k, whose input size i is such that i + 2p - k is NOT a multiple of s, has an associated transposed convolution described by a, ĩ', k' = k, s' = 1 and p' = k - p - 1, where ĩ' is the stretched input obtained by adding s - 1 zeros between each input unit, and a = (i + 2p - k) mod s is the number of zeros added to the bottom and right edges of the input, and its output size is o' = s(i' - 1) + a + k - 2p
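As a numeric sketch, assume i = 6, k = 3, s = 2, p = 1, so a = (6 + 2 - 3) mod 2 = 1 and the original convolution outputs a 3x3 map. The helpers below are illustrative only:

```python
import numpy as np

def conv2d(x, k, pad=0):
    """Naive stride-1 2D convolution with zero padding."""
    x = np.pad(x, pad)
    kh, kw = k.shape
    out = np.zeros((x.shape[0]-kh+1, x.shape[1]-kw+1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = (x[r:r+kh, c:c+kw] * k).sum()
    return out

def stretch(x, s):
    """Insert s - 1 zeros between the units of x."""
    n = x.shape[0]
    out = np.zeros((s*(n-1)+1, s*(n-1)+1))
    out[::s, ::s] = x
    return out

i, k, s, p = 6, 3, 2, 1
a = (i + 2*p - k) % s                 # a = 1 extra zero on bottom and right
o = (i + 2*p - k)//s + 1              # original conv output: 3
pp = k - p - 1                        # p' = 1
z = np.pad(stretch(np.ones((o, o)), s), ((pp, pp + a), (pp, pp + a)))
up = conv2d(z, np.ones((k, k)))
print(up.shape)                       # (6, 6) = s*(o - 1) + a + k - 2*p
```

The asymmetric padding (pp on top/left, pp + a on bottom/right) is what recovers the even input size that the strided convolution had discarded.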
32
Example: Padded input with non-unit strides(odd)
The transposed convolution expressed as a direct convolution:
[Figure: a 3x3 kernel convolving a stretched input with one extra row and column of zeros added to the bottom and right, producing a 6x6 output]
33
Dilated Convolution
Inflates the kernel by inserting spaces between its elements.
The dilation rate "d" is a hyper-parameter: d - 1 spaces are inserted between kernel elements.
d = 1 corresponds to the normal convolution.
Dilation is used to cheaply increase the receptive field without increasing the kernel size.
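Kernel inflation can be sketched directly: insert d - 1 zeros between kernel elements, so a 3x3 kernel with d = 2 covers a 5x5 region while keeping nine weights. The helper is illustrative only:

```python
import numpy as np

def dilate_kernel(k, d):
    """Inflate a kernel with dilation rate d (d - 1 zeros between elements)."""
    n = k.shape[0]
    out = np.zeros((d*(n-1)+1, d*(n-1)+1))
    out[::d, ::d] = k
    return out

k = np.ones((3, 3))
print(dilate_kernel(k, 2).shape)  # (5, 5): receptive field grows, 9 weights remain
print(dilate_kernel(k, 1).shape)  # (3, 3): d = 1 is the normal convolution
```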
34
Example of Dilated Convolution