Upsampling through Transposed Convolution and Max Unpooling
Convolution Overview
Convolution Examples: No Padding
i = 4, k = 3, s = 1 and p = 0
output = (i − k) + 1
In this example: output size = 2
Convolution Examples: Half Padding
i = 5, k = 3, s = 1 and p = 1
output = (i − k) + 2p + 1
In this example: output size = i = 5
It is also called Same Padding
Convolution Examples: Full Padding
i = 5, k = 3, s = 1 and p = 2
output = (i − k) + 2p + 1
In this example: output size = 7
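The three padding regimes above all follow one output-size formula. A minimal sketch (the function name `conv_output_size` is illustrative, not from the slides), using the general rule o = floor((i − k + 2p) / s) + 1:

```python
# Output size of a square 2D convolution, assuming the standard formula
# o = floor((i - k + 2p) / s) + 1 (reduces to the slide formulas when s = 1).
def conv_output_size(i, k, s=1, p=0):
    return (i - k + 2 * p) // s + 1

print(conv_output_size(4, 3, s=1, p=0))  # no padding        -> 2
print(conv_output_size(5, 3, s=1, p=1))  # half/same padding -> 5
print(conv_output_size(5, 3, s=1, p=2))  # full padding      -> 7
```

With half (same) padding, p = (k − 1) / 2 for odd k, the output size equals the input size.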
Convolution as a Matrix Operation
Convolution can be expressed as a matrix operation.
The convolution matrix "C" is constructed from the kernel.
The output is obtained by multiplying "C" with the flattened input column vector and then reshaping the result.
Flattening Input
Obtaining "C"
The 3×3 kernel with weights w(0,0) … w(2,2) is slid over the input; each kernel position yields one row of C.
Convolution as Matrix Multiplication
output = C · i^T, where i is the flattened input feature map.
From the previous two slides, for the convolution on i = 4 with k = 3, s = 1:
we get the output as a 4×1 vector.
Reshape the output.
Reshaping Output
The 4×1 output vector [1 2 3 4]^T is reshaped into a 2×2 feature map.
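The flatten–multiply–reshape pipeline can be verified numerically. A minimal NumPy sketch (the helper name `conv_matrix` and the example values are illustrative): it builds C for a 3×3 kernel over a 4×4 input and checks the matrix product against a direct sliding-window computation.

```python
import numpy as np

def conv_matrix(w, i):
    """Build the (o*o, i*i) convolution matrix C for a k x k kernel w
    slid over an i x i input with stride 1 and no padding."""
    k = w.shape[0]
    o = i - k + 1
    C = np.zeros((o * o, i * i))
    for r in range(o):
        for c in range(o):
            patch = np.zeros((i, i))
            patch[r:r + k, c:c + k] = w   # kernel weights at this position
            C[r * o + c] = patch.flatten()  # one row of C per position
    return C

w = np.arange(1., 10.).reshape(3, 3)   # example 3x3 kernel
x = np.arange(16.).reshape(4, 4)       # example 4x4 input

C = conv_matrix(w, 4)                  # C is 4x16
y = (C @ x.flatten()).reshape(2, 2)    # flatten, multiply, reshape

# Sanity check against a direct sliding-window computation
direct = np.array([[(w * x[r:r + 3, c:c + 3]).sum() for c in range(2)]
                   for r in range(2)])
print(np.allclose(y, direct))  # True
```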
Pooling
Pooling
Pooling is a down-sampling operation.
It summarizes subregions (losing information, so the operation is not invertible).
The most common pooling functions are max-pooling and average-pooling.
Pooling Examples: Max-Pooling
Pooling Examples: Average-Pooling
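Both pooling variants can be sketched in a few lines of NumPy (the helper name `pool2d` and the example values are illustrative; non-overlapping windows are assumed, i.e. stride equals window size):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling; the input side length must divide by `size`."""
    h, w = x.shape
    # Split into (size x size) blocks, then reduce each block
    blocks = x.reshape(h // size, size, w // size, size).swapaxes(1, 2)
    return blocks.max(axis=(2, 3)) if mode == "max" else blocks.mean(axis=(2, 3))

x = np.array([[ 1.,  2.,  3.,  4.],
              [ 5.,  6.,  7.,  8.],
              [ 9., 10., 11., 12.],
              [13., 14., 15., 16.]])
print(pool2d(x, mode="max"))  # [[ 6.  8.] [14. 16.]]
print(pool2d(x, mode="avg"))  # [[ 3.5  5.5] [11.5 13.5]]
```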
Transposed Convolution
Transposed Convolution
It is the process of going in the opposite direction of convolution.
It is also called deconvolution or fractionally strided convolution.
Mathematically, it is expressed using the transpose of the convolution matrix, "C^T".
Note that transposed convolution does NOT guarantee recovery of the original image; it only returns a feature map with the same dimensions as the original.
Example
Consider the convolution matrix "C" on slide 7 and the flattened convolution output on slide 9.
Output = C^T · i′, where C^T is a 16×4 matrix and i′ is a 4×1 column vector.
The output is a 16×1 vector; reshape it to the shape of the input.
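The C^T upsampling step can be checked numerically. A minimal NumPy sketch (the helper name `conv_matrix` and the example values are illustrative): C is built for a 3×3 kernel over a 4×4 input, and multiplying C^T by a flattened 2×2 output maps it back to the 4×4 input shape.

```python
import numpy as np

def conv_matrix(w, i):
    """Convolution matrix C for a k x k kernel over an i x i input (s=1, p=0)."""
    k = w.shape[0]
    o = i - k + 1
    C = np.zeros((o * o, i * i))
    for r in range(o):
        for c in range(o):
            patch = np.zeros((i, i))
            patch[r:r + k, c:c + k] = w
            C[r * o + c] = patch.flatten()
    return C

w = np.arange(1., 10.).reshape(3, 3)
C = conv_matrix(w, 4)               # 4x16, so C^T is 16x4
y = np.array([1., 2., 3., 4.])      # flattened 2x2 convolution output
up = (C.T @ y).reshape(4, 4)        # 16x1 vector, reshaped to the input shape
print(up.shape)  # (4, 4)
```

Note the values of `up` are not the original input; only the shape is recovered.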
It is always possible to emulate a transposed convolution with a direct convolution using the same kernel, given the padding, kernel size, and stride of the original convolution. For this, we need to manipulate the output of the convolution (which is the input to the transposed convolution). This emulation is only for understanding transposed convolution: it is inefficient to implement because of the sparsity of the convolution matrix. In practice, it is implemented using the backpropagation of the original convolution.
Transpose convolution as direct convolution
No Padding, s = 1, Transposed
A convolution described by s = 1, p = 0 and k has an associated transposed convolution described by k′ = k, s′ = s and p′ = k − 1, and its output size is o′ = i′ + (k − 1).
Example: No Padding, Transposed
Transposed convolution can be expressed as a direct convolution.
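The emulation for the s = 1, p = 0 case can be sketched directly (helper names are illustrative): pad the input with p′ = k − 1 zeros on every side, then run an ordinary "valid" convolution. One assumption worth flagging: to reproduce C^T exactly under the cross-correlation convention used in deep learning, the kernel must be rotated 180 degrees; the size formula o′ = i′ + (k − 1) holds either way.

```python
import numpy as np

def cross_correlate(x, w):
    """'Valid' cross-correlation (the deep-learning 'convolution')."""
    k = w.shape[0]
    o = x.shape[0] - k + 1
    return np.array([[(w * x[r:r + k, c:c + k]).sum() for c in range(o)]
                     for r in range(o)])

def transposed_conv_s1_p0(y, w):
    """Emulate the transposed convolution of a (k, s=1, p=0) convolution:
    pad y with p' = k - 1 zeros, then convolve with the 180-degree-rotated
    kernel so the result matches C^T applied to the flattened y."""
    k = w.shape[0]
    return cross_correlate(np.pad(y, k - 1), w[::-1, ::-1])

w = np.arange(1., 10.).reshape(3, 3)
y = np.array([[1., 2.], [3., 4.]])     # i' = 2
out = transposed_conv_s1_p0(y, w)
print(out.shape)  # (4, 4): o' = i' + (k - 1)
```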
Zero Padding, s = 1, Transposed
A convolution described by s = 1, p and k has an associated transposed convolution described by k′ = k, s′ = s and p′ = k − p − 1, and its output size is o′ = i′ + (k − 1) − 2p.
Example: Half Padding, Transposed
Transposed convolution can be expressed as a direct convolution.
Example: Full Padding, Transposed
Transposed convolution can be expressed as a direct convolution.
Padded Input, Non-Unit Strides, Transposed
A convolution described by s, p and k, and whose input size i is such that i + 2p − k is a multiple of s, has an associated transposed convolution described by i′, k′ = k, s′ = 1 and p′ = k − p − 1, where i′ is the stretched input obtained by adding s − 1 zeros between each input unit, and its output size is o′ = s(i′ − 1) + k − 2p.
Example: Padded Input with Non-Unit Strides
Transposed convolution can be expressed as a direct convolution.
Max-unpooling
Max-Unpooling
Max-unpooling is an upsampling procedure.
Max-pooling is a non-invertible operation, so instead we obtain an approximation.
In order to retrieve this approximation, the locations of the maximum values are stored during the max-pooling operation.
The approximation is retrieved by placing each input value of the max-unpooling back at its stored location; the neighboring values are set to 0.
Max-unpooling example
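The pool-with-indices / unpool round trip can be sketched in NumPy (helper names and example values are illustrative):

```python
import numpy as np

def max_pool_with_indices(x, size=2):
    """Max-pooling that also records the argmax location in each window."""
    h, w = x.shape
    pooled = np.zeros((h // size, w // size))
    indices = np.zeros((h // size, w // size, 2), dtype=int)
    for r in range(h // size):
        for c in range(w // size):
            window = x[r * size:(r + 1) * size, c * size:(c + 1) * size]
            a, b = np.unravel_index(window.argmax(), window.shape)
            pooled[r, c] = window[a, b]
            indices[r, c] = (r * size + a, c * size + b)  # location in x
    return pooled, indices

def max_unpool(y, indices, out_shape):
    """Place each value back at its recorded location; everything else is 0."""
    out = np.zeros(out_shape)
    for r in range(y.shape[0]):
        for c in range(y.shape[1]):
            a, b = indices[r, c]
            out[a, b] = y[r, c]
    return out

x = np.array([[1., 2., 6., 3.],
              [3., 5., 2., 1.],
              [1., 2., 2., 1.],
              [7., 3., 4., 8.]])
pooled, idx = max_pool_with_indices(x)
unpooled = max_unpool(pooled, idx, x.shape)
print(pooled)    # [[5. 6.] [7. 8.]]
print(unpooled)  # maxima restored in place, zeros elsewhere
```

Note the result is an approximation of `x`: only the maxima survive the round trip.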
Additional Materials
Padded Input, Non-Unit Strides (odd), Transposed
A convolution described by s, p and k, whose input size i is such that (i + 2p − k) mod s ≠ 0, has an associated transposed convolution described by a, i′, k′ = k, s′ = 1 and p′ = k − p − 1, where i′ is the stretched input obtained by adding s − 1 zeros between each input unit, and a = (i + 2p − k) mod s is the number of zeros added to the bottom and right sides, and its output size is o′ = s(i′ − 1) + a + k − 2p.
Example: Padded Input with Non-Unit Strides (odd)
Transposed convolution can be expressed as a direct convolution.
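The size bookkeeping for the general (odd) case can be sketched as a small check (the function name is illustrative): given the forward convolution's (i, k, s, p), compute i′, the correction term a, and verify that o′ recovers i.

```python
def transposed_output_size(i, k, s, p):
    """For a forward conv on size i with (k, s, p), return (i', a, o')
    for the associated transposed convolution, per the slide formulas."""
    i_prime = (i + 2 * p - k) // s + 1   # forward-conv output size
    a = (i + 2 * p - k) % s              # extra zeros on bottom/right
    o_prime = s * (i_prime - 1) + a + k - 2 * p
    return i_prime, a, o_prime

# i = 6, k = 3, s = 2, p = 1: i + 2p - k = 5, so a = 5 mod 2 = 1
print(transposed_output_size(6, 3, 2, 1))  # (3, 1, 6) -> recovers i = 6
# Even case for comparison: a = 0 and the formula of the previous slide applies
print(transposed_output_size(5, 3, 2, 1))  # (3, 0, 5) -> recovers i = 5
```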
Dilated Convolution
The kernel is inflated by inserting spaces between its elements.
The dilation rate d is a hyper-parameter: d − 1 spaces are inserted between kernel elements, so d = 1 corresponds to normal convolution.
It is used to cheaply increase the receptive field without increasing the kernel size.
Example of Dilated Convolution
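Dilation can be visualized by explicitly building the inflated kernel (the helper name `dilate_kernel` is illustrative): inserting d − 1 zeros between elements grows the effective kernel size to d(k − 1) + 1 while the number of weights stays k².

```python
import numpy as np

def dilate_kernel(w, d):
    """Insert d - 1 zeros between kernel elements (d = 1: kernel unchanged)."""
    k = w.shape[0]
    kd = d * (k - 1) + 1        # effective kernel size
    out = np.zeros((kd, kd))
    out[::d, ::d] = w           # original weights on a strided grid
    return out

w = np.ones((3, 3))
print(dilate_kernel(w, 2).shape)  # (5, 5): receptive field grows, weights don't
```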