
1 Image Compression Using Neural Networks Vishal Agrawal (Y6541) Nandan Dubey (Y6279)

2 Overview
• Introduction to neural networks
• The back-propagation (BP) neural network
• Image compression using a BP neural network
• Comparison with existing image compression techniques

3 What is a Neural Network? An artificial neural network is a powerful data-modeling tool that can capture and represent complex input/output relationships. It can perform "intelligent" tasks similar to those performed by the human brain.

4 Neural Network Structure A neural network is an interconnected group of neurons. [Figure: A Simple Neural Network]

5 Neural Network Structure [Figure: An Artificial Neuron]

6 Activation Function Depending on the problem, a variety of activation functions are used:
• Linear activation functions, such as the step function
• Nonlinear activation functions, such as the sigmoid function

7 Typical Activation Functions F(x) = 1 / (1 + e^(-k·Σ_i w_i x_i)), shown for k = 0.5, 1 and 10. Using a nonlinear function that approximates a linear threshold allows a network to approximate nonlinear functions using only a small number of nodes.
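
A minimal sketch of this activation in Python (NumPy; the input and weight values are hypothetical):

    import numpy as np

    def activation(x, w, k=1.0):
        """F(x) = 1 / (1 + e^(-k * sum_i(w_i * x_i))) with slope parameter k."""
        return 1.0 / (1.0 + np.exp(-k * np.dot(w, x)))

    x = np.array([0.5, -0.2, 0.8])   # hypothetical inputs
    w = np.array([0.4, 0.1, 0.3])    # hypothetical weights
    for k in (0.5, 1.0, 10.0):       # the three slope values shown on the slide
        print(k, activation(x, w, k))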

8 What can a Neural Net do?
• Compute a known function
• Approximate an unknown function
• Pattern recognition
• Signal processing
• Learn to do any of the above

9 Learning Neural Networks Learning/training a neural network means adjusting the weights of the connections so that the cost function is minimized. Cost function: Ĉ = (Σ_i (x_i − x_i′)²) / N, where the x_i are the desired outputs and the x_i′ are the outputs of the neural network.
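
A minimal sketch of this cost function (Python/NumPy; the output values are hypothetical):

    import numpy as np

    def cost(desired, actual):
        """C = sum_i (x_i - x_i')^2 / N over the N outputs."""
        desired, actual = np.asarray(desired), np.asarray(actual)
        return np.sum((desired - actual) ** 2) / desired.size

    print(cost([1.0, 0.0, 1.0], [0.9, 0.2, 0.7]))  # -> 0.0466...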

10 Learning Neural Network: Back Propagation Main idea: distribute the error function across the hidden layers, corresponding to their effect on the output. Works on feed-forward networks.

11 Back Propagation Repeat:
• Choose a training pair and copy it to the input layer
• Cycle that pattern through the net
• Calculate the error derivative between the output activation and the target output
• Back-propagate the summed product of the weights and errors in the output layer to calculate the error on the hidden units
• Update the weights according to the error on that unit
Until the error is low or the net settles (a sketch of this loop follows below)
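
A compact sketch of this loop for a one-hidden-layer network (Python/NumPy; the network sizes, data, and learning rate are hypothetical, and the training pairs are autoassociative, i.e. target = input, to match the compression setting used later):

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda h: 1.0 / (1.0 + np.exp(-h))      # sigmoid activation
    df = lambda h: f(h) * (1.0 - f(h))          # its derivative f'(h)

    X = rng.random((10, 4)); T = X              # training pairs (target = input)
    W1 = rng.normal(0, 0.1, (2, 4))             # input -> hidden weights
    W2 = rng.normal(0, 0.1, (4, 2))             # hidden -> output weights
    eta = 0.5                                   # learning rate

    for epoch in range(1000):                   # repeat until error is low
        for x, t in zip(X, T):
            h1 = W1 @ x; a1 = f(h1)             # cycle the pattern through the net
            h2 = W2 @ a1; y = f(h2)
            d2 = (t - y) * df(h2)               # error derivative at the output
            d1 = (W2.T @ d2) * df(h1)           # back-propagated hidden error
            W2 += eta * np.outer(d2, a1)        # update weights (delta rule)
            W1 += eta * np.outer(d1, x)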

12 Back Propagation: Sharing the Blame We update the weights of each connection in the neural network. This is done using the delta rule.

13 Delta Rule ΔW_ji = η·δ_j·x_i, with δ_j = (t_j − y_j)·f′(h_j), where η is the learning rate of the neural network, t_j and y_j are the target and actual outputs of the j-th neuron, h_j is the weighted sum of the neuron's inputs, and f′ is the derivative of the activation function f.
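
A minimal sketch of one delta-rule step (Python/NumPy; the arguments are assumed to be NumPy arrays for the quantities defined above, and the learning rate is hypothetical):

    import numpy as np

    def delta_rule_step(W, x, t, y, h, df, eta=0.1):
        """dW_ji = eta * delta_j * x_i, with delta_j = (t_j - y_j) * f'(h_j)."""
        delta = (t - y) * df(h)              # per-neuron error term delta_j
        return W + eta * np.outer(delta, x)  # apply the weight update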

14 Delta Rule for Multilayer Neural Networks The problem with a multilayer network is that we don't know the target output values for the hidden-layer neurons. This can be solved by a trick: δ_i = (Σ_k δ_k·W_ki)·f′(h_i). The parenthesized factor involving the sum over k is an approximation to (t_i − a_i) for the hidden layers, where t_i is unknown.
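
A minimal sketch of this trick (Python/NumPy; W_next is assumed to hold the weights W_ki from layer i to the next layer k):

    import numpy as np

    def hidden_delta(delta_next, W_next, h, df):
        """delta_i = (sum_k delta_k * W_ki) * f'(h_i): the back-propagated sum
        stands in for the unknown (t_i - a_i) of the output-layer rule."""
        return (W_next.T @ delta_next) * df(h)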

15 Image Compression using BP Neural Network Future of image coding (analogous to our visual system):
• Narrow channel (K-L, i.e. Karhunen-Loève, transform)
• Entropy coding of the state vector h_i at the hidden layer

16 Image Compression using BP Neural Network (continued) A set of image samples is used to train the network. This is equivalent to compressing the input into the narrow channel and then reconstructing the input from the hidden layer.

17 Image Compression using BP Neural Network (continued) Transform coding with a multilayer neural network: the image is subdivided into non-overlapping blocks of n × n pixels each. Each block represents an N-dimensional vector x, N = n × n, in N-dimensional space. The transformation process maps each vector into y = W·x; the output is reconstructed as W⁻¹·y.
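
A minimal sketch of this block-based transform coding (Python/NumPy; the image, block size n = 8, and channel width K = 16 are hypothetical, W would in practice be learned by the network rather than random, and since W is not square the pseudo-inverse stands in for the slide's W⁻¹):

    import numpy as np

    def image_to_blocks(img, n):
        """Subdivide an image into non-overlapping n x n blocks, each
        flattened to an N-dimensional vector, N = n * n."""
        H, Wd = img.shape
        return np.array([img[r:r+n, c:c+n].reshape(-1)
                         for r in range(0, H - H % n, n)
                         for c in range(0, Wd - Wd % n, n)])

    rng = np.random.default_rng(0)
    img = rng.random((64, 64))           # hypothetical grayscale image
    X = image_to_blocks(img, 8)          # m x N matrix of block vectors
    W = rng.normal(0, 0.1, (16, 64))     # K x N transform (would be learned)
    Y = X @ W.T                          # y = W x: the narrow channel
    X_hat = Y @ np.linalg.pinv(W).T      # reconstruction via the pseudo-inverse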

18 Image Compression (continued) The inverse transformation needs to reconstruct the original image with a minimum of distortion.

19 Analysis The bit rate can be defined as follows: (mKt + NKT) / mN bits/pixel, where input images are divided into m blocks of N pixels each, K is the number of hidden neurons, t stands for the number of bits used to encode each hidden neuron output, and T for each coupling weight from the hidden layer to the output layer. NKT is small and can be ignored.
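
A short worked example with hypothetical values (8 × 8 blocks, so N = 64; K = 16 hidden neurons; a 512 × 512 image, so m = 4096 blocks; t = T = 8 bits):

    m, N, K, t, T = 4096, 64, 16, 8, 8
    bit_rate = (m * K * t + N * K * T) / (m * N)   # bits/pixel
    print(bit_rate)   # 2.03125, close to K*t/N = 2.0 once NKT is ignored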

20 Output of this Compression Algorithm

21 Other Neural Network Techniques
• Hierarchical back-propagation neural network
• Predictive coding
• Depending on the weight-update function, Hebbian learning-based image compression: W_i(t+1) = (W_i(t) + α·h_i(t)·X(t)) / ||W_i(t) + α·h_i(t)·X(t)|| (sketch below)
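
A minimal sketch of this Hebbian update (Python/NumPy; h_i is the i-th neuron's output for input X, and the learning rate alpha is hypothetical):

    import numpy as np

    def hebbian_update(W_i, h_i, X, alpha=0.01):
        """W_i(t+1) = (W_i(t) + alpha*h_i(t)*X(t)) / ||W_i(t) + alpha*h_i(t)*X(t)||;
        the normalisation keeps the weight vector bounded."""
        W_new = W_i + alpha * h_i * X
        return W_new / np.linalg.norm(W_new)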

22 References
• Neural network, Wikipedia: http://en.wikipedia.org/wiki/Neural_network
• Ivan Vilović: An Experience in Image Compression Using Neural Networks
• Robert D. Dony, Simon Haykin: Neural Network Approaches to Image Compression
• Constantino Carlos Reyes-Aldasoro, Ana Laura Aldeco: Image Segmentation and Compression Using Neural Networks
• J. Jiang: Image Compression with Neural Networks - A Survey

23 Questions??

24 Thank You

