
The Relationship between Deep Learning and Brain Function




1 The Relationship between Deep Learning and Brain Function
Sukhjinder Nahal, Jamal Wilson, Abel Renteria, Nusseir Moath

2 What is Deep Learning? Deep learning is a branch of the broader field known as machine learning. It uses algorithms, structured as artificial neural networks, that mimic the structure and function of the human brain, e.g. CNNs (convolutional neural networks). Deep learning is a relatively new technology, emerging around 2010. Deep learning and machine learning are both subsets of the larger field of artificial intelligence.

3 Scalability of Deep Learning
Deep learning produces better results when given larger datasets as input; older learning algorithms do not scale as well. We now have access to large datasets thanks to the decreasing cost of storage, which is one of the reasons deep learning has become a feasible technology.

4 Supervised vs Unsupervised Learning
Deep learning can learn in two different ways: supervised or unsupervised. Supervised training requires a large amount of labelled data; unsupervised training does not require labels and can sort and classify the data it is given without human intervention. CNNs (convolutional neural networks) are trained using the supervised method and require a large labelled dataset to create feature maps. The brain's architecture of neurons is comparable to a convolutional neural network, but the two differ in the way they learn.
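The contrast above can be sketched with a toy 1-D example (the data, labels, and gap-splitting heuristic are all made up for illustration): the supervised learner sees labels, while the unsupervised learner must group the same points on its own.

```python
# Made-up 1-D data: supervised learning sees labels,
# unsupervised learning must group the same points on its own.
labeled = [(0.1, "small"), (0.2, "small"), (0.9, "large"), (1.0, "large")]
unlabeled = [x for x, _ in labeled]

# Supervised: average each label's examples, classify by nearest average.
groups = {}
for x, label in labeled:
    groups.setdefault(label, []).append(x)
means = {label: sum(xs) / len(xs) for label, xs in groups.items()}

def classify(x):
    return min(means, key=lambda label: abs(x - means[label]))

print(classify(0.15))  # "small"

# Unsupervised: split on the largest gap -- no labels, just structure.
pts = sorted(unlabeled)
gaps = [b - a for a, b in zip(pts, pts[1:])]
split = pts[gaps.index(max(gaps))]
clusters = ([x for x in pts if x <= split], [x for x in pts if x > split])
print(clusters)  # ([0.1, 0.2], [0.9, 1.0])
```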

5 CNN: Convolutional Layer
The convolutional layer performs most of the heavy computation. In this example, the CNN is trying to determine whether the image is an X. Because it does not know where a feature will match, the CNN evaluates every possible position of the feature across the image. A 1 indicates a strong match; -1 indicates no match. The math behind the process multiplies the image section being evaluated, element by element, by the values of a learned feature, then divides the sum by the number of pixels. The output array for this example is all ones, which indicates an exact match. In the figure on the right, values are computed for a diagonal-line feature: exact matches produce 1s, and near matches produce values close to 1, such as 0.77.
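The multiply-then-divide step described above can be sketched as follows (the 3x3 diagonal-line feature is a hypothetical example, not taken from the slides):

```python
# Minimal sketch of the feature-matching step: multiply a patch by a learned
# feature element-wise, then divide the sum by the number of pixels.
def match_score(patch, feature):
    rows, cols = len(feature), len(feature[0])
    total = sum(patch[i][j] * feature[i][j]
                for i in range(rows) for j in range(cols))
    return total / (rows * cols)

# A diagonal-line feature: 1 = part of the line, -1 = background.
feature = [[ 1, -1, -1],
           [-1,  1, -1],
           [-1, -1,  1]]

# Comparing the feature against itself gives an exact match of 1.0.
print(match_score(feature, feature))  # 1.0
```

A patch that differs in some pixels would score below 1, mirroring the 0.77 near-match value on the slide.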

6 CNN: Pooling Layer The pooling layer is responsible for downsampling the input image.
It is placed between convolutional layers. Decreasing the resolution of the input reduces the amount of computation: a 7-megapixel image, for example, produces 7 million data points (1 megapixel = 1 million pixels), which takes a lot of computation to process. Max pooling takes the maximum value from each section, in this case a 2x2 block of pixels; keeping the maximum of every 4 pixels reduces the image to a quarter of its size while retaining the important information.
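The 2x2 max-pooling step above can be sketched like this (the toy 4x4 array of activation values is illustrative):

```python
# 2x2 max pooling: keep the maximum of each non-overlapping 2x2 block,
# halving each dimension (a quarter of the original pixel count).
def max_pool_2x2(image):
    pooled = []
    for r in range(0, len(image), 2):
        row = []
        for c in range(0, len(image[0]), 2):
            row.append(max(image[r][c], image[r][c + 1],
                           image[r + 1][c], image[r + 1][c + 1]))
        pooled.append(row)
    return pooled

image = [[0.1, 0.9, 0.3, 0.2],
         [0.4, 0.2, 0.8, 0.1],
         [0.7, 0.1, 0.5, 0.6],
         [0.2, 0.3, 0.4, 0.9]]
print(max_pool_2x2(image))  # [[0.9, 0.8], [0.7, 0.9]]
```

The 4x4 input (16 values) shrinks to a 2x2 output (4 values), yet each strong activation survives.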

7 CNN: Rectified Linear Unit (ReLU)
The ReLU layer helps reduce computational cost when processing images. It turns all negative values into zero; the output of a ReLU layer is the same size as its input, but with all negative values removed.
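A ReLU layer can be sketched in a few lines (the input values are illustrative):

```python
# ReLU: replace every negative value with zero.
# The output has exactly the same shape as the input.
def relu(values):
    return [[max(0.0, v) for v in row] for row in values]

print(relu([[0.77, -0.11], [-0.33, 1.0]]))  # [[0.77, 0.0], [0.0, 1.0]]
```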

8 CNN: Fully Connected Layer
After a few repetitions of convolution and pooling, a fully connected layer connects every neuron in the current layer to every neuron in the previous layer. In this example, the input image is compared against an X and an O, and the final result is X with a high probability (0.92). The weights the artificial neurons assign to each feature are learned through a process called backpropagation.
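The voting described above can be sketched as a weighted sum (the flattened activations and weights are made-up values, not the slide's actual figures):

```python
# Fully connected layer: every input activation contributes to every class
# score through a learned weight; the highest score wins.
def fully_connected(inputs, weights):
    return [sum(x * w for x, w in zip(inputs, class_weights))
            for class_weights in weights]

inputs = [0.9, 0.65, 0.45, 0.87]        # flattened activations (illustrative)
weights = [[0.6, 0.2, 0.1, 0.3],        # weights voting for "X"
           [0.1, 0.5, 0.6, 0.1]]        # weights voting for "O"

scores = fully_connected(inputs, weights)
labels = ["X", "O"]
print(labels[scores.index(max(scores))])  # "X"
```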

9 Backpropagation Every image that is processed produces an output and an error. The error is calculated by subtracting the generated output from the right answer, which would be 1. Backpropagation adjusts the features and weights up and down to see how the error changes. The amount of adjustment depends on the size of the error: a large error requires a large adjustment, and a small error requires a small one. In the publication "Towards an integration of deep learning and neuroscience," the authors hypothesise that the brain might use genetically pre-specified circuits, local optimization, or a host of proposed circuit structures that would allow it to perform backpropagation.
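The error-scaled adjustment described above can be sketched for a single weight (the input, target, and learning rate are made-up values; a real network applies this rule to every weight via the chain rule):

```python
# One-weight sketch of the update rule: the adjustment is proportional to
# the error, so a large error causes a large change and a small error a
# small one, and the output is pushed toward the right answer (1).
def train(x, target, weight, rate=0.5, epochs=50):
    for _ in range(epochs):
        output = x * weight
        error = target - output        # right answer minus generated output
        weight += rate * error * x     # adjustment scales with the error
    return weight

w = train(x=0.5, target=1.0, weight=0.1)
print(0.5 * w)  # output is now very close to the target of 1.0
```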

10 Visual Cortex The visual cortex functions similarly to a CNN.
It is comprised of 6 layers, each of which passes information on to the next and extracts features, much like a convolutional layer receiving an input and creating feature maps. V1 processes the input; V2 relays the input to the other regions; V3 extracts the form of an object; V4 extracts the color and form of an object; V5 handles motion.

11 Differences Between the Brain and CNN
Section A demonstrates how supervised learning occurs using labeled data. Section B demonstrates how the brain uses both supervised and unsupervised learning: the human brain can make connections internally between objects and concepts in a way a computer cannot, while computers can only identify and make connections based on the data they have learned from. Section C shows how information enters the brain through the sensory inputs and the outputs that are generated as a result: the brain has different sections that specialize in a given function (hippocampus, PFC, thalamus, cerebellum, basal ganglia), similar to the way a specialized computer algorithm can be optimized for a specific function, increasing its efficiency.

12 Memory Memory, or storage, is a vital part of the learning process for both the human brain and deep learning. Three types of memory are needed for learning: content-addressable memory, working memory, and implicit memory. Content-addressable memory allows us to recognize a pattern we have seen before. Working memory is a short-term memory that can be overwritten quickly; it is vital for human-like intelligence such as reasoning. Implicit memory is a type of long-term memory that lets you use past experiences without consciously thinking about them; activities such as walking, riding a bike, and driving a car are things you do without thinking. Memory in deep learning networks must store input data, weight parameters, and other computed data; deep learning uses GPU DRAM and external DRAM, whereas the brain uses neurons and synapses throughout the brain as memory.

13 ImageNet ImageNet is a project that contains a large set of visual data that has been hand-classified. It is used to train and test deep learning systems. Images are organized according to the WordNet hierarchy and described using synsets (synonym sets). According to Tomaso Poggio, "Deep networks trained with ImageNet seem to mimic not only the recognition performance but also the tuning properties of neurons in cortical areas of the visual cortex of monkeys."

14 ILSVRC (ImageNet Large Scale Visual Recognition Challenge)
ILSVRC is an annually held contest in which participants train their algorithms on images from a dataset and then automatically label a set of test images. The challenge consists of three tasks: image classification, in which the algorithm identifies all the objects in an image; single-object localization, in which the algorithm identifies one instance of each object category and its location in the image; and object detection, in which the algorithm identifies multiple instances of an object and their locations.

15 GoogLeNet GoogLeNet is a deep learning network developed by Google.
In 2014 it was entered into ILSVRC, where it placed 1st in the image classification and object detection tasks. Image classification: error percentage 6.66%. Object detection: average precision %. Single-object localization: error percentage %.

16 GoogLeNet vs Human In an experiment between two human annotators and GoogLeNet, GoogLeNet was trained using 100,000 images.
Annotator 1 (A1) trained on 500 images and then annotated 1,500 images; Annotator 2 (A2) trained on 100 images and then annotated 258 images. The first annotator achieved a lower classification error than GoogLeNet despite training on a far smaller set of images, which shows that the brain can be trained on much less data and still outperform deep learning technology. The second annotator's much larger error percentage can be attributed to the smaller amount of training data.

17 Conclusion A better understanding of how the brain functions will provide valuable information to help further develop deep learning. Improvements in unsupervised learning will make deep learning more efficient by removing the need to input labelled data. Deep learning will have to evolve much as the brain has in order to become as efficient and effective.

