University Studies 15A: Consciousness I Neural Network Modeling (Round 2)
Let us begin again with the problem that neuroscientists confronted with the 100-step rule. That is, they knew that whatever the brain was doing, it was using massive parallelism to produce responses in no more than 100 sequential steps of neurons firing. So, we have a brain:
If we unfold and flatten the neocortex we get: two sheets of interconnected, six-layered neuronal assemblies, connected by the corpus callosum. The work of the brain is done by the neuronal clusters becoming activated and activating additional neuronal clusters in turn.
Experiential Level: Seeing, hearing, remembering, deciding, acting Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. For example:
Information about light comes in from the retina to the primary visual cortex:
The primary visual cortex passes the activation on to higher processing layers:
Each layer processes the activation by aggregating a large number of simple patterns into a smaller number of more complex patterns. Remember that we started with just angled line segments in the “simple cells” of V1. And keep those feedback connections in mind. What we “see” is a complex reconstruction built from many layers of input that had been divided into separate streams and reassembled.
To construct what we “see,” activation passes from the visual cortex to layers of neurons that synthesize the input from different sensory sources:
The neural clusters embodying the Visual System then continue further and connect to those that embody the Semantic Systems and visual object recognition: Despite the many layers, there are fewer than one hundred layers. Because the brain is processing all the activation information in parallel, the activation passes quickly from layer to layer.
So, when you see this picture, your visual system very quickly uses the feedback connections from higher memory of objects and draws on your knowledge of dalmatians to fill in the missing information.
Experiential Level: Seeing, hearing, remembering, deciding, acting Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. How the Trick is Done: Neural Networks
Each level of processing in the brain is a “cell assembly,” that is, a layer of neurons.
This is a cell assembly of neurons in a fly’s retina.
This picture shows the connections from the neurons of one layer to those of the next.
The connection of neurons of one layer to those of the next is through the synaptic junction. Various factors control the strength of the connection between neurons at the synaptic junction: the number of vesicles of neurotransmitter in the “sending” neuron, the number of receptors on the “receiving” neuron, and structural changes in the junction gap. All of these can change.
Back to the Neuron and our Schematic Model of one
Our Schematic Representation of Layers of Cell Assemblies: [Diagram: units a(1), a(2), a(3), … a(i), … a(n) in Layer A connect to units b(1), … b(j), … b(m) in Layer B, which produce outputs O(1), … O(j), … O(m).] The strength of the synaptic connection between a neuron a(i) in Layer A and a neuron b(j) in Layer B is represented by w(i,j).
This set of weights defines a weighting matrix W of dimension (m, n), with one row for each Layer B neuron and one column for each Layer A neuron.
Experiential Level: Seeing, hearing, remembering, deciding, acting Brain Level: One set of neurons becomes activated, activating another set, which in turn activates yet another set. This continues through 100 steps of transmitted activation. How the Trick is Done: Neural Networks At each step, the activation of a Layer B derives from the sum of the input synaptic activations from Layer A, each multiplied by the strength of its synaptic junction. Or: b = W ∙ a, where a is the vector of Layer A activations and b the vector of Layer B activations. Since all the neurons in Layer B are receiving input at the same time, calculating the activation of the entire layer occurs simultaneously.
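The layer-to-layer activation step can be sketched in a few lines of Python. The weights and activations below are made-up illustrative numbers, not values from any real network:

```python
# Activation of Layer B as a matrix-vector product: b = W . a
# A hypothetical 3-neuron Layer A feeding a 2-neuron Layer B.

def propagate(W, a):
    """Each Layer B neuron sums its weighted inputs from Layer A."""
    return [sum(w_ij * a_i for w_ij, a_i in zip(row, a)) for row in W]

# W has one row per Layer B neuron, one column per Layer A neuron.
W = [[0.5, -0.5, 0.25],   # weights into b(1)
     [0.25, 1.0, -0.5]]   # weights into b(2)

a = [1.0, 0.5, 2.0]       # activations of Layer A

b = propagate(W, a)
print(b)  # [0.75, -0.25]
```

In the brain all of Layer B computes at once; the loop here just simulates that parallel step serially.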
Experiential Level: Learning and memory Brain Level: Neurons in one cell assembly change the strength of their connections to neurons in the next cell assembly by changing the structure of the synaptic connection. How we represent memory and learning in Neural Network models Memory: all memory resides in the Weighting Matrices that represent the structure of synaptic connections in the system. Activation-based Learning: changing weights in the Weighting Matrices, using Hebb’s Rule: ∆w(i,j) = a(i) b(j)
At every level, as the brain passes activation from layer to layer, the layers adjust their patterns of synaptic strength.
At every level, the boxes representing functional units in the brain actually have their own internal structures of cell assemblies, and these also have their own changing patterns of synaptic connections, their own W.
Experiential Level: “Things” in memory: apples, houses, words, ideas Brain Level: They are all patterns of synaptic connections. Modeling: Learned abilities, like riding a bicycle: it is all in the Ws. This has implications, because the layers in a network operate as a system rather than as independent neurons.
Remember our simple set of artificial neurons:
1. Sixteen input units are connected to two output units.
2. Only two input units are active at a time.
3. They must be horizontal or vertical neighbors.
4. Only one output unit can be active at a time (inhibition is marked by the black dots).
[Figure: Trial 1, Trial 2, Trial 3]
If one used this as a “perception” unit that passed its internal state on to other layers, those other layers would only know of two “objects” activated by the input layer. How it would “see” the 16 input units would vary from Trial 1 to Trial 3, but it divides the input space into just two “things” as patterns of connection. We have trained this network in a simulation.
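The mutual inhibition between the two output units can be modeled as winner-take-all: both outputs sum their weighted inputs, and the stronger one silences the other. This is a minimal sketch with hypothetical weights, not the values the trained simulation actually learned:

```python
# A minimal winner-take-all sketch of the 16-input / 2-output unit.
# The weights are hypothetical: output 0 is tuned to the left half of
# the 4x4 grid, output 1 to the right half.

def winner_take_all(W, inputs):
    """Compute both output activations, then let mutual inhibition
    silence everything but the most active unit."""
    acts = [sum(w * x for w, x in zip(row, inputs)) for row in W]
    winner = max(range(len(acts)), key=acts.__getitem__)
    return [1 if j == winner else 0 for j in range(len(acts))]

# 4x4 grid flattened row by row; columns 0-1 feed output 0, columns 2-3 feed output 1.
W = [[1 if (i % 4) < 2 else 0 for i in range(16)],
     [0 if (i % 4) < 2 else 1 for i in range(16)]]

# Two horizontally neighboring inputs in the left half are active:
pattern = [0] * 16
pattern[0] = pattern[1] = 1

print(winner_take_all(W, pattern))  # [1, 0]
```

However the input pair moves around the grid, the downstream layers only ever see one of two states: the 16-dimensional input space has been partitioned into two “things.”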
ALL “objects,” from cars and people to concepts like cuteness or justice, are mutually defined partitions in very, very high-dimensional input spaces.
Neural networks extract patterns and divide an input space. This can lead to odd results with implications for biological neural networks. James McClelland tested the ability of a neural network to build a classification tree based on closeness of attributes. He built a network that could handle simple property statements like: Robin can grow, move, fly. Oak can grow. Salmon has scales, gills, skin. Robin has wings, feathers, skin. Oak has bark, branches, leaves, roots.
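To see how property statements become network training data, here is a minimal sketch. This is not McClelland’s actual multilayer architecture (his network had hidden layers and a relation input); it is a one-layer Hebbian associator over invented one-hot encodings, just to show facts turning into item-to-attribute weights:

```python
# A minimal sketch (not McClelland's actual architecture): each fact
# becomes a one-hot item vector associated with a multi-hot attribute
# vector via a Hebbian weight matrix.

items = ["robin", "oak", "salmon"]
attributes = ["grow", "move", "fly", "scales", "gills", "skin",
              "wings", "feathers", "bark", "branches", "leaves", "roots"]

facts = {
    "robin":  ["grow", "move", "fly", "wings", "feathers", "skin"],
    "oak":    ["grow", "bark", "branches", "leaves", "roots"],
    "salmon": ["grow", "move", "scales", "gills", "skin"],
}

def one_hot(name, vocab):
    return [1 if v == name else 0 for v in vocab]

def multi_hot(names, vocab):
    return [1 if v in names else 0 for v in vocab]

# W[attribute][item] accumulates co-activations (Hebb's Rule with eta = 1).
W = [[0] * len(items) for _ in attributes]
for item, props in facts.items():
    a = one_hot(item, items)
    b = multi_hot(props, attributes)
    for j in range(len(attributes)):
        for i in range(len(items)):
            W[j][i] += a[i] * b[j]

def recall(item):
    """Activate one item and read off which attributes light up."""
    a = one_hot(item, items)
    out = [sum(w * x for w, x in zip(row, a)) for row in W]
    return [attr for attr, o in zip(attributes, out) if o > 0]

print(recall("robin"))  # robin's learned attributes
```

In McClelland’s network the shared attributes (everything can grow; robins and salmon can move) are what pull the items into a similarity-based classification tree.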
Baars and Gage discuss this and give the design:
Neural Network software turns this sort of design into a computer program to simulate the network:
What Baars and Gage do not discuss is the next step. McClelland fed the system facts about penguins: Penguin can swim, move, grow. Penguin has wings, feathers, skin. When one runs the simulation, the result is a tree that incorporates the penguin well:
The results were profoundly different depending on whether the penguin facts were interleaved with facts about the other objects or the training was all penguins all the time: massed training on penguins alone disrupted the structure the network had already learned. We’ll come back to this result when we discuss memory and sleep.
An important aspect of neural networks in the brain that people explored through artificial networks is the brain’s use of recurrency, when nodes in networks loop back on themselves. Simulated models show that one absolutely crucial feature of recurrent networks is the ability to complete partial patterns. The image of the Dalmatian is very incomplete, but the brain feeds back knowledge of Dalmatians to the visual system, which then produces a yet more complete view and cycles in loops until perception settles into “Dalmatian.” Artificial neural networks like the “penguin learner” allow researchers to model the behavior of neural systems.
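Pattern completion in a recurrent network can be sketched with a tiny Hopfield-style model. The stored pattern here is an arbitrary stand-in for the “Dalmatian” memory, and the synchronous update rule is a simplification of how such networks are usually run:

```python
# A minimal Hopfield-style sketch of pattern completion: store one
# pattern with Hebbian weights, then recover it from a partial cue by
# feeding activation back through the recurrent loop.

def train(patterns, n):
    """Hebbian outer-product weights; no self-connections."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def settle(W, state, steps=5):
    """Cycle activation through the loop until perception settles."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in W]
    return state

stored = [1, -1, 1, 1, -1, -1, 1, -1]   # the complete "Dalmatian" memory
cue    = [1, -1, 1, 0, 0, 0, 0, 0]      # the incomplete view (0 = unknown)

W = train([stored], len(stored))
print(settle(W, cue))  # the loop fills in the missing five units
```

The cue fixes only three of the eight units, yet the feedback loop settles into the full stored pattern, which is the essence of the Dalmatian effect.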
These sorts of pattern-completing, self-modifying networks appear throughout the brain. Baars and Gage stress that 90% of the connections between the thalamus and V1 go from V1 to the thalamus as re-entrant connections rather than feed-forward input. Many neural net modelers have developed systems based on re-entrant brain connectivity:
To sum up:
Experiential Level | Biological Level | Modeling
Seeing, Deciding, Acting | Layers of cell assemblies transmitting activation | b = W ∙ a
Learning | Adjustment of synaptic strength in connections between neurons | ∆w(i,j) = a(i) b(j)
Memory | The strength of synaptic connections maintained by the system of neuron assemblies | W
“Things”: all forms of internal representation | Mutually differentiated patterns of activation within an over-all system | Attractor basins (You really don’t want to know the details.)
It’s all done with neural networks.