1
Neuro-RAM Unit in Spiking Neural Networks with Applications
Nancy Lynch, Cameron Musco and Merav Parter
2
High Level Goal Study biological neural networks from a distributed, algorithmic perspective.
3
Neural Networks Nodes (neurons), edges (synapses).
Directed weighted graph; the weight indicates synaptic strength. Two types of neurons: excitatory (all outgoing weights positive) and inhibitory (all outgoing weights negative).
4
Modeling Spiking Neurons
A node (neuron) u is a probabilistic threshold gate with threshold (bias) b(u). Let w be the weighted sum of u's firing neighbors. A deterministic neuron fires iff w ≥ b(u); a spiking neuron fires with probability 1/(1 + e^{-(w - b(u))}).
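The two neuron types above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, not the paper's):

```python
import math, random

def fire_prob(w, b):
    """Firing probability of a spiking neuron: logistic in (w - b),
    where w is the weighted sum of firing neighbors and b is the bias."""
    return 1.0 / (1.0 + math.exp(-(w - b)))

def spiking_neuron(w, b, rng=random):
    """One stochastic firing decision of a spiking neuron."""
    return 1 if rng.random() < fire_prob(w, b) else 0

def deterministic_neuron(w, b):
    """Deterministic threshold gate: fires iff w >= b."""
    return 1 if w >= b else 0
```

Note that at w = b the spiking neuron fires with probability exactly 1/2, and the firing probability approaches 0 or 1 as w moves far below or above the bias.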
5
Modeling Spiking Neural Networks
Given: a weighted directed graph, a threshold for each neuron, and initial states (0/1) for all nodes. Dynamics: synchronous discrete rounds; firing in round t depends on firing in round t-1.
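The synchronous dynamics can be sketched as follows, assuming the logistic firing rule from the previous slide (illustrative code, not the paper's implementation):

```python
import math, random

def step(weights, bias, state, rng):
    """One synchronous round: each neuron's new state depends only on
    the firing pattern of the previous round.
    weights[u][v] is the weight of edge u -> v (0 if absent)."""
    n = len(state)
    new_state = []
    for v in range(n):
        w = sum(weights[u][v] * state[u] for u in range(n))
        p = 1.0 / (1.0 + math.exp(-(w - bias[v])))  # logistic firing rule
        new_state.append(1 if rng.random() < p else 0)
    return new_state

def run(weights, bias, state, rounds, seed=0):
    """Run the network for the given number of synchronous rounds."""
    rng = random.Random(seed)
    for _ in range(rounds):
        state = step(weights, bias, state, rng)
    return state
```

With very large positive (or negative) biases the stochastic neurons behave essentially deterministically, which is convenient for sanity checks.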
6
Computational Problems in SNN
Target function f: {0,1}^n → {0,1}^m, computed from input neurons X to output neurons via auxiliary neurons. Complexity measures: size (number of auxiliary neurons) and time (number of rounds to convergence).
7
The Main Theme
Computational tradeoffs and the role of randomness. Previous work [Lynch, Musco, Parter '17]: neural leader election, where randomness was crucial to break symmetry.
8
Randomness Helps for Breaking Symmetry
"Neural Leader Election": given a subset of firing input neurons, design a circuit that converges fast to exactly one firing neuron. Theorem [Lynch-Musco-Parter '17]: a circuit with Θ(log log n / log t) auxiliary neurons converges in t rounds.
9
This Work: Similarity Testing
Given inputs X1, X2 ∈ {0,1}^n, the network (with auxiliary neurons) should output 1 if X1 = X2 and 0 if X1 ≠ X2.
10
Similarity Testing - Equality
Exact solution with two auxiliary threshold gates: one gate checks X1 ≥ X2 (weight 2^i on bit i of X1 and -2^i on bit i of X2), the other checks X2 ≥ X1, and the output is the AND of the two. Drawbacks: the exponentially large weights are not biologically plausible, and it is not clear how to obtain sublinear size even with spiking neurons.
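The two-gate construction can be written down directly; note the exponentially growing weights 2^i, which are exactly what makes it biologically implausible (sketch with our own naming):

```python
def ge_gate(x1, x2):
    """Threshold gate testing val(x1) >= val(x2): weight 2^i on bit i
    of x1 and -2^i on bit i of x2; fires iff the weighted sum is >= 0."""
    w = sum((x1[i] - x2[i]) * (1 << i) for i in range(len(x1)))
    return 1 if w >= 0 else 0

def equality(x1, x2):
    """X1 == X2  iff  (X1 >= X2) AND (X2 >= X1): the two-gate solution."""
    return ge_gate(x1, x2) & ge_gate(x2, x1)
```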
11
Approximate Similarity Testing
Distinguish between: X1 = X2 vs. Ham(X1, X2) ≥ εn, using auxiliary neurons. Goal: sublinear size.
12
Approximate Equality
Distinguish between: X1 = X2 vs. Ham(X1, X2) ≥ εn. Idea: (I) select O(log n / ε) indices at random; (II) check whether X1 and X2 match on these indices. Implementation needs: encoding a random index with log n spiking neurons, and random-access reads of the indexed bits (a random access memory).
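A non-neural sketch of the sampling idea (the constant in the sample count k is an arbitrary choice for this sketch):

```python
import math, random

def approx_equal(x1, x2, eps, rng=random.Random(1)):
    """Sample O(log n / eps) random indices and compare X1, X2 there.
    If X1 == X2 this always returns 1; if Ham(X1, X2) >= eps * n it
    returns 0 with high probability, since each sample hits a differing
    index with probability >= eps."""
    n = len(x1)
    k = max(1, math.ceil(2 * math.log(n) / eps))
    for _ in range(k):
        i = rng.randrange(n)
        if x1[i] != x2[i]:
            return 0
    return 1
```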
13
Key Building Block: Neuro-RAM
INDEX: {0,1}^{n + log n} → {0,1}. Input: an n-bit vector X and a (log n)-bit index vector Y; output: X_Y, computed via auxiliary neurons. The Neuro-RAM construction itself is deterministic, but the module is used with a random index Y.
14
Solving Approximate Testing with Neuro-RAM
log n random spiking neurons encode each random index. Pairs of Neuro-RAM modules (NR_{1,1}, NR_{2,1}), ..., (NR_{1,k}, NR_{2,k}) read the corresponding bits of X1 and X2 at the sampled indices; each pair's outputs are compared, and the comparisons are combined to give the approximate-equality answer.
15
Main Results (Neuro-RAM)
Theorem 1 (Upper Bound): A deterministic circuit with O(n/t) auxiliary neurons implements Neuro-RAM in O(t) rounds (for t ≤ √n). Theorem 2 (Lower Bound): Every t-round randomized circuit requires Ω(n / (t log² n)) auxiliary neurons.
16
Main Results (Applications)
Theorem 3 (Approximate Similarity Testing): There is an SNN with O(√n · log n / ε) auxiliary neurons that solves ε-approximate equality in O(√n) rounds (if X1 = X2 it fires w.h.p.; if Ham(X1, X2) ≥ εn it does not fire w.h.p.). Theorem 4 (Compression): A Johnson-Lindenstrauss random projection from dimension n down to d < n can be implemented with O(n/d) Neuro-RAM modules, each with O(√n) neurons, beating the naïve solution when d > √n.
17
High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Divide the n input neurons X into √n buckets X_1, ..., X_√n of √n bits each. Divide the log n index neurons Y into two halves. Step 1: select bucket X_i using the first half of Y. (Running example with n = 16: buckets X_3 = [10 01], X_2 = [11 10], X_1 = [11 10], X_0 = [10 10]; index bits y_3 y_2 y_1 y_0.)
18
High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Step 1 (cont.): the first half of Y selects X_i with i = val(y_3 y_2); the non-selected buckets X_j are suppressed and will not fire.
19
High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Next: use √n decoding neurons to decode, within the selected bucket, the bit indexed by the second half of Y.
20
High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Successive decoding: initially, decoding neuron d_3 fires only if the first bit of X_3 fires; in step j, d_3 fires only if the j-th bit of X_3 fires. This continues until a stopping signal arrives from the position selected by the second half of Y.
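Putting these slides together, the data flow of the √n-decomposition can be sketched non-neurally: the first half of the index picks the bucket, the second half picks the position inside it, which is the role played by successive decoding in the circuit (illustrative Python, not the neural implementation):

```python
import math

def neuro_ram_sketch(x, y):
    """INDEX via the sqrt-decomposition: x is an n-bit list (n a perfect
    square), y is a log(n)-bit index list, most significant bit first.
    The first half of y selects one of sqrt(n) buckets; the second half
    selects the bit inside that bucket."""
    n = len(x)
    s = math.isqrt(n)
    assert s * s == n, "n must be a perfect square for this sketch"
    half = len(y) // 2
    bucket = int("".join(map(str, y[:half])), 2)   # bucket selection step
    offset = int("".join(map(str, y[half:])), 2)   # successive-decoding step
    return x[bucket * s + offset]
```

In the circuit, each of the two halves is processed with about √n neurons in about √n rounds, which is where the O(√n) size and time bounds come from.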
21
Lower Bound for Neuro-RAM with Spiking Neurons
Theorem 2 (Lower Bound): Every randomized SNN that solves the INDEX function in t rounds requires Ω(n / (t log² n)) auxiliary neurons. Roadmap: Step 1: reduction from SNNs to deterministic circuits. Step 2: lower bound for deterministic circuits via VC dimension.
22
Step 0: Reduction from SNN to Feed-Forward SNN
A t-round SNN with k auxiliary neurons solving INDEX with high probability is unrolled, round by round, into a feed-forward SNN with O(kt) neurons.
23
Step 1: Reduction to Distribution of Deterministic Circuits
Probabilistic circuit C_R → distribution D over deterministic circuits C, with output neuron z. Goal: find D such that for every input X, Pr[C_R outputs z = 1 on X] = Pr_{C ~ D}[C outputs z = 1 on X].
24
Step 1: Reduction to Distribution of Deterministic Circuits
Probabilistic circuit C_R → distribution D over deterministic circuits with k·t auxiliary neurons. A spiking neuron u with threshold b(u), which fires with probability 1/(1 + e^{-(w - b(u))}), is replaced by a deterministic threshold neuron whose threshold is sampled from a logistic distribution with mean b(u).
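The replacement works because a threshold T drawn from a logistic distribution with mean b satisfies Pr[w ≥ T] = 1/(1 + e^{-(w - b)}), exactly the spiking neuron's firing probability. A quick numerical check of this fact (illustrative code, our naming):

```python
import math, random

def sigmoid(x):
    """Logistic function: the spiking neuron's firing probability at x = w - b."""
    return 1.0 / (1.0 + math.exp(-x))

def logistic_sample(mean, rng):
    """Inverse-CDF sampling from Logistic(mean, scale=1)."""
    u = rng.random()
    return mean + math.log(u / (1.0 - u))

def empirical_fire_prob(w, b, trials=200_000, seed=0):
    """Fraction of sampled thresholds T with w >= T; should match sigmoid(w - b)."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials) if w >= logistic_sample(b, rng))
    return hits / trials
```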
25
Step 2: Reducing to Deterministic Circuits for Subset of Inputs
There is a deterministic circuit C_D that solves INDEX for most X values: at least 2^{n-1} of the 2^n possible X values. Such a circuit computes 2^{n-1} distinct functions f_X: {0,1}^{log n} → {0,1} with f_X(Y) = X_Y, so VC(C_D) ≥ Ω(n / log n). But a feed-forward circuit with g gates has VC dimension at most O(g log g), and here g = O(k·t); if k·t = o(n / log² n) the circuit cannot be correct on that many X values — a contradiction.
26
Summing Up
SNN for INDEX → feed-forward SNN for INDEX → distribution over feed-forward deterministic circuits for INDEX → there exists a deterministic circuit C_D with O(k·t) gates that solves INDEX for half of the X-space. C_D has large VC dimension since it solves many functions, hence it must have many gates.
27
Main Results (Neuro-RAM)
Theorem 1 (Upper Bound): A deterministic circuit with O(n/t) auxiliary neurons implements Neuro-RAM in O(t) rounds (for t ≤ √n). Theorem 2 (Lower Bound): Every t-round randomized circuit requires Ω(n / (t log² n)) auxiliary neurons.
28
Take Home Message
A biologically inspired model for SNNs. Main questions: computational tradeoffs and the role of randomness. Randomness is not needed for Neuro-RAM, but it does help for similarity testing and compression. Remaining problems: exact equality? simpler circuits? Thank You! (Toda Raba!)