Neuro-RAM Unit in Spiking Neural Networks with Applications


1 Neuro-RAM Unit in Spiking Neural Networks with Applications
Nancy Lynch, Cameron Musco and Merav Parter

2 High Level Goal Study biological neural networks from a distributed, algorithmic perspective.

3 Neural Networks Nodes (neurons), edges (synapses).
Directed weighted graph; the weight of an edge indicates synaptic strength. Two types of neurons: excitatory (all outgoing weights positive) and inhibitory (all outgoing weights negative). [Figure: a synapse from neuron u to neuron v]

4 Modeling Spiking Neurons
A node (neuron) is a probabilistic threshold gate. The neuron's threshold (bias) is b, and W denotes the weighted sum of its firing neighbors. A deterministic neuron fires iff W − b ≥ 0. A spiking neuron fires with probability 1/(1 + e^−(W − b)); at W = b the firing probability is exactly 1/2. [Figure: a neuron with example incoming weights]
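
To make the model concrete, here is a minimal Python sketch of both gate types (my own illustration, not code from the talk; the function names and the unit logistic scale are assumptions):

import math
import random

def deterministic_fire(weights, inputs, b):
    # Deterministic threshold gate: fires iff the weighted sum W of the
    # firing neighbors reaches the bias, i.e. W - b >= 0.
    W = sum(w * x for w, x in zip(weights, inputs))
    return 1 if W - b >= 0 else 0

def spiking_fire(weights, inputs, b):
    # Spiking neuron: fires with probability 1 / (1 + e^-(W - b)).
    # At W = b this probability is exactly 1/2, matching the slide.
    W = sum(w * x for w, x in zip(weights, inputs))
    return 1 if random.random() < 1.0 / (1.0 + math.exp(-(W - b))) else 0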

5 Modeling Spiking Neural Networks
Given: a weighted directed graph, a threshold for each neuron, and initial states (0/1 for all nodes). Dynamics: synchronous discrete rounds; firing in round t depends on firing in round t − 1. [Figure: an example network with signed edge weights]
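
As an illustration only (the function and the graph encoding are mine, not the talk's), one synchronous round of these dynamics can be sketched as:

import math
import random

def round_step(in_edges, bias, state):
    # in_edges[v]: list of (u, weight) incoming edges; bias[v]: threshold
    # of v; state: dict mapping each neuron to its 0/1 firing in round t-1.
    new_state = {}
    for v, b in bias.items():
        W = sum(w * state[u] for u, w in in_edges.get(v, []))
        p = 1.0 / (1.0 + math.exp(-(W - b)))  # spiking-neuron firing law
        new_state[v] = 1 if random.random() < p else 0
    return new_state  # firing in round t depends only on round t-1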

6 Computational Problems in SNN
Target function f : {0,1}^n → {0,1}^m. Complexity measures: size (number of auxiliary neurons) and time (number of rounds to convergence). [Figure: input layer X, auxiliary neurons, output layer Y]

7 The Main Theme
Main questions: computational tradeoffs and the role of randomness. Previous work [Lynch, Musco, Parter '17]: neural leader election, where randomness was crucial to break symmetry.

8 Randomness Helps for Breaking Symmetry
"Neural Leader Election": given a subset of firing input neurons, design a circuit that converges quickly to exactly one firing neuron. Theorem [Lynch-Musco-Parter '17]: there is a circuit with Θ(log log n / log t) auxiliary neurons converging in t rounds. [Figure: input layer, auxiliary neurons, output layer]

9 This Work: Similarity Testing
Given two n-bit inputs X_1 and X_2, the output should be 1 if X_1 = X_2 and 0 if X_1 ≠ X_2. [Figure: inputs X_1, X_2 feeding auxiliary neurons and one output]

10 Similarity Testing - Equality
Exact equality has a solution with two auxiliary threshold gates: feed gate 1 the bits of X_1 with weights 2^i and the bits of X_2 with weights −2^i, so it fires iff X_1 ≥ X_2 (as integers); gate 2 symmetrically checks X_2 ≥ X_1; an AND of the two fires iff X_1 = X_2. But this is not biologically plausible (exponentially large weights), and it is not clear how to obtain sublinear size even with spiking neurons.
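
For concreteness, a simulation of this two-gate construction (a sketch; the LSB-first encoding is my assumption):

def exact_equality(X1, X2):
    n = len(X1)
    # Gate 1 receives X1[i] with weight 2^i and X2[i] with weight -2^i,
    # so it fires iff int(X1) >= int(X2); gate 2 is symmetric.
    ge_12 = 1 if sum(2**i * (X1[i] - X2[i]) for i in range(n)) >= 0 else 0
    ge_21 = 1 if sum(2**i * (X2[i] - X1[i]) for i in range(n)) >= 0 else 0
    return ge_12 & ge_21  # AND gate: 1 iff X1 == X2 (note the 2^n weights)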

11 Approximate Similarity Testing
Distinguish between X_1 = X_2 and Ham(X_1, X_2) ≥ εn. Goal: sublinear size. [Figure: inputs X_1, X_2 feeding auxiliary neurons]

12 Approximate Equality
Distinguish between X_1 = X_2 and Ham(X_1, X_2) ≥ εn. Idea: (I) select Θ(log n / ε) indices at random; (II) check whether X_1 and X_2 match at these indices, as in the sketch below. Implementation needs: encoding a random index (log n spiking neurons) and random-access memory to read the indexed bits.
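
A functional sketch of the sampling idea (the constant in the sample count is my assumption; any Θ(log n / ε) choice gives a 1/poly(n) failure probability):

import math
import random

def approx_equal(X1, X2, eps):
    n = len(X1)
    k = math.ceil(2 * math.log(n) / eps)  # Theta(log n / eps) samples
    # If Ham(X1, X2) >= eps*n, each sample hits a mismatch w.p. >= eps, so
    # all k samples miss w.p. <= (1 - eps)^k <= e^(-2 log n) = 1/n^2.
    return all(X1[i] == X2[i] for i in (random.randrange(n) for _ in range(k)))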

13 Key Building Block: Neuro-RAM
𝑋 π‘Œ 𝐼𝑁𝐷𝐸𝑋: 0,1 𝑛+ log 𝑛 β†’ 0,1 1 1 1 Input: 𝑛 bit vector 𝑿, index vector 𝒀 Output: 𝑿 𝒀 Auxiliary Neurons 𝑍 The Neuro-RAM construction is deterministic But the module is used when Y is random 1

14 Solving Approximate Testing with Neuro-RAM
k = Θ(log n / ε) index vectors Y_1, …, Y_k are generated by log n random spiking neurons each; the pair of modules NR_{1,j} and NR_{2,j} reads X_1[Y_j] and X_2[Y_j], and a final AND (⋀) fires iff every sampled pair of bits agrees. [Figure: the wiring of the k Neuro-RAM pairs]
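
Functionally, the wiring computes the following predicate (a sketch; the module internals are abstracted into the INDEX reference defined above):

def approx_testing(X1, X2, Ys):
    # Ys: k random index vectors, each produced by log n random neurons.
    # Pair j of Neuro-RAM modules outputs X1[Y_j] and X2[Y_j]; the final
    # AND fires iff all sampled bit pairs agree.
    return all(INDEX(X1, Y) == INDEX(X2, Y) for Y in Ys)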

15 Main Results (Neuro-RAM)
Theorem 1 (Upper Bound): There is a deterministic circuit with O(n/t) auxiliary neurons that implements Neuro-RAM in O(t) rounds (for t ≤ √n). Theorem 2 (Lower Bound): Every t-round randomized circuit requires Ω(n / (t log² n)) auxiliary neurons.

16 Main Results (Applications)
Theorem 3 (Approximate Similarity Testing): There is an SNN with O(√n · log n / ε) auxiliary neurons that solves ε-approximate equality in O(√n) rounds (if X_1 = X_2 it fires w.h.p.; if Ham(X_1, X_2) ≥ εn it does not fire w.h.p.). Theorem 4 (Compression): A Johnson–Lindenstrauss random projection from dimension D to d < D can be implemented with O(D/d) Neuro-RAM modules, each with O(√D) neurons, beating the naïve solution when d > √D.

17 High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Divide the n input neurons X into √n buckets. Divide the log n index neurons Y into two halves. Step 1: select bucket X_i using the first half of Y. [Figure: buckets X_0, …, X_3 with example contents, index halves Y_1, Y_2, and selection neurons e_0, …, e_3 feeding E]

18 High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. The selected neuron e_i, with i = dec(Y_1), fires; the non-selected e_j are depressed (inhibited) and will not fire. [Figure: the same buckets with e_i selected]

19 High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Next: use O(√n) decoding neurons to decode the value of the bit indexed by dec(Y_2) within the selected bucket, using weights of the form 2^{√n − j}, …, 2^{√n}. [Figure: the buckets with the decoding weights marked]

20 High Level Idea of Neuro-RAM Module
Here: O(√n) auxiliary neurons implementing Neuro-RAM in O(√n) rounds. Successive decoding: at the start, e_3 fires only if the first bit of X_3 fires; in step j, e_3 fires only if the j-th bit of X_3 fires. This continues until a stopping signal arrives from the selected decoder d_j. A functional summary of the whole module follows. [Figure: bucket X_3, neurons E and D, decoders d_0, …, d_3]
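
Putting slides 17–20 together, the module computes, in effect (a functional sketch only; it assumes n is a power of four so each half of Y has log √n bits, and it ignores the round-by-round timing of the successive decoding):

import math

def neuro_ram_functional(X, Y):
    n = len(X)
    s = math.isqrt(n)                # sqrt(n) buckets of sqrt(n) bits each
    half = len(Y) // 2
    i = int("".join(map(str, Y[:half])), 2)  # first half of Y: bucket X_i
    j = int("".join(map(str, Y[half:])), 2)  # second half: bit inside X_i
    return X[i * s + j]              # = X_Y, read with O(sqrt n) "hardware"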

21 Lower Bound for Neuro-RAM with Spiking Neurons
Theorem 2 (Lower Bound): Every randomized SNN that solves the INDEX function in t rounds requires Ω(n / (t log² n)) auxiliary neurons. Roadmap: Step 1: reduce an SNN to a deterministic circuit. Step 2: prove a lower bound for the deterministic circuit via VC dimension.

22 Step 0: Reduction from SNN to Feed-Forward SNN
A t-round SNN with a auxiliary neurons that solves INDEX with high probability is converted into a feed-forward SNN with O(a · t) neurons, one layer per round (sketched below). [Figure: the network unrolled into layers 1, 2, …, t − 1]
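
The unrolling itself is the standard one; schematically (a sketch, reusing the round_step function from the dynamics sketch above):

def unroll(in_edges, bias, state0, t):
    # Layer j of the feed-forward network is a fresh copy of the a auxiliary
    # neurons holding the round-j state, so t rounds cost O(a*t) neurons.
    layers = [state0]
    for _ in range(t):
        layers.append(round_step(in_edges, bias, layers[-1]))
    return layers[-1]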

23 Step 1: Reduction to Distribution of Deterministic Circuits
Probabilistic Circuit π‘ͺ 𝑹 Distribution β„‹ over deterministic circuits 𝑋 … … 𝑧 Goal: find β„‹, s.t. for every input 𝑋 π‘ƒπ‘Ÿπ‘œ 𝑏 𝐢 𝑅 𝑧=1 𝑋 =π‘ƒπ‘Ÿπ‘œ 𝑏 β„‹ 𝑧=1 | 𝑋

24 Step 1: Reduction to Distribution of Deterministic Circuits
Replace each spiking neuron u with threshold b(u), which fires with probability 1/(1 + e^−(W − b(u))), by a deterministic threshold neuron whose threshold is sampled once from a logistic distribution with mean b(u). The resulting circuits have a · t auxiliary neurons. [Figure: a circuit with input X and output z]
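
A sketch of why this replacement is distribution-preserving (inverse-CDF sampling; the unit scale matches the firing probability on slide 4):

import math
import random

def sample_threshold(b):
    # Draw T from a logistic distribution with mean b (scale 1) via the
    # inverse CDF: T = b + ln(U / (1 - U)), U ~ Uniform(0, 1).
    U = random.random()
    return b + math.log(U / (1.0 - U))

# A deterministic gate that fires iff W >= T then fires with probability
# Pr[T <= W] = 1 / (1 + e^-(W - b)), exactly the spiking neuron's law.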

25 Step 2: Reducing to Deterministic Circuits for Subset of Inputs
𝑋 π‘Œ A deterministic circuit that solves INDEX for most X values This circuits solves 2 π‘›βˆ’1 functions: 𝒇 𝑿 : 𝟎,𝟏 π’π’π’ˆ 𝒏 β†’ 𝟎,𝟏 , 𝒇 𝑿 (𝒀)= 𝐗 𝐘 β†’ 𝑽π‘ͺ π‘ͺ 𝑫 β‰₯𝚯( 𝒏 π’π’π’ˆπ’ ) But a FF circuit with π‘š gates has VC at most π‘š/logβ‘π‘š We have 𝒂⋅𝒕=𝛀(𝒏/ π₯𝐨 𝐠 𝟐 𝒏) … Circuit is correct 2 𝑛 possible X values

26 Summing Up
SNN for INDEX ⟶ feed-forward SNN for INDEX ⟶ feed-forward deterministic circuits for INDEX. There exists a deterministic circuit C_D with O(a · t) gates that solves INDEX on half of the X-space. C_D has large VC dimension since it solves many functions, hence it must have many gates. [Figure: the chain of reductions]

27 Main Results (Neuro-RAM)
Theorem 1 (Upper Bound): There is a deterministic circuit with O(n/t) auxiliary neurons that implements Neuro-RAM in O(t) rounds (for t ≤ √n). Theorem 2 (Lower Bound): Every t-round randomized circuit requires Ω(n / (t log² n)) auxiliary neurons.

28 Take Home Message
A biologically inspired model for SNNs. Main questions: computational tradeoffs and the role of randomness. Randomness is not needed for Neuro-RAM, but it does help for similarity testing and compression. Remaining problems: exact equality? simpler circuits? Thank You! (Toda Raba!)

