1
Spiking-YOLO: Spiking Neural Network for Energy-Efficient Object Detection
2
Outline
01 Introduction
02 Related work: DNN-to-SNN conversion; Object detection
03 Methods: Channel-wise data-based normalization; Signed neuron featuring imbalanced threshold
04 Evaluation: Experimental setup; Experimental results
3
Introduction
4
Recent success of DNNs is driven by:
I. The development of high-performance computing systems
II. The availability of large amounts of data for model training
Solving more intriguing and advanced problems:
I. (Requires) more sophisticated models and training data
II. (Results in) increased computational overhead and power consumption
To overcome these challenges:
I. Pruning
II. Compression
III. Quantization
5
Introduction
Spiking neural networks (SNNs), the third generation of neural networks, were introduced to mimic how information is encoded and processed in the human brain, by employing spiking neurons as computational units.
01 SNNs transmit information via the precise timing (temporal coding) of spike trains.
02 This provides a sparse yet powerful computing ability.
03 Spiking neurons integrate inputs into a membrane potential when spikes are received and generate (fire) spikes when the membrane potential reaches a certain threshold.
04 SNNs offer exceptional power efficiency and are the preferred choice in neuromorphic architectures.
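To make the integrate-and-fire behavior concrete, here is a minimal Python sketch (our illustration, not from the slides; the class name and the reset-by-subtraction detail are assumptions, though the latter is the reset scheme commonly used in DNN-to-SNN conversion):

```python
class IFNeuron:
    """Minimal integrate-and-fire neuron (illustrative sketch)."""

    def __init__(self, v_th: float = 1.0):
        self.v_th = v_th    # firing threshold
        self.v_mem = 0.0    # membrane potential

    def step(self, weighted_input: float) -> int:
        self.v_mem += weighted_input   # integrate the incoming current
        if self.v_mem >= self.v_th:    # threshold reached
            self.v_mem -= self.v_th    # reset by subtraction
            return 1                   # fire a spike
        return 0                       # stay silent

# A constant input of 0.3 produces a firing rate of roughly 0.3 spikes
# per step, i.e., the spike rate encodes the analog activation value.
neuron = IFNeuron(v_th=1.0)
print(sum(neuron.step(0.3) for _ in range(100)))  # ~30 spikes
```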
6
SNNs have been limited to relatively simple tasks and small datasets. One of the primary reasons for this limited application scope is the lack of scalable training algorithms.
7
DNN-to-SNN conversion methods are based on the idea of importing pre-trained parameters (weights and biases) from a DNN into an SNN.
8
Object detection using DNN-to-SNN conversion methods requires:
- Recognizing multiple overlapping objects
- Calculating precise coordinates
- High numerical precision in predicting the output values of neural networks
9
Several issues arise when object detection is applied in deep SNNs:
- Inefficiency of conventional normalization methods
- Absence of an efficient implementation method for leaky-ReLU in the SNN domain
10
To overcome these issues:
- Channel-wise normalization
- A signed neuron with imbalanced threshold
11
Related work
12
DNN-to-SNN conversion
13
DNN-to-SNN conversion
Due to their event-driven nature, SNNs offer energy-efficient operations.
Training methods for SNNs:
- Unsupervised learning with spike-timing-dependent plasticity (STDP)
- Supervised learning with gradient descent and error back-propagation
Either way, SNNs are difficult to train.
14
STDP is biologically more plausible, but its learning performance is significantly lower than that of supervised learning. As an alternative approach, the conversion of DNNs to SNNs has recently been proposed. However, most previous works have been limited to image classification tasks.
15
Object detection
Object detection locates a single object or multiple objects and then identifies their classes. These models consist of not only a classifier but also a regressor, so predicting the precise coordinates of bounding boxes is critical.
16
Region-based CNNs: R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN.
R-CNN-based networks suffer from slow inference speed due to their multi-stage detection schemes and are thus not suitable for real-time object detection.
17
An alternative: one-stage detection methods, where the bounding box information is extracted and the objects are classified in a unified network. Examples include the single-shot multi-box detector (SSD) and you-only-look-once (YOLO).
18
Methods
19
Object detection challenges in SNNs:
- High numerical precision is required
- Conventional DNN-to-SNN methods suffer from performance degradation
20
Possible explanations for the performance degradation:
- An extremely low firing rate in numerous neurons
- Lack of an efficient implementation method for leaky-ReLU
21
To overcome these issues:
- Channel-wise normalization
- A signed neuron with imbalanced threshold
22
Channel-wise normalization
To prevent under- or over-activation, both the weights and the threshold need to be carefully chosen.
23
Channel-wise normalization
Layer-norm normalizes the weights in a given layer by the layer's maximum activation (see the reconstruction below). When object detection is applied in deep SNNs using such conventional normalization methods, the model suffers significant performance degradation.
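The formula itself did not survive the slide export. Following the standard data-based normalization used in DNN-to-SNN conversion (Diehl et al. 2015), a plausible reconstruction is

$$\tilde{w}^{l} = w^{l} \cdot \frac{\lambda^{l-1}}{\lambda^{l}}, \qquad \tilde{b}^{l} = \frac{b^{l}}{\lambda^{l}},$$

where $\lambda^{l}$ is the maximum activation of layer $l$ measured on the training set, so that every normalized activation lies in [0, 1].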
24
Channel-wise normalization
[Figure 1: normalized maximum activation per channel (x-axis: channel index; y-axis: normalized maximum activation value). The deviation of the normalized activations across channels is relatively large.]
25
Layer-norm yields exceptionally small normalized activations (i.e., under-activation) in the numerous channels that had relatively small activation values prior to normalization. These extremely small normalized activations went unnoticed in image classification, but can be extremely problematic when solving regression problems in deep SNNs. Consequently, they yield low firing rates, which causes information loss when the number of time steps is smaller than required.
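A rough back-of-the-envelope illustration (our numbers, not the slides'): an integrate-and-fire neuron driven by a normalized activation $\hat{a} \in [0, 1]$ emits approximately

$$N_{\text{spikes}} \approx \hat{a} \cdot T$$

spikes over $T$ time steps. With $\hat{a} = 0.001$ and $T = 1000$, the neuron fires at most once, so downstream layers cannot distinguish it from an activation several times larger or smaller.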
26
Proposed normalization method: channel-wise normalization
To enable fast and efficient information transmission in deep SNNs, this method normalizes the weights by the maximum possible activation of each channel, rather than of the whole layer. The maximum activation is calculated from the training dataset (see the sketch below).
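A minimal NumPy sketch of the idea, under our own naming; the per-channel maxima play the role the slides describe, but the function name and tensor shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def channel_norm(weights, act_prev, act_curr):
    """Channel-wise weight normalization (illustrative sketch).

    weights:  conv kernel of shape (kh, kw, c_in, c_out)
    act_prev: previous-layer activations, (N, H, W, c_in),
              collected by running the training set through the DNN
    act_curr: current-layer activations, (N, H, W, c_out)
    """
    lam_prev = act_prev.max(axis=(0, 1, 2))   # per-input-channel max, (c_in,)
    lam_curr = act_curr.max(axis=(0, 1, 2))   # per-output-channel max, (c_out,)
    # Scale each weight by its input channel's maximum and divide by its
    # output channel's maximum, so every channel is normalized into [0, 1]
    # instead of being squashed by a single layer-wide maximum.
    return weights * lam_prev[None, None, :, None] / lam_curr[None, None, None, :]
```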
28
Normalizing in a channel-wise manner eliminates the extremely small activations (under-activation) in channels that had small activation values prior to normalization. Neurons are normalized to obtain a higher yet proper firing rate, which leads to accurate information transmission in a short period of time.
29
Analysis of the improved firing rate
With channel-norm, numerous neurons reached firing rates of up to 80%, whereas with layer-norm the firing rates stayed in the range of 0% to 3.5%.
30
Analysis of the improved firing rate
Channel-norm produces a much higher firing rate in the majority of channels; numerous neurons fire more regularly when channel-norm is applied.
31
Signed neuron featuring imbalanced threshold
Limitation: leaky-ReLU has no efficient implementation in SNNs.
32
Signed neuron featuring imbalanced threshold
The notion of an imbalanced threshold. The underlying dynamics of the signed neuron with the imbalanced threshold (IBT) are reconstructed below.
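The dynamics equation was lost in the export. A reconstruction consistent with the description above (hedged; $\alpha$ denotes the negative slope of leaky-ReLU): the signed neuron keeps one membrane potential but uses two thresholds of different magnitude,

$$
o(t) =
\begin{cases}
+1, & \text{if } V_{\text{mem}}(t) \ge V_{th} \\[4pt]
-1, & \text{if } V_{\text{mem}}(t) \le -\dfrac{V_{th}}{\alpha} \\[4pt]
0, & \text{otherwise.}
\end{cases}
$$

Because the negative threshold has magnitude $V_{th}/\alpha$, a negative input must integrate $1/\alpha$ times more potential per spike, so the negative firing rate encodes $\alpha x$, which is exactly the leaky-ReLU response for $x < 0$.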
34
Evaluation
35
Experimental setup
As the first step towards object detection in deep SNNs, we used a real-time object detection model, Tiny YOLO, a simpler but efficient version of YOLO. Tiny YOLO was tested on the non-trivial PASCAL VOC and MS COCO datasets. Our simulation is based on TensorFlow Eager, and we conducted all experiments on NVIDIA Tesla V100 GPUs.
36
Spiking-YOLO detection results
39
Spiking-YOLO energy efficiency
To investigate the energy efficiency of Spiking-YOLO, we considered two different approaches: (a) comparing the computing operations of Spiking-YOLO and Tiny YOLO in digital signal processing, and (b) comparing Spiking-YOLO on neuromorphic chips vs. Tiny YOLO on GPUs. For a fair comparison, we focused solely on the computational power (MAC and AC operations) used to execute object detection on a single image. According to (Horowitz 2014), a 32-bit floating-point MAC operation consumes 4.6 pJ (0.9 + 3.7 pJ) and an AC operation consumes 0.9 pJ; a 32-bit integer MAC operation consumes 3.2 pJ (0.1 + 3.1 pJ) and an AC operation 0.1 pJ. Based on these measures, we calculated the energy consumption of Tiny YOLO and Spiking-YOLO by multiplying the number of operations (FLOPs) by the energy consumption of the corresponding MAC and AC operations.
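The comparison boils down to multiplying an operation count by a per-operation energy. A minimal sketch with the Horowitz (2014) figures; the operation counts are placeholders of ours, NOT the paper's measured numbers:

```python
# Energy per 32-bit operation from Horowitz (2014), in picojoules.
E_MAC_FP32  = 4.6   # float multiply (3.7 pJ) + float add (0.9 pJ)
E_AC_FP32   = 0.9   # float accumulate only
E_MAC_INT32 = 3.2   # int multiply (3.1 pJ) + int add (0.1 pJ)
E_AC_INT32  = 0.1   # int accumulate only

def energy_uj(num_ops: float, energy_per_op_pj: float) -> float:
    """Total energy in microjoules = op count x per-op energy (pJ -> uJ)."""
    return num_ops * energy_per_op_pj / 1e6

# Placeholder per-image operation counts (illustrative values only):
tiny_yolo_macs   = 3.5e9   # a DNN executes one MAC per weight use
spiking_yolo_acs = 1.0e9   # an SNN executes ACs only, and only where spikes occur

print(f"Tiny YOLO (FP32 MAC):  {energy_uj(tiny_yolo_macs, E_MAC_FP32):.1f} uJ")
print(f"Spiking-YOLO (INT AC): {energy_uj(spiking_yolo_acs, E_AC_INT32):.1f} uJ")
```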
40
Spiking-YOLO energy efficiency [figure; chart not recoverable from the export]
41
Spiking-YOLO energy efficiency [figure, continued]
42
Thank you!
Salman Hatamipour