Open Access | pages 422-438
Efficient Training of Spiking Neural Networks with Multi-parallel Implicit Stream Architecture
Zhigao Cao 1,2, Meng Li 1,2, Xiashuang Wang 3, Haoyu Wang 1,2, Fan Wang 1,2, Youjun Li 1,2, Zi-Gang Huang 1,2
3 The Second Academy of China Aerospace Science and Industry Corporation, Beijing, China
Publication type: Book Chapter
Publication date: 2024-10-31
scimago Q2 | SJR: 0.352 | CiteScore: 2.4 | Impact factor: —
ISSN: 0302-9743, 1611-3349, 1861-2075, 1861-2083
Abstract
Spiking neural networks (SNNs) are a biologically plausible class of neural networks with high energy efficiency. However, SNNs are non-differentiable, and their training memory cost grows with the number of simulation steps. To address these challenges, this work introduces an implicit training method for SNNs inspired by equilibrium models. Our method relies on the multi-parallel implicit stream architecture (MPIS-SNNs). In the forward process, MPIS-SNNs drive multiple fused parallel implicit streams (ISs) to reach an equilibrium state simultaneously. In the backward process, MPIS-SNNs rely solely on a single-time-step simulation of SNNs, avoiding the storage of a large number of activations. Extensive experiments on N-MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 demonstrate that MPIS-SNNs exhibit low latency, low memory cost, low firing rates, and fast convergence, and are competitive with the latest efficient training methods for SNNs. Our code is available at an anonymized GitHub repository: https://github.com/kiritozc/MPIS-SNNs.
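The equilibrium idea the abstract relies on can be sketched in a few lines. This is not the authors' implementation (see the linked repository for that); it is a hypothetical single-neuron illustration, with a sigmoid surrogate standing in for the spiking nonlinearity, of why the forward pass can iterate to a fixed point while the backward pass needs only the equilibrium itself rather than the whole trajectory.

```python
import math

def surrogate_activation(u):
    # Hypothetical smooth stand-in for the spiking nonlinearity:
    # a sigmoid approximating the firing rate of a neuron with input u.
    return 1.0 / (1.0 + math.exp(-u))

def forward_to_equilibrium(x, w_rec, w_in, tol=1e-12, max_iter=1000):
    """Drive a single implicit stream to its fixed point z* = f(z*).

    Iterates z_{t+1} = sigma(w_rec * z_t + w_in * x); with
    |w_rec * sigma'| < 1 the map is a contraction, so it converges.
    Only the final equilibrium must be kept for the backward pass,
    not the intermediate activations -- the memory saving the
    abstract describes.
    """
    z = 0.0
    for _ in range(max_iter):
        z_next = surrogate_activation(w_rec * z + w_in * x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

def implicit_gradient(x, w_rec, w_in):
    """Gradient of the equilibrium w.r.t. the input x via the implicit
    function theorem: dz*/dx = s' * w_in / (1 - s' * w_rec), with s'
    evaluated only at z* -- a single-step computation, no stored
    trajectory."""
    z_star = forward_to_equilibrium(x, w_rec, w_in)
    s = surrogate_activation(w_rec * z_star + w_in * x)
    ds = s * (1.0 - s)  # sigmoid derivative at the equilibrium
    return ds * w_in / (1.0 - ds * w_rec)
```

The contraction condition is what makes the single-step backward pass valid: once the iteration has settled, differentiating the fixed-point equation directly gives the gradient, so no simulation steps need to be stored or backpropagated through.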
Metrics
Total citations: 0
Cite this
GOST
Cao Z. et al. Efficient Training of Spiking Neural Networks with Multi-parallel Implicit Stream Architecture // Lecture Notes in Computer Science. 2024. pp. 422-438.
GOST (all authors)
Cao Z., Li M., Wang X., Wang H., Wang F., Li Y., Huang Z. Efficient Training of Spiking Neural Networks with Multi-parallel Implicit Stream Architecture // Lecture Notes in Computer Science. 2024. pp. 422-438.
RIS
TY - CHAP
DO - 10.1007/978-3-031-72754-2_24
UR - https://link.springer.com/10.1007/978-3-031-72754-2_24
TI - Efficient Training of Spiking Neural Networks with Multi-parallel Implicit Stream Architecture
T2 - Lecture Notes in Computer Science
AU - Cao, Zhigao
AU - Li, Meng
AU - Wang, Xiashuang
AU - Wang, Haoyu
AU - Wang, Fan
AU - Li, Youjun
AU - Huang, Zi-Gang
PY - 2024
DA - 2024/10/31
PB - Springer Nature
SP - 422-438
SN - 0302-9743
SN - 1611-3349
SN - 1861-2075
SN - 1861-2083
ER -
BibTeX
@incollection{2024_Cao,
author = {Zhigao Cao and Meng Li and Xiashuang Wang and Haoyu Wang and Fan Wang and Youjun Li and Zi-Gang Huang},
title = {Efficient Training of Spiking Neural Networks with Multi-parallel Implicit Stream Architecture},
booktitle = {Lecture Notes in Computer Science},
publisher = {Springer Nature},
year = {2024},
month = {oct},
pages = {422--438},
doi = {10.1007/978-3-031-72754-2_24}
}