Efficient GPU training of LSNNs using eProp

Publication type: Proceedings Article
Publication date: 2022-03-28
Abstract
Taking inspiration from machine learning libraries – where techniques such as parallel batch training minimise latency and maximise GPU occupancy – as well as our previous research on efficiently simulating Spiking Neural Networks (SNNs) on GPUs for computational neuroscience, we have extended our GeNN SNN simulator to enable spike-based machine learning research on general purpose hardware. We demonstrate that SNN classifiers implemented using GeNN and trained using the eProp learning rule can provide comparable performance to those trained using Back Propagation Through Time and show that the latency and energy usage of our SNN classifiers is up to 7× lower than an LSTM running on the same GPU hardware.
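To illustrate the eProp learning rule mentioned in the abstract, the sketch below implements its core idea for leaky integrate-and-fire (LIF) neurons: weights are updated from the product of a per-neuron eligibility trace (a surrogate spike derivative times low-pass-filtered presynaptic activity) and a broadcast learning signal. This is a minimal, hypothetical NumPy sketch, not GeNN's GPU implementation; all sizes, constants, and the random learning signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and constants (not taken from the paper)
n_in, n_rec, T = 5, 3, 100
dt, tau_m, v_th = 1.0, 20.0, 1.0
alpha = np.exp(-dt / tau_m)   # membrane decay factor per timestep
lr = 1e-3                     # learning rate

w_in = rng.normal(0.0, 0.5, (n_rec, n_in))  # input weights
v = np.zeros(n_rec)                # membrane potentials
zbar = np.zeros(n_in)              # low-pass-filtered presynaptic spikes
dw = np.zeros_like(w_in)           # accumulated weight update

for t in range(T):
    z_in = (rng.random(n_in) < 0.1).astype(float)  # random input spikes
    v = alpha * v + w_in @ z_in                    # LIF integration
    spikes = (v >= v_th).astype(float)
    v -= spikes * v_th                             # soft reset after spike

    # Pseudo-derivative (surrogate gradient) of the spike threshold
    psi = 0.3 * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))

    # Eligibility trace: pseudo-derivative x filtered presynaptic activity
    zbar = alpha * zbar + z_in
    e_trace = psi[:, None] * zbar[None, :]

    # Broadcast learning signal; in a real classifier this would be the
    # output error projected back through fixed random weights
    L = rng.normal(0.0, 1.0, n_rec)
    dw -= lr * L[:, None] * e_trace

w_in += dw  # apply the accumulated eProp update
```

Because the eligibility traces are updated forward in time alongside the simulation, this rule avoids storing the full activity history that Back Propagation Through Time requires, which is what makes it attractive for low-latency GPU training.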