Nature Machine Intelligence, volume 5, issue 5, pages 518-527

Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time

Publication type: Journal Article
Publication date: 2023-05-08
scimago Q1
SJR: 5.940
CiteScore: 36.9
Impact factor: 18.8
ISSN: 2522-5839
Computer Networks and Communications
Artificial Intelligence
Software
Human-Computer Interaction
Computer Vision and Pattern Recognition
Abstract
With recent advances in learning algorithms, recurrent networks of spiking neurons are achieving performance that is competitive with vanilla recurrent neural networks. However, these algorithms are limited to small networks of simple spiking neurons and modest-length temporal sequences, as they impose high memory requirements, have difficulty training complex neuron models and are incompatible with online learning. Here, we show how the recently developed Forward-Propagation Through Time (FPTT) learning combined with novel liquid time-constant spiking neurons resolves these limitations. Applying FPTT to networks of such complex spiking neurons, we demonstrate online learning of exceedingly long sequences while outperforming current online methods and approaching or outperforming offline methods on temporal classification tasks. The efficiency and robustness of FPTT enable us to directly train a deep and performant spiking neural network for joint object localization and recognition, demonstrating the ability to train large-scale dynamic and complex spiking neural network architectures.

Memory-efficient online training of recurrent spiking neural networks without compromising accuracy is an open challenge in neuromorphic computing. Yin and colleagues demonstrate that training a recurrent neural network consisting of so-called liquid time-constant spiking neurons using an algorithm called Forward-Propagation Through Time allows for online learning and state-of-the-art performance at a reduced computational cost compared with existing approaches.
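The liquid time-constant spiking neuron at the core of the paper can be pictured as a leaky integrate-and-fire unit whose membrane time constant is computed from the current input and state rather than fixed. The following minimal Python sketch illustrates that idea only; the function names, the sigmoid parameterization of tau, and all constants are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ltc_lif_step(v, x, w_in, w_tau, dt=1.0, v_th=1.0):
    """One step of a liquid time-constant LIF neuron (illustrative).

    The membrane time constant tau is not fixed: it is predicted from the
    current input x and membrane state v, then bounded to (1, 10).
    """
    tau = 1.0 + 9.0 * sigmoid(w_tau[0] * x + w_tau[1] * v)  # state-dependent tau
    decay = np.exp(-dt / tau)
    v = decay * v + (1.0 - decay) * (w_in * x)              # leaky integration
    spike = float(v >= v_th)
    v = v * (1.0 - spike)                                   # hard reset on spike
    return v, spike, tau

# drive the neuron with a constant suprathreshold input and collect spikes
v, spikes = 0.0, []
for t in range(100):
    v, s, tau = ltc_lif_step(v, x=1.5, w_in=1.0, w_tau=(0.5, -0.3))
    spikes.append(s)
```

Because the steady-state drive (1.5) exceeds the threshold, the neuron spikes repeatedly, with an inter-spike interval set by the input-dependent tau.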
Bohnstingl T., Wozniak S., Pantazi A., Eleftheriou E.
2023-11-01 citations by CoLab: 27 Abstract  
Biological neural networks are equipped with an inherent capability to continuously adapt through online learning. This aspect remains in stark contrast to learning with error backpropagation through time (BPTT) that involves offline computation of the gradients due to the need to unroll the network through time. Here, we present an alternative online learning algorithm framework for deep recurrent neural networks (RNNs) and spiking neural networks (SNNs), called online spatio-temporal learning (OSTL). It is based on insights from biology and proposes the clear separation of spatial and temporal gradient components. For shallow SNNs, OSTL is gradient equivalent to BPTT enabling for the first time online training of SNNs with BPTT-equivalent gradients. In addition, the proposed formulation unveils a class of SNN architectures trainable online at low time complexity. Moreover, we extend OSTL to a generic form, applicable to a wide range of network architectures, including networks comprising long short-term memory (LSTM) and gated recurrent units (GRUs). We demonstrate the operation of our algorithm framework on various tasks from language modeling to speech recognition and obtain results on par with the BPTT baselines.
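OSTL's separation of spatial and temporal gradient components can be illustrated on a single scalar leaky unit: the temporal component is carried forward in an eligibility-like trace, while the spatial component is the instantaneous error, so no unrolling through time is needed. This toy reduction is an assumption for illustration; OSTL's full formulation covers layered SNNs and gated RNNs:

```python
# Online gradient for a single leaky unit h_t = a*h_{t-1} + w*x_t with
# per-step loss L_t = 0.5*(h_t - target)^2. The trace e carries the
# temporal component dh_t/dw forward; err is the spatial component dL_t/dh.
a, w = 0.8, 0.5
h, e = 0.0, 0.0
xs = [1.0, 0.5, -0.3, 0.9]
target = 1.0
grads = []
for x in xs:
    e = a * e + x          # temporal trace: dh_t/dw, accumulated forward in time
    h = a * h + w * x      # forward state update
    err = h - target       # spatial component: dL_t/dh
    grads.append(err * e)  # online gradient dL_t/dw, no backward unrolling
```

For this linear unit the product `err * e` equals the exact BPTT gradient of each per-step loss, mirroring OSTL's gradient equivalence for shallow cases.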
Zou Z., Alimohamadi H., Zakeri A., Imani F., Kim Y., Najafi M.H., Imani M.
Scientific Reports scimago Q1 wos Q1 Open Access
2022-05-10 citations by CoLab: 18 PDF Abstract  
Recently, brain-inspired computing models have shown great potential to outperform today’s deep learning solutions in terms of robustness and energy efficiency. Particularly, Spiking Neural Networks (SNNs) and HyperDimensional Computing (HDC) have shown promising results in enabling efficient and robust cognitive learning. Despite the success, these two brain-inspired models have different strengths. While SNN mimics the physical properties of the human brain, HDC models the brain on a more abstract and functional level. Their design philosophies demonstrate complementary patterns that motivate their combination. With the help of the classical psychological model on memory, we propose SpikeHD, the first framework that fundamentally combines Spiking neural network and hyperdimensional computing. SpikeHD generates a scalable and strong cognitive learning system that better mimics brain functionality. SpikeHD exploits spiking neural networks to extract low-level features by preserving the spatial and temporal correlation of raw event-based spike data. Then, it utilizes HDC to operate over SNN output by mapping the signal into high-dimensional space, learning the abstract information, and classifying the data. Our extensive evaluation on a set of benchmark classification problems shows that SpikeHD provides the following benefit compared to SNN architecture: (1) significantly enhance learning capability by exploiting two-stage information processing, (2) enables substantial robustness to noise and failure, and (3) reduces the network size and required parameters to learn complex information.
Mehonic A., Kenyon A.J.
Nature scimago Q1 wos Q1
2022-04-13 citations by CoLab: 314 Abstract  
New computing technologies inspired by the brain promise fundamentally different ways to process information with extreme energy efficiency and the ability to handle the avalanche of unstructured and noisy data that we are generating at an ever-increasing rate. To realize this promise requires a brave and coordinated plan to bring together disparate research communities and to provide them with the funding, focus and support needed. We have done this in the past with digital technologies; we are in the process of doing it with quantum technologies; can we now do it for brain-inspired computing? The benefits and future prospects of neuromorphic, or bio-inspired, computing technologies are discussed, as is the need for a global, coordinated approach to funding, research and collaboration.
Knight J.C., Nowotny T.
2022-03-28 citations by CoLab: 5 Abstract  
Taking inspiration from machine learning libraries – where techniques such as parallel batch training minimise latency and maximise GPU occupancy – as well as our previous research on efficiently simulating Spiking Neural Networks (SNNs) on GPUs for computational neuroscience, we have extended our GeNN SNN simulator to enable spike-based machine learning research on general purpose hardware. We demonstrate that SNN classifiers implemented using GeNN and trained using the eProp learning rule can provide comparable performance to those trained using Back Propagation Through Time and show that the latency and energy usage of our SNN classifiers is up to 7 × lower than an LSTM running on the same GPU hardware.
Miquel J.R., Tolu S., Scholler F.E., Galeazzi R.
2021-11-26 citations by CoLab: 6 Abstract  
The paper proposes a method to translate a deep convolutional neural network into an equivalent spiking neural network towards the fulfillment of robust object detection in a resource-constrained platform. The aim is to provide a conversion framework that is not restricted to shallow network structures and classification problems as in state-of-the-art conversion libraries. The results show that models of higher complexity, such as the RetinaNet object detector, can be converted through rate encoding of the activations with limited loss in performance.
Scherr F., Maass W.
2021-11-19 citations by CoLab: 4 Abstract  
The neocortex can be viewed as a tapestry consisting of variations of rather stereotypical local cortical microcircuits. Hence understanding how these microcircuits compute holds the key to understanding brain function. Intense research efforts over several decades have culminated in a detailed model of a generic cortical microcircuit in the primary visual cortex from the Allen Institute. We are presenting here methods and first results for understanding computational properties of this large-scale data-based model. We show that it can solve a standard image-change-detection task almost as well as the living brain. Furthermore, we unravel the computational strategy of the model and elucidate the computational role of diverse subtypes of neurons. Altogether this work demonstrates the feasibility and scientific potential of a methodology based on close interaction of detailed data and large-scale computer modelling for understanding brain function.
He Y., Corradi F., Shi C., Ding M., Timmermans M., Stuijt J., Harpe P., Ocket I., Liu Y.
2021-11-07 citations by CoLab: 11 Abstract  
This paper presents an event-driven neuromorphic sensing system capable of performing on-chip feature extraction and “send-on-delta” transmission for insertable cardiac monitoring. A background offset calibration improves the SNDR of clockless level-crossing ADCs. A fully synthesized spiking neural network extracts full ECG PQRST features with <1 ms time precision. An event-driven body channel communication minimizes transmission energy. The prototype is fabricated in 40 nm CMOS and consumes 28.2 μW system power.
Chakraborty B., She X., Mukhopadhyay S.
2021-10-27 citations by CoLab: 31 Abstract  
This paper proposes a Fully Spiking Hybrid Neural Network (FSHNN) for energy-efficient and robust object detection in resource-constrained platforms. The network architecture is based on a Spiking Convolutional Neural Network using leaky-integrate-fire neuron models. The model combines unsupervised Spike Time-Dependent Plasticity (STDP) learning with back-propagation (STBP) learning methods and also uses Monte Carlo Dropout to get an estimate of the uncertainty error. FSHNN provides better accuracy compared to DNN based object detectors while being more energy-efficient. It also outperforms these object detectors, when subjected to noisy input data and less labeled training data with a lower uncertainty error.
Yin B., Corradi F., Bohté S.M.
Nature Machine Intelligence scimago Q1 wos Q1
2021-10-17 citations by CoLab: 136 Abstract  
Inspired by detailed modelling of biological neurons, spiking neural networks (SNNs) are investigated as biologically plausible and high-performance models of neural computation. The sparse and binary communication between spiking neurons potentially enables powerful and energy-efficient neural networks. The performance of SNNs, however, has remained lacking compared with artificial neural networks. Here we demonstrate how an activity-regularizing surrogate gradient combined with recurrent networks of tunable and adaptive spiking neurons yields the state of the art for SNNs on challenging benchmarks in the time domain, such as speech and gesture recognition. This also exceeds the performance of standard classical recurrent neural networks and approaches that of the best modern artificial neural networks. As these SNNs exhibit sparse spiking, we show that they are theoretically one to three orders of magnitude more computationally efficient compared to recurrent neural networks with similar performance. Together, this positions SNNs as an attractive solution for AI hardware implementations.

The use of sparse signals in spiking neural networks, modelled on biological neurons, offers in principle a highly efficient approach for artificial neural networks when implemented on neuromorphic hardware, but new training approaches are needed to improve performance. Using a new type of activity-regularizing surrogate gradient for backpropagation combined with recurrent networks of tunable and adaptive spiking neurons, state-of-the-art performance for spiking neural networks is demonstrated on benchmarks in the time domain.
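The surrogate-gradient trick mentioned above replaces the undefined derivative of the hard spike threshold with a smooth stand-in on the backward pass. A minimal sketch, using the fast-sigmoid derivative as one common surrogate choice (the exact activity-regularizing surrogate of Yin et al. differs in its details):

```python
import numpy as np

def spike_forward(v, v_th=1.0):
    """Forward pass: hard-threshold (Heaviside) spike, non-differentiable."""
    return (v >= v_th).astype(float)

def spike_surrogate_grad(v, v_th=1.0, slope=10.0):
    """Backward pass: surrogate derivative used in place of the Heaviside's.

    Here the derivative of a fast sigmoid, peaked at the threshold
    (an illustrative choice of surrogate, not the paper's exact one).
    """
    return 1.0 / (1.0 + slope * np.abs(v - v_th)) ** 2

v = np.array([0.2, 0.9, 1.1, 2.0])
s = spike_forward(v)          # binary spikes
g = spike_surrogate_grad(v)   # largest for membrane potentials near threshold
```

Training then backpropagates `g` through the spike nonlinearity, so weight updates flow even though the forward output is binary.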
Perez-Nieves N., Leung V.C., Dragotti P.L., Goodman D.F.
Nature Communications scimago Q1 wos Q1 Open Access
2021-10-04 citations by CoLab: 146 PDF Abstract  
The brain is a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and has been relatively little explored in models which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that heterogeneity substantially improved task performance. Learning with heterogeneity was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distribution of neuronal parameters in the trained networks is similar to those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, but rather may serve an active and important role in allowing animals to learn in changing environments.

The authors show that heterogeneity in spiking neural networks improves accuracy and robustness of prediction for complex information processing tasks, results in optimal parameter distribution similar to experimental data and is metabolically efficient for learning tasks at varying timescales.
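The heterogeneity idea above amounts to giving each neuron its own time constant instead of one shared value. A minimal sketch: sample per-neuron membrane time constants from a skewed distribution and convert them to per-step leak factors (the gamma distribution and all constants here are illustrative assumptions, chosen only because the trained distributions in the paper are similarly skewed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# homogeneous baseline: every neuron shares tau = 20 ms
tau_homogeneous = np.full(n, 20.0)

# heterogeneous layer: each neuron draws its own tau, mean ~20 ms
tau_heterogeneous = rng.gamma(shape=3.0, scale=20.0 / 3.0, size=n)

# per-step membrane leak factors for a dt = 1 ms simulation
decay_het = np.exp(-1.0 / tau_heterogeneous)
```

The heterogeneous layer then integrates inputs over a spread of timescales, which is what the paper finds helpful for tasks with rich temporal structure.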
Fang W., Yu Z., Chen Y., Masquelier T., Huang T., Tian Y.
2021-10-01 citations by CoLab: 304 Abstract  
Spiking Neural Networks (SNNs) have attracted enormous research interest due to temporal information processing capability, low power consumption, and high biological plausibility. However, the formulation of efficient and high-performance learning algorithms for SNNs is still challenging. Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron. These parameters are typically chosen to be the same for all neurons, which limits the diversity of neurons and thus the expressiveness of the resulting SNNs. In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs. We show that incorporating learnable membrane time constants can make the network less sensitive to initial values and can speed up learning. In addition, we reevaluate the pooling methods in SNNs and find that max-pooling will not lead to significant information loss and has the advantages of low computation cost and binary compatibility. We evaluate the proposed method for image classification tasks on both traditional static MNIST, Fashion-MNIST, CIFAR-10 datasets, and neuromorphic N-MNIST, CIFAR10-DVS, DVS128 Gesture datasets. The experimental results show that the proposed method outperforms the state-of-the-art accuracy on nearly all datasets, using fewer time-steps. Our codes are available at https://github.com/fangwei123456/Parametric-Leaky-Integrate-and-Fire-Spiking-Neuron.
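The learnable-time-constant idea above can be sketched as a LIF step whose membrane decay is a trainable scalar, reparameterized through a sigmoid so it always stays in (0, 1). This follows the spirit, not the exact code, of the parametric LIF neuron; names and constants are illustrative:

```python
import numpy as np

def plif_step(v, x, w, tau_param, v_th=1.0):
    """One step of a parametric LIF neuron (illustrative sketch).

    tau_param is the trainable parameter; the sigmoid keeps the resulting
    decay in (0, 1), so it can be learned jointly with the weights.
    """
    decay = 1.0 / (1.0 + np.exp(-tau_param))  # learnable membrane leak
    v = decay * v + (1.0 - decay) * (w * x)   # leaky integration of the input
    spike = float(v >= v_th)
    return v * (1.0 - spike), spike           # hard reset after a spike

# a strong constant input with tau_param = 0 (decay = 0.5) drives regular spiking
v, spikes = 0.0, []
for _ in range(50):
    v, s = plif_step(v, x=2.0, w=1.0, tau_param=0.0)
    spikes.append(s)
```

In training, the gradient with respect to `tau_param` flows through `decay`, which is what lets the network adapt each layer's effective timescale.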
Beniaguev D., Segev I., London M.
Neuron scimago Q1 wos Q1
2021-09-01 citations by CoLab: 149 Abstract  
Utilizing recent advances in machine learning, we introduce a systematic approach to characterize neurons' input/output (I/O) mapping complexity. Deep neural networks (DNNs) were trained to faithfully replicate the I/O function of various biophysical models of cortical neurons at millisecond (spiking) resolution. A temporally convolutional DNN with five to eight layers was required to capture the I/O mapping of a realistic model of a layer 5 cortical pyramidal cell (L5PC). This DNN generalized well when presented with inputs widely outside the training distribution. When NMDA receptors were removed, a much simpler network (fully connected neural network with one hidden layer) was sufficient to fit the model. Analysis of the DNNs' weight matrices revealed that synaptic integration in dendritic branches could be conceptualized as pattern matching from a set of spatiotemporal templates. This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.
Kag A., Saligrama V.
2021-06-01 citations by CoLab: 8 Abstract  
We propose a learning method that dynamically modifies the time-constants of the continuous-time counterpart of a vanilla RNN. The time-constants are modified based on the current observation and hidden state. Our proposal overcomes the issues of RNN trainability, by mitigating exploding and vanishing gradient phenomena based on placing novel constraints on the parameter space, and by suppressing noise in inputs based on pondering over informative inputs to strengthen their contribution in the hidden state. As a result, our method is computationally efficient, overcoming the overheads of many existing methods that also attempt to improve RNN training. Our RNNs, despite being simpler and having a light memory footprint, show competitive performance against standard LSTMs and baseline RNN models on many benchmark datasets, including those that require long-term memory.
Stuijt J., Sifalakis M., Yousefzadeh A., Corradi F.
Frontiers in Neuroscience scimago Q2 wos Q2 Open Access
2021-05-19 citations by CoLab: 69 PDF Abstract  
The development of brain-inspired neuromorphic computing architectures as a paradigm for Artificial Intelligence (AI) at the edge is a candidate solution that can meet strict energy and cost reduction constraints in the Internet of Things (IoT) application areas. Toward this goal, we present μBrain: the first digital yet fully event-driven, clockless architecture, with co-located memory and processing capability that exploits event-based processing to reduce an always-on system's overall energy consumption (μW dynamic operation). The chip area in a 40 nm Complementary Metal Oxide Semiconductor (CMOS) digital technology is 2.82 mm² including pads (1.42 mm² without pads). This small area footprint enables μBrain integration in re-trainable sensor ICs to perform various signal processing tasks, such as data preprocessing, dimensionality reduction, feature selection, and application-specific inference. We present an instantiation of the μBrain architecture in a 40 nm CMOS digital chip and demonstrate its efficiency in a radar-based gesture classification with a power consumption of 70 μW and energy consumption of 340 nJ per classification. As a digital architecture, μBrain is fully synthesizable and lends itself to a fast development-to-deployment cycle in Application-Specific Integrated Circuits (ASIC). To the best of our knowledge, μBrain is the first tiny-scale digital, spike-based, fully parallel, non-von-Neumann architecture (with no schedules, clocks, or state machines). For these reasons, μBrain is ultra-low-power and offers software-to-hardware fidelity. μBrain enables always-on neuromorphic computing in IoT sensor nodes that require running on battery power for years.
Hasani R., Lechner M., Amini A., Rus D., Grosu R.
2021-05-18 citations by CoLab: 104 Abstract  
We introduce a new class of time-continuous recurrent neural network models. Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems modulated via nonlinear interlinked gates. The resulting models represent dynamical systems with varying (i.e., liquid) time-constants coupled to their hidden state, with outputs being computed by numerical differential equation solvers. These neural networks exhibit stable and bounded behavior, yield superior expressivity within the family of neural ordinary differential equations, and give rise to improved performance on time-series prediction tasks. To demonstrate these properties, we first take a theoretical approach to find bounds over their dynamics, and compute their expressive power by the trajectory length measure in a latent trajectory space. We then conduct a series of time-series prediction experiments to manifest the approximation capability of Liquid Time-Constant Networks (LTCs) compared to classical and modern RNNs.
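The liquid time-constant dynamics described above, dx/dt = -x/τ + f(x, I)(A - x), make the effective time constant τ/(1 + τ·f(x, I)) vary with the input. A single-scalar Euler-integration sketch of this form (parameter names, the gating nonlinearity, and all constants are illustrative assumptions, not the paper's full network):

```python
import numpy as np

def ltc_cell_step(x, inp, params, dt=0.1):
    """One Euler step of a scalar liquid time-constant cell:

        dx/dt = -x / tau + f(x, I) * (A - x)

    The effective time constant tau / (1 + tau * f) changes with the input,
    i.e. it is 'liquid'. f is a nonnegative gating nonlinearity (illustrative).
    """
    tau, A, w, b = params
    f = np.tanh(w * inp + b) ** 2        # nonnegative input-dependent gate
    dx = -x / tau + f * (A - x)
    return x + dt * dx

# drive the cell with a constant input until it settles at its fixed point
x = 0.0
for _ in range(200):
    x = ltc_cell_step(x, inp=1.0, params=(2.0, 1.0, 1.5, 0.0))
```

Setting dx/dt = 0 gives the fixed point x* = f·A / (1/τ + f), so the state stays bounded between 0 and A, consistent with the stability properties claimed in the abstract.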
Zhang Y., Yin B., Gomony M.D., Corporaal H., Trinitis C., Corradi F.
2025-03-11 citations by CoLab: 0 PDF Abstract  
Edge devices execute pre-trained Artificial Intelligence (AI) models optimized on large Graphical Processing Units (GPUs); however, they frequently require fine-tuning when deployed in the real world. This fine-tuning, referred to as edge learning, is essential for personalized tasks such as speech and gesture recognition, which often necessitate the use of recurrent neural networks (RNNs). However, training RNNs on edge devices presents major challenges due to limited memory and computing resources. In this study, we propose a system for RNN training through sequence partitioning using the Forward Propagation Through Time (FPTT) training method, thereby enabling edge learning. Our optimized hardware/software co-design for FPTT represents a novel contribution in this domain. This research demonstrates the viability of FPTT for fine-tuning real-world applications by implementing a complete computational framework for training Long Short-Term Memory (LSTM) networks utilizing FPTT. Moreover, this work incorporates the optimization and exploration of a scalable digital hardware architecture using an open-source hardware-design framework, named Chipyard and its implementation on a Field-Programmable Gate Array (FPGA) for cycle-accurate verification. The empirical results demonstrate that partitioned training on the proposed architecture enables an 8.2-fold reduction in memory usage with only a 0.2× increase in latency for small-batch sequential MNIST (S-MNIST) compared to traditional non-partitioned training.
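The FPTT update that such a system implements can be sketched on a single scalar parameter: each step minimizes the instantaneous loss plus a dynamic regularizer pulling the weight toward a running average, which is then itself updated. This follows Kag and Saligrama's formulation reduced to one parameter; the model, learning rate, and α value are illustrative assumptions:

```python
import numpy as np

def fptt_train(xs, ys, lr=0.05, alpha=0.5):
    """Schematic FPTT on a scalar linear model y_hat = w * x.

    Per step: gradient-descend on loss_t + (alpha/2) * (w - w_bar)^2,
    then update the running average w_bar using the fresh loss gradient.
    """
    w, w_bar = 0.0, 0.0
    for x, y in zip(xs, ys):
        grad = (w * x - y) * x + alpha * (w - w_bar)    # d/dw of regularized loss
        w = w - lr * grad
        grad_new = (w * x - y) * x                      # loss gradient at updated w
        w_bar = 0.5 * (w_bar + w) - (1.0 / (2 * alpha)) * grad_new
    return w

# learn y = 2x fully online, one (x, y) pair per step, no unrolling in time
xs = np.random.RandomState(0).uniform(0.5, 1.5, 500)
ys = 2.0 * xs
w = fptt_train(xs, ys)
```

Because each update touches only the current step's loss, memory does not grow with sequence length, which is the property the partitioned hardware training above exploits.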
Li X., Chen X., Guo R., Wu Y., Zhou Z., Yu F., Lu H.
2025-03-01 citations by CoLab: 0
Wang T., Shen Q., Li X., Zhang Y., Wang Z., Yan C.
Applied Intelligence scimago Q2 wos Q2
2025-02-19 citations by CoLab: 0
Herbozo Contreras L.F., Yu L., Huang Z., Nikpour A., Kavehei O.
2025-02-12 citations by CoLab: 0 Abstract  
Epilepsy is a significant global health issue, requiring dependable diagnostic tools like scalp electroencephalography (scalp-EEG), sub-scalp EEG, and intracranial EEG (iEEG) for precise seizure detection and treatment. AI has emerged as a powerful tool in this domain, offering the potential for real-time, responsive monitoring. Traditional methods often rely on feature extraction techniques like the Short-Time Fourier Transform (STFT), which can increase power consumption, making them less suitable for deployment on edge devices. While large models can improve accuracy without STFT, their size also limits their practicality for edge applications. This study introduces Liquid-Dendrite, a novel bio-inspired model for seizure detection, leveraging Liquid Time-Constant Spiking Neurons (LTC-SN) and dendritic spiking neurons (dSN) with heterogeneous time constants. The model comprises two hidden layers with dendritic neurons and one layer of liquid time-constant networks. Our model achieves a memory footprint of 535 KB with 130 K trainable parameters. The model was tested across the most noteworthy epilepsy datasets for scalp EEG (TUH and CHB-MIT) and iEEG (EPILEPSIAE). Our model demonstrated commendable performance, achieving AUROC scores of 83%, 96%, and 93%, respectively, outperforming some existing models in an energy- and memory-efficient way. Moreover, we conducted a robustness test by blacking out EEG channels at the inference stage, where we showed the ability of our network to work with fewer channels. We could deploy our tiny model and perform inference at the edge on a Raspberry Pi 5 without the need for additional quantization. This highlights the potential of neuro-inspired AI for efficient, small-scale, energy-embedded AI systems across different brain modalities.
Mehmood A., Ilyas A., Ilyas H.
Neuroinformatics scimago Q1 wos Q3
2025-02-01 citations by CoLab: 0 Abstract  
The bidirectional interactions between the brain and heart through the autonomic nervous system are the prime focus of the neuro-cardiology community. The computer models designed to analyze brain and heart signals are either complex in terms of molecular and cellular interactions or not capable of representing the complex ion channel dynamics. Therefore, scientists are unable to extract the overall behavior of organs from the electrical response of heterogeneous cells of the brain and heart. In this study, a unified model of excitable cells is proposed that can be modulated by adrenergic features. By implementing the proposed model, a sparsely coupled cardio-neural network of one thousand cells is simulated. The major findings of the study include: i. cardiac heterogeneity in the electrical behavior of cardiac myocytes is the prime factor of heart rate variability; ii. the brain-heart interplay through electrical pulses holds the necessary information of brain and heart signals that can be analyzed through spiking neural networks; iii. heart rate variability can be predicted and monitored by spiking neural networks from electrophysiological recordings of the brain and heart; iv. heart rate variability related to tachycardia and bradycardia depends upon the polarization protocols of cardiac myocytes during the plateau phase of the action potential. This study provides the modeling and simulation phase of a brain-heart interface to predict morbidity at early stages. The recent advancements in nano-electronics will make it possible to develop the brain-heart interface as a nano-chip deployed in a subject to stimulate the brain-heart interplay through electrophysiological signals.
Li D., Huang S., Wen G., Zhang Z.
Cognitive Computation scimago Q1 wos Q1
2025-01-14 citations by CoLab: 0 Abstract  
The human brain comprises distinct regions, each with specific functions. Interconnected through neural pathways, the brain regions collaborate to process complex information. Similarly, ensemble learning enhances pattern classification by leveraging the collaboration and complementarity between classifiers. The similarity between the two suggests that simulating the brain’s functional network holds the potential for groundbreaking advancements in the design of ensemble learning algorithms. Motivated by this, our paper proposes a brain-inspired ensemble pruning method called BrainEnsemble. This method provides an example of using classifier combinations to emulate the functions of brain regions. Guided by the principles of curriculum learning and the divide-and-conquer strategy, each artificial brain region can specialize in specific functions and tasks. Additionally, BrainEnsemble simulates the brain regions’ responses and connectivity mechanisms through graph connections. In this model, different artificial brain regions can dynamically reorganize and adjust their interactions to adapt to continuously changing environments or data distributions, enabling the model to maintain high performance when confronted with new data. Extensive experimental results demonstrate the superior performance of BrainEnsemble. In summary, drawing inspiration from the information processing mechanism of the human brain can provide new ideas for the design of ensemble learning algorithms, and more research can be conducted in this direction in the future.
Wang S., Wang Z., Li C., Qi X., So H.K.
IEEE Access scimago Q1 wos Q2 Open Access
2025-01-01 citations by CoLab: 0
Schegolev A.E., Bastrakova M.V., Sergeev M.A., Maksimovskaya A.A., Klenov N.V., Soloviev I.
2024-12-05 citations by CoLab: 0 PDF Abstract  
The extensive development of the field of spiking neural networks has led to many areas of research that have a direct impact on people’s lives. As the most bio-similar of all neural networks, spiking neural networks not only allow for the solution of recognition and clustering problems (including dynamics), but they also contribute to the growing understanding of the human nervous system. Our analysis has shown that hardware implementation is of great importance, since the specifics of the physical processes in the network cells affect their ability to simulate the neural activity of living neural tissue, the efficiency of certain stages of information processing, storage and transmission. This survey reviews existing hardware neuromorphic implementations of bio-inspired spiking networks in the “semiconductor”, “superconductor”, and “optical” domains. Special attention is given to the potential for effective “hybrids” of different approaches.
Liang Z., Fang X., Liang Z., Xiong J., Deng F., Nyamasvisva T.E.
iScience scimago Q1 wos Q1 Open Access
2024-11-01 citations by CoLab: 0
Cao Z., Li M., Wang X., Wang H., Wang F., Li Y., Huang Z.
2024-10-31 citations by CoLab: 0 Abstract  
Spiking neural networks (SNNs) are a novel type of bio-plausible neural network with energy efficiency. However, SNNs are non-differentiable and the training memory costs increase with the number of simulation steps. To address these challenges, this work introduces an implicit training method for SNNs inspired by equilibrium models. Our method relies on the multi-parallel implicit stream architecture (MPIS-SNNs). In the forward process, MPIS-SNNs drive multiple fused parallel implicit streams (ISs) to reach equilibrium state simultaneously. In the backward process, MPIS-SNNs solely rely on a single-time-step simulation of SNNs, avoiding the storage of a large number of activations. Extensive experiments on N-MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100 demonstrate that MPIS-SNNs exhibit excellent characteristics such as low latency, low memory cost, low firing rates, and fast convergence speed, and are competitive among the latest efficient training methods for SNNs. Our code is available at an anonymized GitHub repository: https://github.com/kiritozc/MPIS-SNNs .
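The equilibrium-model idea underlying the abstract above is that the forward pass iterates a state update until it stops changing, and the fixed point serves as the layer output, so backward computation needs only the equilibrium state rather than the whole trajectory. A scalar toy version of that forward iteration (the tanh map and all constants are illustrative, not MPIS-SNNs' actual architecture):

```python
import numpy as np

def forward_to_equilibrium(x, w=0.5, tol=1e-8, max_iter=1000):
    """Iterate z <- tanh(w*z + x) to its fixed point (a contraction for |w| < 1).

    The returned z* satisfies z* = tanh(w*z* + x); an equilibrium model would
    differentiate implicitly through this relation instead of storing every
    intermediate activation.
    """
    z = 0.0
    for _ in range(max_iter):
        z_next = np.tanh(w * z + x)
        if abs(z_next - z) < tol:
            break
        z = z_next
    return z_next

z_star = forward_to_equilibrium(0.7)
```

Because only the fixed point is kept, memory cost is independent of how many iterations the forward pass takes, which parallels the low memory cost claimed for the implicit streams.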
Herbozo Contreras L.F., Duy Truong N., Eshraghian J.K., Xu Z., Huang Z., Bersani-Vincenzo T., Aguilar I., Hang Leung W., Nikpour A., Kavehei O.
PNAS Nexus wos Q1 Open Access
2024-10-30 citations by CoLab: 1 PDF Abstract  
Neuromodulation techniques have emerged as promising approaches for treating a wide range of neurological disorders, precisely delivering electrical stimulation to modulate abnormal neuronal activity. While leveraging the unique capabilities of artificial intelligence (AI) holds immense potential for responsive neurostimulation, it appears as an extremely challenging proposition where real-time (low-latency) processing, low power consumption, and heat constraints are limiting factors. The use of sophisticated AI-driven models for personalized neurostimulation depends on the back-telemetry of data to external systems (e.g. cloud-based medical mesosystems and ecosystems). While this can be a solution, integrating continuous learning within implantable neuromodulation devices for several applications, such as seizure prediction in epilepsy, is an open question. We believe neuromorphic architectures hold an outstanding potential to open new avenues for sophisticated on-chip analysis of neural signals and AI-driven personalized treatments. With more than three orders of magnitude reduction in the total data required for data processing and feature extraction, the high power- and memory-efficiency of neuromorphic computing to hardware-firmware co-design can be considered as the solution-in-the-making for resource-constrained implantable neuromodulation systems. This perspective introduces the concept of Neuromorphic Neuromodulation, a new breed of closed-loop responsive feedback system. It highlights its potential to revolutionize implantable brain-machine microsystems for patient-specific treatment.
Heidarian M., Karimi G., Payandeh M.
2024-10-01 citations by CoLab: 0 Abstract  
This paper presents an effective learning multi-spike deep spiking neural network with temporal feedback backpropagation for breast cancer detection using contrast-enhanced MRI images. The learning in the spiking network is a new universal temporal feedback called Temporal_Feedback_SpikeProp (TF_SpikeProp) and Temporal_Feedback_ReSuMe (TF_ReSuMe). Thus, it can be implemented on all kinds of algorithms such as MultiSpikeProp and MultiReSuMe algorithms and is compatible with all temporal codings. The presented spiking network is a functional network with high accuracy and convergence in low epochs. The new algorithm explores the influence of all presynaptic neurons on the output error and reduces the role of inactive neurons. Therefore, the error propagation of the spiking network is such that the output error affects not only the spike time of each neuron, but also the refractory time and the presynaptic spikes of the neuron during the spiking time of the output neuron. The spiking network can correct and adjust the weights, spike delay, spike threshold, and also the time constant of the spike collection kernel. In order to diagnose cancer tissue, time–frequency features such as the STFT and the wavelet packet transform (WPT) have been used along with texture recognition features such as the co-occurrence matrix. The presented network achieved an accuracy of 98.3% in 23 epochs in the SpikeProp algorithm, 97.4% in ReSuMe with the Duke MRI dataset, 98.64% on MNIST, and 96.1% on Iris. The results have shown that the training algorithm used in this study can achieve high accuracy in low epochs compared to traditional algorithms, while solving the challenge of getting stuck in local minima and poor convergence, as well as reducing the problem of time confusion of output spikes to an acceptable level and providing a fully functional network in practice and reality.
Wu Y., Shi B., Zheng Z., Zheng H., Yu F., Liu X., Luo G., Deng L.
Nature Communications scimago Q1 wos Q1 Open Access
2024-08-27 citations by CoLab: 1 PDF Abstract  
Processing spatiotemporal data sources with both high spatial dimension and rich temporal information is a ubiquitous need in machine intelligence. Recurrent neural networks in the machine learning domain and bio-inspired spiking neural networks in the neuromorphic computing domain are two promising candidate models for dealing with spatiotemporal data via extrinsic dynamics and intrinsic dynamics, respectively. Nevertheless, these networks have disparate modeling paradigms, which leads to different performance characteristics and makes it hard for them to cover diverse data sources and performance requirements in practice. Constructing a unified modeling framework that can effectively and adaptively process variable spatiotemporal data in different situations remains quite challenging. In this work, we propose hybrid spatiotemporal neural networks created by combining recurrent neural networks and spiking neural networks under a unified surrogate gradient learning framework and a Hessian-aware neuron selection method. By flexibly tuning the ratio between the two types of neurons, the hybrid model demonstrates better adaptive ability in balancing different performance metrics, including accuracy, robustness, and efficiency on several typical benchmarks, and generally outperforms conventional single-paradigm recurrent neural networks and spiking neural networks. Furthermore, we demonstrate the great potential of the proposed network with a robotic task in varying environments. With our proof of concept, the proposed hybrid model provides a generic modeling route to process spatiotemporal data sources in the open world. Machine learning and neuromorphic computing network models have distinct strengths in processing spatiotemporal data. Here, authors propose hybrid spatiotemporal neural networks that combine these models, achieving better accuracy, robustness, and efficiency in varied environments across various benchmarks and real-world tasks.
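The two ingredients this abstract combines, intrinsic spiking dynamics and surrogate gradient learning, can be illustrated with a minimal sketch: a leaky integrate-and-fire (LIF) neuron whose non-differentiable spike function is paired with a smooth pseudo-derivative. This is not the authors' implementation; the parameters (`tau`, `v_th`, `alpha`) and the sigmoid-derivative surrogate are illustrative assumptions only.

```python
import math

def lif_step(v, x, tau=10.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron:
    leak toward zero, integrate the input current x, and
    spike with a hard reset when the threshold is crossed."""
    v = v + (dt / tau) * (-v + x)
    spike = 1.0 if v >= v_th else 0.0
    v = v * (1.0 - spike)  # hard reset after a spike
    return v, spike

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Smooth pseudo-derivative of the non-differentiable spike
    function, here the derivative of a scaled sigmoid (one common
    surrogate choice); used in place of the true gradient during
    backpropagation."""
    s = 1.0 / (1.0 + math.exp(-alpha * (v - v_th)))
    return alpha * s * (1.0 - s)

# Drive one neuron with a constant input current and count spikes.
v, n_spikes = 0.0, 0
for _ in range(100):
    v, s = lif_step(v, x=1.5)
    n_spikes += int(s)
print(n_spikes)  # 9: the neuron fires periodically, every 11 steps
```

The surrogate is what makes such intrinsic dynamics trainable end-to-end alongside conventional recurrent units: the forward pass keeps the binary spike, while the backward pass substitutes `surrogate_grad(v)` for the undefined derivative of the threshold function.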

Top-30

Journals: [bar chart of the top citing journals]

Publishers: [bar chart of the top citing publishers]
  • We do not take into account publications without a DOI.
  • Statistics recalculated only for publications connected to researchers, organizations and labs registered on the platform.
  • Statistics recalculated weekly.
