Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and then optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, enabling applications such as adapting models to specific situations, e.g., to changed environmental conditions, or personalizing them for individual users, such as speaker adaptation in speech processing. Local training can also preserve privacy. Over the last few years, many algorithms have been developed to reduce the memory footprint and computational cost of training.

Methods: A specific challenge in training recurrent neural networks (RNNs) on sequential data is that the Back Propagation Through Time (BPTT) algorithm must store the network state of every time step. This limitation is resolved by the biologically inspired E-prop approach for training spiking recurrent neural networks (SRNNs). We implement the E-prop algorithm on a prototype of the SpiNNaker 2 neuromorphic system. A parallelization strategy is developed that splits networks across the ARM cores of SpiNNaker 2 and trains them there, making efficient use of both memory and compute resources. We trained an SRNN from scratch on SpiNNaker 2 in real time on the Google Speech Commands dataset for keyword spotting.

Results: We achieved an accuracy of 91.12% while requiring only 680 KB of memory to train a network with 25 K weights. Compared to other spiking neural networks with equal or better accuracy, our work is significantly more memory-efficient.

Discussion: In addition, we performed memory and time profiling of the E-prop algorithm. We use these results, on the one hand, to discuss whether E-prop or BPTT is better suited for training a model at the edge and, on the other hand, to explore architecture modifications to SpiNNaker 2 that would speed up online learning. Finally, energy estimations predict that the SRNN can be trained on SpiNNaker 2 with 12 times less energy than on an NVIDIA V100 GPU.
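For orientation, below is a minimal NumPy sketch of the E-prop idea the Methods section refers to, following the general eligibility-trace formulation for leaky integrate-and-fire (LIF) neurons by Bellec et al. (2020), on which the paper builds. It is not the authors' fixed-point SpiNNaker 2 implementation; all names, shapes, and the decay constant `alpha` are illustrative assumptions. The sketch makes the memory argument concrete: only the filtered presynaptic activity and the gradient accumulator are carried across time steps, so memory stays constant in the sequence length T, whereas BPTT stores the network state of all T steps.

```python
import numpy as np

def eprop_gradients(spikes, h_pseudo, learn_sig, alpha=0.9):
    """Accumulate E-prop gradients for recurrent weights online.

    Illustrative sketch (names and shapes are assumptions):
    spikes    -- (T, n) recurrent spike trains z_i^t
    h_pseudo  -- (T, n) surrogate derivatives h_j^t of the LIF neurons
    learn_sig -- (T, n) per-neuron learning signals L_j^t, e.g., from
                 broadcast output errors
    alpha     -- membrane decay factor of the LIF neurons
    """
    T, n = spikes.shape
    z_bar = np.zeros(n)      # low-pass-filtered presynaptic activity, O(n)
    grad = np.zeros((n, n))  # gradient accumulator, O(n^2), constant in T
    for t in range(1, T):
        z_bar = alpha * z_bar + spikes[t - 1]  # filter presynaptic spikes
        e = np.outer(h_pseudo[t], z_bar)       # eligibility traces e_ji^t
        grad += learn_sig[t][:, None] * e      # dE/dW_ji += L_j^t * e_ji^t
    return grad

# Tiny smoke test with random data.
rng = np.random.default_rng(0)
T, n = 100, 8
z = (rng.random((T, n)) < 0.1).astype(float)   # sparse spike trains
h = rng.random((T, n))                          # pseudo-derivatives
L = rng.standard_normal((T, n))                 # learning signals
print(eprop_gradients(z, h, L).shape)           # (8, 8)
```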
%0 Journal Article
%1 10.3389/fnins.2022.1018006
%A Rostami, Amirhossein
%A Vogginger, Bernhard
%A Yan, Yexin
%A Mayr, Christian G.
%D 2022
%J Frontiers in Neuroscience
%K topic_neuroinspired imported
%R 10.3389/fnins.2022.1018006
%T E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware
%U https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.1018006
%V 16
%X Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and then optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, enabling applications such as adapting models to specific situations, e.g., to changed environmental conditions, or personalizing them for individual users, such as speaker adaptation in speech processing. Local training can also preserve privacy. Over the last few years, many algorithms have been developed to reduce the memory footprint and computational cost of training. Methods: A specific challenge in training recurrent neural networks (RNNs) on sequential data is that the Back Propagation Through Time (BPTT) algorithm must store the network state of every time step. This limitation is resolved by the biologically inspired E-prop approach for training spiking recurrent neural networks (SRNNs). We implement the E-prop algorithm on a prototype of the SpiNNaker 2 neuromorphic system. A parallelization strategy is developed that splits networks across the ARM cores of SpiNNaker 2 and trains them there, making efficient use of both memory and compute resources. We trained an SRNN from scratch on SpiNNaker 2 in real time on the Google Speech Commands dataset for keyword spotting. Results: We achieved an accuracy of 91.12% while requiring only 680 KB of memory to train a network with 25 K weights. Compared to other spiking neural networks with equal or better accuracy, our work is significantly more memory-efficient. Discussion: In addition, we performed memory and time profiling of the E-prop algorithm. We use these results, on the one hand, to discuss whether E-prop or BPTT is better suited for training a model at the edge and, on the other hand, to explore architecture modifications to SpiNNaker 2 that would speed up online learning. Finally, energy estimations predict that the SRNN can be trained on SpiNNaker 2 with 12 times less energy than on an NVIDIA V100 GPU.
@article{10.3389/fnins.2022.1018006,
abstract = {Introduction: In recent years, the application of deep learning models at the edge has gained attention. Typically, artificial neural networks (ANNs) are trained on graphics processing units (GPUs) and then optimized for efficient execution on edge devices. Training ANNs directly at the edge is the next step, enabling applications such as adapting models to specific situations, e.g., to changed environmental conditions, or personalizing them for individual users, such as speaker adaptation in speech processing. Local training can also preserve privacy. Over the last few years, many algorithms have been developed to reduce the memory footprint and computational cost of training. Methods: A specific challenge in training recurrent neural networks (RNNs) on sequential data is that the Back Propagation Through Time (BPTT) algorithm must store the network state of every time step. This limitation is resolved by the biologically inspired E-prop approach for training spiking recurrent neural networks (SRNNs). We implement the E-prop algorithm on a prototype of the SpiNNaker 2 neuromorphic system. A parallelization strategy is developed that splits networks across the ARM cores of SpiNNaker 2 and trains them there, making efficient use of both memory and compute resources. We trained an SRNN from scratch on SpiNNaker 2 in real time on the Google Speech Commands dataset for keyword spotting. Results: We achieved an accuracy of 91.12% while requiring only 680 KB of memory to train a network with 25 K weights. Compared to other spiking neural networks with equal or better accuracy, our work is significantly more memory-efficient. Discussion: In addition, we performed memory and time profiling of the E-prop algorithm. We use these results, on the one hand, to discuss whether E-prop or BPTT is better suited for training a model at the edge and, on the other hand, to explore architecture modifications to SpiNNaker 2 that would speed up online learning. Finally, energy estimations predict that the SRNN can be trained on SpiNNaker 2 with 12 times less energy than on an NVIDIA V100 GPU.},
author = {Rostami, Amirhossein and Vogginger, Bernhard and Yan, Yexin and Mayr, Christian G.},
biburl = {https://puma.scadsai.uni-leipzig.de/bibtex/2f739e8fda407a6cffe26fe3229d0f0f5/scadsfct},
doi = {10.3389/fnins.2022.1018006},
issn = {1662-453X},
journal = {Frontiers in Neuroscience},
keywords = {topic_neuroinspired imported},
title = {E-prop on SpiNNaker 2: Exploring online learning in spiking RNNs on neuromorphic hardware},
url = {https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2022.1018006},
volume = 16,
year = 2022
}