
Efficient Continual Learning for Spiking Neural Networks Using Time-Domain Compression

Advances in hardware and software have enabled the integration of AI into low-power Internet of Things (IoT) devices such as microcontrollers. Deploying complex Artificial Neural Networks (ANNs) to these devices is still held back by tight resource constraints, which call for techniques such as quantization and pruning. Further challenges come from shifts in data distribution between the training and operational environments, and from the need for AI to adapt to individual users while preserving privacy and reducing reliance on internet connectivity. As a result, Edge AI models deployed without adaptation may produce errors in the field.

Continual learning (CL), the capacity to keep learning from new experiences while retaining previously acquired knowledge, has emerged in response to these challenges. CL methods keep training the model on fresh data to reduce the risk of forgetting what it has already learned. Rehearsal-based CL for ANN models, including popular CNN models, requires storing large amounts of past training data on-device to be replayed during learning; rehearsal-free approaches avoid this storage at the cost of a potential drop in accuracy. Storing the required samples on the device, however, may not always be possible.
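To make the rehearsal idea concrete, here is a minimal sketch in PyTorch of a small replay buffer mixed into training on a new task. All names (`ReplayBuffer`, `train_task`) and the reservoir-sampling policy are illustrative assumptions, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class ReplayBuffer:
    """Fixed-size store of past (input, label) pairs for rehearsal.

    Illustrative sketch of rehearsal-based continual learning;
    not the implementation from the paper discussed here.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps a uniform subset of everything seen so far.
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, n):
        batch = random.sample(self.data, min(n, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def train_task(model, loader, buffer, optimizer, replay_size=32):
    """Train on a new task while replaying stored samples to curb forgetting."""
    loss_fn = nn.CrossEntropyLoss()
    for x_new, y_new in loader:
        x, y = x_new, y_new
        if buffer.data:  # mix rehearsal samples into the batch when available
            x_old, y_old = buffer.sample(replay_size)
            x = torch.cat([x_new, x_old])
            y = torch.cat([y_new, y_old])
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
        for xi, yi in zip(x_new, y_new):  # remember a subset of the new data
            buffer.add(xi, yi)
```

Because each batch mixes new-task data with replayed samples, the gradients keep reflecting past tasks, which is what curbs forgetting; the cost is the on-device memory the buffer occupies, and that memory is exactly what the work described below targets.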

Also up for consideration are the increasingly popular Spiking Neural Networks (SNNs), which enable energy-efficient time-series processing while delivering competitive accuracy. SNNs are designed to mimic the behavior of biological neurons and exchange information through spikes, rapid changes in a neuron's membrane potential, which can be represented as 1-bit data in digital systems. Studies of SNNs in software and in online-learning hardware have shown their potential for developing CL solutions. However, investigation of rehearsal-free CL techniques in SNNs is still limited.
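To illustrate how spikes become 1-bit data, the following minimal sketch simulates a leaky integrate-and-fire (LIF) neuron, a standard SNN building block: the membrane potential integrates the input current and, on crossing a threshold, emits a binary spike and resets. The decay and threshold values are arbitrary choices for illustration.

```python
import numpy as np

def lif_neuron(input_current, beta=0.9, threshold=1.0):
    """Simulate a leaky integrate-and-fire neuron over T time steps.

    Returns a binary (1-bit) spike train: 1 where the membrane
    potential crossed the threshold, else 0. Parameter values
    are illustrative, not taken from the paper.
    """
    v = 0.0
    spikes = np.zeros_like(input_current, dtype=np.uint8)
    for t, i_t in enumerate(input_current):
        v = beta * v + i_t      # leaky integration of input current
        if v >= threshold:      # threshold crossing emits a spike...
            spikes[t] = 1
            v = 0.0             # ...and resets the membrane potential
    return spikes

# Example: a noisy near-constant input yields a sparse binary spike train.
rng = np.random.default_rng(0)
current = 0.3 + 0.2 * rng.random(20)
print(lif_neuron(current))  # e.g. [0 0 1 0 0 1 ...]
```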

A team of researchers from the University of Bologna, ETH Zurich, and Politecnico di Torino has developed a solution to this problem. They introduce a rehearsal-based CL method for SNNs that is not only memory-efficient but also compatible with resource-constrained devices. Their solution adopts Latent Replay (LR), which stores a subset of past experiences as intermediate-layer activations and replays them when the network is trained on new tasks; LR has previously delivered state-of-the-art classification accuracy with CNNs. To shrink the rehearsal memory further, the researchers apply lossy compression along the time axis, exploiting the resilience of SNN information encoding to reduced precision. The resulting method is both robust and efficient, achieving a top-1 accuracy of 92.46% in the Sample-Incremental setup and 92% in the Class-Incremental setup. It also reduces the memory needed for rehearsal data by a factor of 140 at the cost of only a 4% drop in accuracy. The team's work lays a foundation for further innovation in accurate and power-efficient CL on the Edge.
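The two ingredients, Latent Replay and time-domain compression, can be sketched together as follows: the buffer stores intermediate-layer spike trains rather than raw inputs, and those spike trains are lossily compressed by merging groups of time steps into coarser bins. The binning factor, the OR-style merge, and all names here are assumptions for illustration; the paper's exact encoding may differ.

```python
import torch

def compress_time(spikes, factor=4):
    """Lossily compress a binary spike train along the time axis.

    spikes: tensor of shape (T, ...) with values in {0, 1}. Each group
    of `factor` consecutive time steps is merged into a single step
    (1 if any step in the group spiked), shrinking rehearsal memory
    roughly by `factor`. Illustrative assumption; the paper's exact
    encoding may differ.
    """
    T = spikes.shape[0] - spikes.shape[0] % factor   # drop trailing remainder
    grouped = spikes[:T].reshape(T // factor, factor, *spikes.shape[1:])
    return (grouped.sum(dim=1) > 0).to(spikes.dtype)

def store_latent_replay(frontend, x, buffer, factor=4):
    """Cache compressed *latent* activations instead of raw inputs.

    frontend: the frozen lower layers of the SNN; the spike train they
    emit at an intermediate layer is what gets stored for later replay.
    """
    with torch.no_grad():
        latent_spikes = frontend(x)   # assumed shape (T, features), binary
    buffer.append(compress_time(latent_spikes, factor))

# A random (T=16, features=8) spike train shrinks to 4 time bins.
s = (torch.rand(16, 8) > 0.8).float()
print(s.shape, compress_time(s).shape)  # torch.Size([16, 8]) torch.Size([4, 8])
```

Storing latent activations means the lower layers never need to reprocess old raw data, and compressing the time axis multiplies that saving, which is how memory reductions on the order of the reported 140x become plausible on a microcontroller-class device.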
