Exploration problems are fundamental to robotics, arising in domains ranging from search and rescue to space exploration. Many effective exploration algorithms rely on computing the mutual information between the current map and potential future measurements in order to make planning decisions. Unfortunately, computing mutual information metrics is computationally challenging; in fact, a large fraction of the current literature focuses on approximation techniques that yield computationally efficient algorithms. In this paper, we propose a novel computing hardware architecture that efficiently computes Shannon mutual information. The proposed architecture consists of multiple mutual information computation cores, each evaluating the mutual information between a single sensor beam and the occupancy grid map. The key challenge is to ensure that each core is supplied with data when requested, so that all cores are maximally utilized. Our key contribution is a novel memory architecture and data delivery method that ensure effective utilization of all mutual information computation cores. The architecture was optimized for 16 mutual information computation cores and implemented on an FPGA. We show that it computes the mutual information metric for an entire 20 m × 20 m map at 0.1 m resolution in near real time, at 2 frames per second, approximately two orders of magnitude faster than an equivalent implementation on a Xeon CPU while consuming an order of magnitude less power.
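To make the per-core computation concrete, the following is a minimal software sketch of the quantity each core evaluates: the Shannon mutual information between occupancy cells along one beam and a binary measurement outcome. The inverse sensor model (`p_hit_occ`, `p_hit_free`), the per-cell independence assumption, and all function names are illustrative assumptions, not the paper's hardware algorithm.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a Bernoulli variable with parameter p."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def cell_mutual_information(p_occ, p_hit_occ, p_hit_free):
    """MI (bits) between one occupancy cell and a binary beam measurement.

    p_hit_occ / p_hit_free are a hypothetical inverse sensor model:
    P(hit | cell occupied) and P(hit | cell free).
    """
    p_hit = p_occ * p_hit_occ + (1.0 - p_occ) * p_hit_free
    # Expected posterior entropy, E_z[H(M | Z = z)], via Bayes' rule.
    cond = 0.0
    if p_hit > 0.0:
        cond += p_hit * entropy(p_occ * p_hit_occ / p_hit)
    if p_hit < 1.0:
        cond += (1.0 - p_hit) * entropy(p_occ * (1.0 - p_hit_occ) / (1.0 - p_hit))
    # I(M; Z) = H(M) - E_z[H(M | Z = z)]
    return entropy(p_occ) - cond

def beam_mutual_information(cell_priors, p_hit_occ=0.9, p_hit_free=0.1):
    """Sum per-cell MI over the cells a beam traverses (cells assumed independent)."""
    return sum(cell_mutual_information(p, p_hit_occ, p_hit_free)
               for p in cell_priors)
```

In the proposed architecture, one such evaluation runs per core per beam, which is why keeping every core fed with the occupancy values along its beam dominates the design.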
A wearable-optimized implementation of a sleep stage classification algorithm with low detection latency, high detection accuracy, and low resource consumption is developed and implemented on a low-power FPGA microsystem for closed-loop electrical brain stimulation. The implementation takes EEG and EMG signals as inputs and classifies sleep stages. By structurally merging multichannel FIR and window-averaging filters into one reconfigurable, multipurpose filter, the new implementation maintains a sleep detection accuracy of 79.7%, a REM detection sensitivity of 98.2%, a REM detection specificity of 89.2%, and a detection latency of 0.982 ms, while consuming 6.8 times fewer logic elements and 96.28% less power than the current state-of-the-art implementation. With its high performance and low resource usage, this implementation enables a low-power wearable microsystem to perform neural recording, real-time REM sleep stage detection, and closed-loop responsive brain stimulation as a tool for studying the mechanisms of neurodegenerative diseases.
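The structural merge described above rests on the observation that a window-averaging filter is itself an FIR filter whose taps are all equal to 1/N, so one multiply-accumulate datapath can serve both roles by swapping coefficient sets. The sketch below illustrates this idea in software; the function names, the 4-tap default, and the mode interface are illustrative assumptions, not the paper's HDL design.

```python
def fir(signal, coeffs):
    """Direct-form FIR: y[n] = sum_k coeffs[k] * x[n-k], zero-padded history."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

def reconfigurable_filter(signal, mode, taps=4, coeffs=None):
    """One datapath, two coefficient sets.

    mode 'average' loads uniform taps 1/N (window averaging);
    mode 'fir' loads the supplied band-shaping coefficients.
    """
    if mode == "average":
        coeffs = [1.0 / taps] * taps
    return fir(signal, coeffs)
```

In hardware, the same reuse means the averaging stage adds no extra multipliers: only the coefficient memory and control logic change between modes, which is consistent with the reported reduction in logic elements.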