Robotic exploration problems arise in various contexts, ranging from search and rescue missions to underwater and space exploration. In these domains and beyond, exploration algorithms that can rapidly reduce uncertainty can provide significant benefits, for instance, by shortening the time and reducing the resources required for exploration. Unfortunately, principled algorithms based on rigorous information-theoretic metrics, such as maximizing Shannon mutual information along the exploration path, are computationally extremely demanding.
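To make the underlying quantity concrete, the sketch below computes the Shannon entropy of a Bernoulli occupancy grid, the uncertainty that information-theoretic exploration seeks to reduce. This is only an illustrative sketch under the standard independent-cell assumption, not the accelerated computation described in this project; the function names and grids are hypothetical.

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of one Bernoulli occupancy cell with P(occupied) = p."""
    if p in (0.0, 1.0):
        return 0.0  # fully known cell carries no uncertainty
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def map_entropy(grid):
    """Total map entropy under the usual independent-cell assumption."""
    return sum(cell_entropy(p) for row in grid for p in row)

# A fully unobserved map (all cells at p = 0.5) has maximal uncertainty;
# measurements push cell probabilities toward 0 or 1 and lower the entropy.
unknown = [[0.5] * 4 for _ in range(4)]
partly_mapped = [[0.5, 0.1, 0.9, 0.5],
                 [0.5, 0.05, 0.95, 0.5],
                 [0.5, 0.5, 0.5, 0.5],
                 [0.5, 0.5, 0.5, 0.5]]
```

The mutual information of a candidate sensing action is then the expected drop in this entropy, which is what makes the metric expensive: it must be evaluated over many cells for many candidate poses.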
In this project, I proposed a novel multi-core accelerator that computes Shannon mutual information for the entire occupancy grid map in real time, and showed that the throughput of such hardware is dictated by its memory architecture and data delivery method. In other words, I found that parallelization alone is not sufficient for high-throughput computation. In addition, it is critical to consider (i) memory management, e.g., how data is placed and organized in memory, and (ii) data delivery, e.g., how data is accessed and delivered to parallel cores, so that throughput scales well with increasing parallelization.
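A minimal software analogue of this partitioning idea is sketched below: the map is split into contiguous row tiles, each handed to one worker, so every worker streams through a sequential block of memory rather than scattering accesses across the whole grid. This is a hypothetical illustration of the data-placement principle only, not the accelerator's actual memory architecture; all names here are assumptions.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def cell_entropy(p):
    """Shannon entropy (bits) of one Bernoulli occupancy cell."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

def tile_entropy(grid, r0, r1):
    # Each worker processes a contiguous band of rows, so its reads are
    # sequential in memory instead of interleaved with other workers'.
    return sum(cell_entropy(p) for row in grid[r0:r1] for p in row)

def parallel_map_entropy(grid, n_workers=4):
    n = len(grid)
    bounds = [(i * n // n_workers, (i + 1) * n // n_workers)
              for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(lambda b: tile_entropy(grid, *b), bounds))
```

The tiled result matches the serial one exactly here; in hardware, the analogous choice of how tiles map onto memory banks determines whether adding cores actually adds throughput.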
We argue that effective co-design of computing hardware and algorithms for robotics applications will require novel analysis methods, for instance, reasoning about data flow on chip, rather than merely counting the number of operations or the amount of memory required, which has been the standard practice in developing robotics algorithms for CPUs.