Intra-cortical brain-machine interfaces (iBMIs) with wireless capability could scale the number of recording channels by integrating an intention decoder to reduce data rates. However, the need for frequent retraining due to neural signal non-stationarity is a major impediment. This paper presents an alternative neuromorphic paradigm of online reinforcement learning (RL) with binary evaluative feedback in iBMIs to tackle this issue. This paradigm eliminates time-consuming calibration procedures; instead, the model is updated on a sequential, sample-by-sample basis using an instantaneous binary evaluative feedback signal. Such online learning is a hallmark of neuromorphic systems and differs from the batch weight updates of popular deep networks, which are resource intensive and incompatible with the constraints of an implant. In this work, using open-loop analysis on pre-recorded data, we show the application of a simple RL algorithm, Banditron, to discrete-state iBMIs and compare it against previously reported state-of-the-art RL algorithms: Hebbian RL (HRL), Attention-Gated RL (AGREL), and deep Q-learning. Owing to its simple single-layer architecture, Banditron is estimated to dissipate at least two orders of magnitude less power than these state-of-the-art RL algorithms. At the same time, offline analysis performed on four pre-recorded experimental datasets procured from the motor cortex of two non-human primates performing joystick-based movement-related tasks indicates that Banditron performs significantly better than the state-of-the-art RL algorithms, by at least ∼5%, 10%, 7%, and 7% in experiments 1, 2, 3, and 4, respectively. Furthermore, we propose a non-linear variant of Banditron, "Banditron-RP", which gives an average improvement of 6% and 2% in decoding accuracy in experiments 2 and 4, respectively, with only a moderate increase in computations (and, concomitantly, power consumption).
brain-machine interface; neuromorphic; reinforcement learning; hardware-friendly
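For concreteness, the following is a minimal sketch of the sample-by-sample Banditron update under binary evaluative feedback described above, assuming a linear decoder with one weight row per discrete movement state and a uniform exploration rate gamma. The function names, feature dimensions, and NumPy implementation are illustrative assumptions, not the authors' code; the update rule itself follows the standard Banditron formulation (Kakade et al., 2008).

```python
import numpy as np

rng = np.random.default_rng(0)

def banditron_step(W, x, intended_state, gamma=0.05):
    """One online Banditron update from a single sample under bandit feedback.

    W              : (num_states, num_features) linear decoder weights
    x              : (num_features,) binned firing-rate feature vector
    intended_state : index of the state the user intended; used here only to
                     emulate the binary evaluative feedback signal
    gamma          : exploration rate in [0, 1]
    """
    k = W.shape[0]
    scores = W @ x
    y_hat = int(np.argmax(scores))            # greedy prediction of the decoder

    # Epsilon-greedy exploration distribution over the k discrete states
    probs = np.full(k, gamma / k)
    probs[y_hat] += 1.0 - gamma
    y_tilde = int(rng.choice(k, p=probs))     # state actually decoded / acted upon

    feedback = (y_tilde == intended_state)    # binary evaluative feedback only

    # Unbiased-gradient Banditron update: reinforce the acted-upon state when
    # feedback is positive, always penalize the greedy prediction
    update = np.zeros_like(W)
    if feedback:
        update[y_tilde] += x / probs[y_tilde]
    update[y_hat] -= x
    return W + update, y_tilde, feedback

# Toy usage: 4 discrete movement states decoded from 96-channel features
W = np.zeros((4, 96))
x = rng.random(96)
W, decoded_state, correct = banditron_step(W, x, intended_state=2)
```

Because each step touches only a single weight matrix and one feature vector, the per-sample cost stays linear in the number of channels, which is the basis for the power-consumption comparison against multi-layer RL decoders made above.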