Adoption of Augmented and Virtual Reality (AR and VR) interfaces in the aerospace and defense fields has been inhibited by conspicuous and cumbersome input mechanisms such as gesture and voice recognition. Silent speech interfaces using non-invasive electromyography (EMG) sensors are posited as a means of controlling AR and VR interfaces with the potential for inconspicuous, high-bandwidth input. Our objective is to develop a silent speech interface that receives input from subvocalizations via skin-surface EMG sensors, which is then decoded into commands for controlling a heads-up display built on a Microsoft HoloLens. EMG sensors are placed over the digastric, stylohyoid, sternohyoid, and cricothyroid muscles in the anterior cervical region. The collected data are used to train a convolutional neural network that functions as a classifier, matching the subject's subvocal input against a word library. The user will don the wearable interface and use it to silently send commands through subvocalizations to control an AR device. Effectiveness of the wearable interface will be measured by word recognition accuracy in mouthed trials using the current command library. Future work includes expanding the dataset used to train the recognition model and a live demonstration of controlling an augmented reality interface.
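
For illustration, the sketch below shows one plausible form of the convolutional classifier described above: a 1-D CNN (here in PyTorch) mapping a window of multi-channel surface-EMG samples to a command word. The channel count, window length, class count, and layer sizes are assumptions for the example, not values specified in this abstract.

```python
import torch
import torch.nn as nn

class SubvocalCNN(nn.Module):
    """Minimal 1-D CNN sketch for classifying subvocal EMG windows.

    Assumed (hypothetical) shapes:
      - 4 EMG channels (digastric, stylohyoid, sternohyoid, cricothyroid)
      - 500 samples per window
      - num_words command classes in the word library
    """

    def __init__(self, num_channels: int = 4, num_words: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(num_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(128, num_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        z = self.features(x).squeeze(-1)  # (batch, 128)
        return self.classifier(z)         # logits over the command-word library


if __name__ == "__main__":
    model = SubvocalCNN(num_channels=4, num_words=10)
    window = torch.randn(8, 4, 500)  # batch of 8 simulated EMG windows
    logits = model(window)
    print(logits.shape)              # torch.Size([8, 10])
```

In deployment, the predicted word index would be mapped to a HoloLens command; the mapping and any streaming/windowing pipeline are outside the scope of this sketch.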