Wearable Silent Speech Interface for Augmented Reality Applications

Author #1
Author #2
Author #3

Abstract

Modern human-computer interfaces such as acoustic speech recognition and hand gestures represent a major bandwidth bottleneck in the adoption of Augmented and Virtual Reality (AR and VR) hardware. Silent speech interfaces have emerged as a technology with the potential for fluid input while supporting high-bandwidth information transfer through non-invasive, skin-surface electromyography (EMG) electrodes. Our objective is to develop a silent speech interface that captures subvocalizations through skin-surface EMG electrodes and decodes them into commands for a heads-up display (HUD) or similar computer system. EMG sensor data collected from the surface of the neck are used to train a neural network classifier that matches the subject's subvocal input against a word library. The user dons the wearable interface and silently issues commands to an Augmented Reality device through subvocalization. Trial comparisons show that mouthed subvocalizations yield greater signal strength and classification accuracy than un-mouthed subvocalizations. We also detail the sensor placements that record the strongest muscle activations, enabling efficient dataset acquisition and maximizing input accuracy.
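
As a supplementary illustration, the following is a minimal sketch of the classification stage described above: windowed EMG is reduced to per-channel root-mean-square (RMS) features and fed to a small neural network classifier over a word library. The channel count, sampling rate, window length, command words, and the scikit-learn MLP are all illustrative assumptions rather than the authors' configuration, and the synthetic arrays stand in for recorded EMG; they are not the study's data.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    WORDS = ["up", "down", "left", "right", "select"]  # hypothetical command library
    N_CHANNELS = 8   # assumed electrode count on the neck surface
    FS = 1000        # assumed sampling rate (Hz)
    WINDOW = 250     # samples per analysis window (250 ms at FS)

    def rms_features(window):
        """Root-mean-square amplitude per channel: a common EMG activation feature."""
        return np.sqrt(np.mean(window ** 2, axis=-1))

    # Synthetic stand-in for recorded EMG trials: (trials, channels, samples).
    rng = np.random.default_rng(0)
    n_trials = 500
    labels = rng.integers(len(WORDS), size=n_trials)
    emg = rng.normal(size=(n_trials, N_CHANNELS, WINDOW))
    # Give each word a distinct per-channel activation pattern so the toy task is learnable.
    emg *= 1.0 + 0.5 * np.eye(len(WORDS), N_CHANNELS)[labels][:, :, None]

    X = np.array([rms_features(trial) for trial in emg])  # (trials, channels)
    X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

    # Small multilayer perceptron acting as the subvocal-input classifier.
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    print("predicted word:", WORDS[clf.predict(X_test[:1])[0]])

In a real pipeline the RMS features would be computed over sliding windows of streamed electrode data, and the predicted word would be forwarded as a command to the AR device.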

 
