Wearable Silent Speech Interface for Augmented Reality Applications

Is this project an undergraduate, graduate, or faculty project?

Undergraduate

What campus are you from?

Daytona Beach

Authors' Class Standing

Riley Flanagan, Senior
Levi Lingsch, Senior
Michael Fornito, Senior

Lead Presenter's Name

Riley Flanagan

Faculty Mentor Name

Christine Walck

Abstract

Modern human-computer interfaces such as acoustic speech recognition and hand gestures represent a major bandwidth bottleneck in the adoption of Augmented and Virtual Reality (AR and VR) hardware. Silent speech interfaces have emerged as a technology with the potential for fluid input while facilitating high-bandwidth information transfer through non-invasive skin-surface electromyography (EMG) electrodes. The team's objective is to develop a silent speech interface that receives input from subvocalizations via skin-surface EMG electrodes and decodes this input into commands for interacting with a heads-up display (HUD) or similar computer system. EMG sensor data collected from the greater surface of the neck are used to train a neural network that functions as a classifier, matching the subject's subvocal input against a word library. The user equips the wearable interface and uses it to silently send commands to an Augmented Reality device through subvocalizations. Trial comparisons show that mouthed subvocalized signals yield greater signal strength and accuracy than un-mouthed subvocalized signals. Primary sensor placements producing the strongest recorded muscle activations are detailed to enable efficient data set acquisition and maximize input accuracy.
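
The abstract does not specify the network architecture, so the following is only a minimal sketch of the kind of EMG word classifier described: windowed multi-channel neck-surface EMG fed to a small neural network that outputs a label from a fixed word library. The channel count, window length, vocabulary size, layer sizes, and all names here are illustrative assumptions, not the team's actual design.

```python
import torch
import torch.nn as nn

NUM_CHANNELS = 8      # assumed number of skin-surface EMG electrodes
WINDOW_SAMPLES = 500  # assumed samples per subvocal utterance window
VOCAB_SIZE = 10       # assumed size of the command word library

class SubvocalClassifier(nn.Module):
    """Toy classifier mapping an EMG window to a word-library index."""

    def __init__(self):
        super().__init__()
        # 1-D convolutions extract temporal features across EMG channels.
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, stride=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, VOCAB_SIZE)

    def forward(self, x):
        # x: (batch, channels, samples) of preprocessed EMG signal
        return self.head(self.features(x).squeeze(-1))

model = SubvocalClassifier()
window = torch.randn(1, NUM_CHANNELS, WINDOW_SAMPLES)  # stand-in signal
logits = model(window)
predicted_word = logits.argmax(dim=-1)  # index into the word library
```

In practice such a classifier would be trained on labeled recordings of each word in the library, and the predicted index would be mapped to an AR command.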

Did this research project receive funding support from the Office of Undergraduate Research?

Yes, Student Internal Grant
