Development of a Silent Speech Interface for Augmented Reality Applications

Is this project an undergraduate, graduate, or faculty project?

Undergraduate

Project Type

group

Campus

Daytona Beach

Authors' Class Standing

Riley Flanagan, Senior; Michael Fornito, Senior; Tania Rivas, Junior

Lead Presenter's Name

Riley Flanagan

Faculty Mentor Name

Christine Walck


Abstract

Adoption of augmented and virtual reality (AR and VR) interfaces in the aerospace and defense fields has been inhibited by conspicuous and cumbersome input mechanisms such as gestures and voice commands. Silent speech interfaces using non-invasive electromyography (EMG) sensors are posited as a means of controlling AR and VR interfaces with the potential for inconspicuous, high-bandwidth input. Our objective is to develop a silent speech interface that receives input from subvocalizations via skin-surface EMG sensors, which is then decoded into commands for controlling a heads-up display built on a Microsoft HoloLens. EMG sensors are placed over the digastric, stylohyoid, sternohyoid, and cricothyroid muscles in the anterior cervical region. The collected data are used to train a convolutional neural network that functions as a classifier, matching the subject's subvocal input against a word library. The user dons the wearable interface and uses it to silently send commands through subvocalizations to control an AR device. The effectiveness of the wearable interface will be measured by word recognition accuracy in mouthed trials using the current command library. Future work includes expanding the dataset used to train the recognition model and a live demonstration of controlling an augmented reality interface.
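
The abstract describes a convolutional neural network that classifies multi-channel surface-EMG windows into words from a small command library. Below is a minimal sketch of one way such a classifier could be structured in PyTorch; it is not the authors' actual model, and the channel count, window length, and command words are illustrative placeholders.

# Minimal sketch (assumed architecture): a 1-D CNN that classifies fixed-length
# windows of multi-channel surface-EMG into words from a small command library.
import torch
import torch.nn as nn

NUM_CHANNELS = 4          # one channel per monitored muscle (assumption)
WINDOW_SAMPLES = 512      # samples per subvocalization window (assumption)
COMMAND_LIBRARY = ["select", "back", "up", "down", "confirm"]  # placeholder words

class SubvocalCNN(nn.Module):
    def __init__(self, num_classes: int = len(COMMAND_LIBRARY)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(NUM_CHANNELS, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) of raw or band-pass-filtered EMG
        return self.classifier(self.features(x).squeeze(-1))

# Example forward pass on a dummy batch of EMG windows.
model = SubvocalCNN()
dummy = torch.randn(8, NUM_CHANNELS, WINDOW_SAMPLES)
logits = model(dummy)                 # shape (8, number of commands)
predicted = logits.argmax(dim=1)      # indices into COMMAND_LIBRARY

In a deployed system, the predicted index would be mapped to its command word and forwarded to the HoloLens application; the classifier itself would be trained on labeled EMG recordings of the mouthed command words.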

Did this research project receive funding support (Spark, SURF, Research Abroad, Student Internal Grants, Collaborative, Climbing, or Ignite Grants) from the Office of Undergraduate Research?

Yes, Student Internal Grants

