Is this project an undergraduate, graduate, or faculty project?

Undergraduate

Group

Poster Session; 10-minute Oral Presentation

Authors' Class Standing

Charles Pollock - Senior
Jeremiah Lantzer - Senior
Marcus Isnard - Senior
Abdullah Alosaimi - Senior

Lead Presenter's Name

Jeremiah Lantzer

Faculty Mentor Name

Dr. Eduardo Divo

Abstract

Developing heterogeneous vehicle operations is the next step in expanding how this technology can be utilized. Vehicles that operate in similar environments can work together autonomously more easily than vehicles designed for different environments. This project focuses on utilizing the unique abilities of two vehicles with different operating environments and merging them into one system controlled by a single user. The system pairs a ground vehicle with a quadcopter, and the two work in tandem. The user's spatial awareness and maneuverability are achieved through a Heads-Up Display combined with EMG sensors that recognize hand gestures. Because EMG sensors replace a physical controller, the user's hands remain free for other tasks. Upon completion of this project in the coming weeks, each vehicle will be independently controllable by the user, with sensor information sent back to the Heads-Up Display. The system will also allow both vehicles to work in tandem: the user controls one vehicle while the second vehicle reacts to the movements of the first.

Did this research project receive funding support (Spark or Ignite Grants) from the Office of Undergraduate Research?

Yes, Ignite Grant


Cross Platform Training via Augmented Reality and Neuromuscular Control Systems

 
