This research group specializes in developing alternative methods for human-machine interaction as applied to device control and human performance augmentation.
Extension of the Human Senses
The overarching goal of this project is to increase the safety, efficiency, and reliability of human-machine systems by applying advanced sensing, human modeling, and adaptive automation technologies. We aim to develop multimodal interfaces that allow humans to interact with automated systems in a natural manner more closely approximating human communication, and that give automated systems greater insight into user intentions. Because the technology does not require traditional keyboard or joystick-like controls, it also provides redundant control channels during emergencies. The sensing technology will additionally supply a neurobehavioral data stream for automated crew health monitoring.
The primary research objective of the Extension of the Human Senses group is to research and develop novel algorithms for modeling and pattern recognition in dynamic, non-stationary environments. Our work spans all stages of using neuro-electric signals for augmentation: data acquisition, sensor development, signal processing, modeling, pattern recognition, interface development, and experimentation.
Image left: Flight demonstration using EMG Bio-sleeve.
Signal processing environment
– EHS has developed a distributed, dataflow-based Signal Processing Environment for Algorithm Development (SPEAD), which is used for all of our studies and is available to our partners. The environment lets users build sophisticated machine learning algorithms by wiring processing blocks together. The blocks run in parallel on standard PCs and Macs, and processing can be distributed across multiple machines.
– The biggest challenge in using EMG and EEG signals is acquiring them reliably over long durations; standard medical technology is not adequate for this purpose. We are currently working with industrial partners to develop non-contact electrodes that can be sewn into clothing to detect EMG and EEG.
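The block-wiring idea behind SPEAD can be illustrated with a minimal toy dataflow pipeline. This is a hypothetical sketch, not the actual SPEAD API (which is not documented here): blocks receive samples, transform them, and emit results to whatever blocks are wired downstream.

```python
# Toy dataflow pipeline in the spirit of SPEAD (hypothetical API).
# Each block transforms a stream of samples; blocks are wired output-to-input.

class Block:
    def __init__(self):
        self.downstream = []

    def wire(self, other):
        """Connect this block's output to another block's input."""
        self.downstream.append(other)
        return other  # enables chained wiring: a.wire(b).wire(c)

    def emit(self, sample):
        for block in self.downstream:
            block.receive(sample)

    def receive(self, sample):
        raise NotImplementedError


class Source(Block):
    def push(self, sample):
        self.emit(sample)


class MovingAverage(Block):
    """Smooths a signal with a fixed-length sliding window."""
    def __init__(self, window):
        super().__init__()
        self.window = window
        self.buf = []

    def receive(self, sample):
        self.buf.append(sample)
        if len(self.buf) > self.window:
            self.buf.pop(0)
        self.emit(sum(self.buf) / len(self.buf))


class Sink(Block):
    """Collects the processed stream."""
    def __init__(self):
        super().__init__()
        self.values = []

    def receive(self, sample):
        self.values.append(sample)


# Wire: source -> moving average -> sink
src, avg, out = Source(), MovingAverage(3), Sink()
src.wire(avg).wire(out)
for x in [1.0, 2.0, 3.0, 4.0]:
    src.push(x)
print(out.values)  # -> [1.0, 1.5, 2.0, 3.0]
```

In a distributed version of this pattern, `emit` would hand samples to blocks running in other processes or on other machines rather than calling `receive` directly.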
Modeling and pattern recognition
– The majority of our effort is focused on developing novel algorithms for detecting and recognizing patterns in EMG and EEG signals. We have successfully developed a range of techniques, including hidden Markov modeling, neural-network-based recognition, Bayesian approaches to signal modeling, and information-theoretic approaches to cause-and-effect analysis such as transfer entropy.
– Our laboratory facilities include state-of-the-art equipment for simultaneously acquiring up to 128 channels of EEG and 48 channels of very-high-speed EMG, as well as fully immersive virtual simulation environments, including a curved 30-foot projection system.
– EHS is focused on enabling the following capabilities:
- suit-integrated tele-operation devices
- silent communication
- automated interface adaptation via state assessment
- virtual cockpit/command consoles
- tele-operation in the presence of delays
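The transfer-entropy technique mentioned above measures how much one signal's past improves prediction of another signal's next value, beyond that signal's own past. The sketch below is a minimal plug-in estimator for discrete (here binary) time series; the variable names and test sequences are illustrative, not the group's actual estimators or data.

```python
# Minimal plug-in estimate of transfer entropy T(Y -> X) for discrete series:
# T = sum over (x_{t+1}, x_t, y_t) of p(x1,x0,y0) * log2 [ p(x1|x0,y0) / p(x1|x0) ].
from collections import Counter
from math import log2

def transfer_entropy(y, x):
    """Estimate T(Y -> X) in bits from two equal-length discrete sequences."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))         # (x_t, y_t)
    pairs_xx = Counter(zip(x[1:], x[:-1]))          # (x_{t+1}, x_t)
    singles = Counter(x[:-1])                       # x_t
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), count in triples.items():
        p_joint = count / n
        p_cond_full = count / pairs_xy[(x0, y0)]         # p(x1 | x0, y0)
        p_cond_hist = pairs_xx[(x1, x0)] / singles[x0]   # p(x1 | x0)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te

# Here x_{t+1} = y_t, while x's own past does not determine its next value,
# so knowing y adds predictive information and T(Y -> X) is positive.
y = [0, 0, 1, 0, 1, 1, 0, 1]
x = [0, 0, 0, 1, 0, 1, 1, 0]
print(transfer_entropy(y, x))  # positive (in bits)
```

Real EMG/EEG signals are continuous, so practical estimators first discretize or use kernel/nearest-neighbor density estimates; the counting scheme above conveys only the core idea.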
The Extension of the Human Senses group (EHS) focuses on developing alternative human-machine interfaces by replacing traditional interfaces (keyboards, mice, joysticks, microphones) with bio-electric control and augmentation technologies.
Our work originated with the development of a sleeve that senses electromyograms (EMG) associated with muscle contractions in the forearm. The signals are translated so that a user can mimic the movements of a joystick, with the gestures converted into actual joystick commands to the computer without requiring any physical device. This work was later extended to recognize finger movements associated with typing, which are translated into keystrokes.
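The gesture-to-command idea can be sketched as a toy classifier: compute a windowed root-mean-square (RMS) amplitude per EMG channel, then map the feature vector to the nearest gesture centroid. The channel count, gesture set, and centroid values below are illustrative assumptions, not the group's actual parameters.

```python
# Toy EMG-to-joystick sketch: per-channel RMS features, nearest-centroid
# classification. All gestures and centroid values are hypothetical.
import math

def rms_features(window):
    """Root-mean-square amplitude for each EMG channel in the window."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

# Hypothetical per-gesture feature centroids, as if learned from training
# data (two forearm channels: flexor, extensor).
CENTROIDS = {
    "stick_left":  [0.9, 0.1],
    "stick_right": [0.1, 0.9],
    "rest":        [0.05, 0.05],
}

def classify(window):
    """Return the gesture whose centroid is nearest in feature space."""
    feats = rms_features(window)
    return min(
        CENTROIDS,
        key=lambda g: sum((f - c) ** 2 for f, c in zip(feats, CENTROIDS[g])),
    )

# Strong flexor activity, quiet extensor -> interpreted as "stick_left".
window = [[0.8, -0.9, 1.0, -0.85], [0.04, -0.03, 0.05, -0.02]]
print(classify(window))  # -> stick_left
```

The recognized label would then be injected into the system as an ordinary joystick event, which is what lets the interface replace the physical device.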
Image right: Dr. Charles Jorgensen using subvocal speech to navigate Mars terrain.
Another unique capability under development is a silent speech interface, led by Dr. Charles Jorgensen. EMG electrodes placed on the throat measure throat and tongue muscle movements, which are then translated into words from a limited vocabulary. This capability enables silent communication and speech augmentation in extremely noisy environments such as the space station.
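One simple way to match a throat-EMG signal against a limited vocabulary is template matching with dynamic time warping (DTW), which tolerates differences in speaking rate. The sketch below uses made-up single-channel envelope templates for two words; it is an illustration of the matching idea, not the actual system's features or vocabulary.

```python
# Toy limited-vocabulary recognition from a throat-EMG envelope using
# dynamic time warping (DTW) template matching. Templates are hypothetical.

def dtw_distance(a, b):
    """Classic DTW: minimal cumulative |a_i - b_j| alignment cost."""
    inf = float("inf")
    cost = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    cost[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[len(a)][len(b)]

# Hypothetical single-channel EMG envelope templates for two words.
TEMPLATES = {
    "stop": [0.1, 0.8, 0.8, 0.1],
    "go":   [0.1, 0.3, 0.6, 0.9],
}

def recognize(envelope):
    """Return the vocabulary word whose template aligns most cheaply."""
    return min(TEMPLATES, key=lambda w: dtw_distance(envelope, TEMPLATES[w]))

# A time-stretched "stop"-like utterance still matches the "stop" template.
print(recognize([0.1, 0.7, 0.85, 0.8, 0.75, 0.15]))  # -> stop
```

Hidden Markov models, mentioned above among the group's techniques, generalize this idea by modeling the word templates statistically rather than as fixed sequences.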
A third area of research, led by Dr. Leonard Trejo, is the development of brain-computer interfaces. This work focuses on sensing potentials on the surface of the scalp, known as electroencephalograms (EEG). The signals are then processed to recognize thought-based control commands. We have also shown that it is possible to perform real-time state assessment and adapt user interfaces when subjects become fatigued.
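A common ingredient in EEG state assessment of this kind is spectral band power: rising theta-band (roughly 4-7 Hz) power relative to alpha-band (roughly 8-12 Hz) power is a widely used fatigue indicator. The sketch below is a minimal illustration with a synthetic signal; the sampling rate, threshold, and bands are assumptions, not the group's actual method.

```python
# Minimal sketch of EEG band-power state assessment on a synthetic signal.
# Sampling rate, bands, and threshold are illustrative assumptions.
import cmath
import math

FS = 128  # sampling rate in Hz (assumed)

def band_power(signal, lo, hi):
    """Power in [lo, hi] Hz via a naive DFT (adequate for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * FS / n
        if lo <= freq <= hi:
            coef = sum(s * cmath.exp(-2j * math.pi * k * t / n)
                       for t, s in enumerate(signal))
            power += abs(coef) ** 2
    return power

def looks_fatigued(signal, ratio_threshold=1.0):
    """Flag a window whose theta/alpha power ratio exceeds the threshold."""
    theta = band_power(signal, 4, 7)
    alpha = band_power(signal, 8, 12)
    return theta / alpha > ratio_threshold

# Synthetic one-second window dominated by a 5 Hz (theta) oscillation,
# with a weaker 10 Hz (alpha) component.
drowsy = [math.sin(2 * math.pi * 5 * t / FS)
          + 0.2 * math.sin(2 * math.pi * 10 * t / FS)
          for t in range(FS)]
print(looks_fatigued(drowsy))  # theta dominates -> True
```

In an adaptive interface, a flag like this would trigger a change in automation level or task pacing rather than a direct control command.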