New UI from Microsoft, Forget about GUI and NUI
The mouse and keyboard aren’t going anywhere anytime soon, but this is not stopping human-computer interaction researchers from pushing the boundaries of UI and delivering new concepts. While the graphical user interface made the keyboard and mouse the core of a worldwide ubiquitous interaction paradigm, the GUI as we know it represents the past. With Windows 7, Microsoft is ushering in a new era, in which the Natural User Interface changes the way users interact with their machines, through touch, gestures and speech. However, Microsoft itself is exploring additional UI concepts, such as muscle-computer interfaces (muCIs).
According to a patent filing unearthed by TechFlash, the Redmond company has asked to be awarded patents for a new method of computer interaction involving electromyography (EMG). What Microsoft is proposing is a system for recognizing gestures from forearm EMG signals, essentially using electrical activity from muscles in the hands to deliver instructions to the computer.
The patent filing reveals that Microsoft has been researching a new “approach [designed] to infer user input from sensed human muscle activity. Advances in muscular sensing and processing technologies make it possible for humans to interface with computers directly with muscle activity. One sensing technology, electromyography (EMG), measures electrical potentials generated by the activity of muscle cells. EMG-based systems may use sensors that are carefully placed according to detailed knowledge of the human physiology. Specific muscle activity is measured and used to infer movements, intended or not. While this has been done with meticulously placed EMG sensors under artificial test conditions and finely tuned signal processing, to date, gestures have not been decoded from forearm EMG signals in a way that would allow everyday use.”
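The filing does not spell out a concrete signal pipeline, but a common first step in EMG processing is to window the raw multi-channel signal and compute a simple amplitude feature per sensor channel. The sketch below is a minimal illustration of that idea in Python; the channel count, window size and sampling rate are assumptions, not details from the patent.

```python
import numpy as np

def rms_features(emg_window: np.ndarray) -> np.ndarray:
    """Per-channel root-mean-square amplitude for one window of
    EMG samples, shaped (samples, channels)."""
    return np.sqrt(np.mean(np.square(emg_window), axis=0))

# Hypothetical: a 32 ms window of 8-channel forearm EMG sampled at 1 kHz.
window = np.random.randn(32, 8) * 0.1  # stand-in for real sensor readings
features = rms_features(window)
print(features.shape)  # (8,) -- one amplitude estimate per sensor
```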
At the bottom of this article, you will be able to see a video demonstrating muCIs in action (courtesy of BeingManan). The new UI methodology is capable of sensing and decoding human muscular activity and translating the result into computer input. The technology can associate the pressure exerted by fingers with certain commands, but can also assign finger-specific functions and connect functions with certain postures.
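Neither the article nor the filing details how decoded gestures map to commands, but conceptually it amounts to a lookup from a classified gesture (finger identity plus pressure level or posture) to an action. A minimal sketch, with entirely hypothetical gesture names and commands:

```python
# Hypothetical gesture-to-command table; all names are illustrative only.
GESTURE_COMMANDS = {
    ("index", "light_press"): "left_click",
    ("index", "hard_press"): "double_click",
    ("middle", "light_press"): "right_click",
    ("fist", "hold"): "drag_mode",
}

def dispatch(finger: str, action: str) -> str:
    """Translate a decoded (finger, action) pair into a command name."""
    return GESTURE_COMMANDS.get((finger, action), "no_op")

print(dispatch("index", "hard_press"))  # -> "double_click"
```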
“A machine learning model is trained by instructing a user to perform proscribed gestures, sampling signals from EMG sensors arranged arbitrarily on the user's forearm with respect to locations of muscles in the forearm, extracting feature samples from the sampled signals, labeling the feature samples according to the corresponding gestures instructed to be performed, and training the machine learning model with the labeled feature samples. Subsequently, gestures may be recognized using the trained machine learning model by sampling signals from the EMG sensors, extracting from the signals unlabeled feature samples of a same type as those extracted during the training, passing the unlabeled feature samples to the machine learning model, and outputting from the machine learning model indicia of a gesture classified by the machine learning model,” the Microsoft patent filing adds.
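To make the pipeline described in the filing concrete: a classifier is trained on labeled feature samples collected while the user performs instructed gestures, then fed unlabeled samples of the same type at recognition time. Here is a minimal sketch in Python with scikit-learn; the filing names no specific model, so the SVM classifier and the random stand-in data below are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# --- Training phase (the filing's pipeline, in sketch form) ---
# Hypothetical: each row is a feature vector (e.g., per-channel RMS)
# extracted while the user performed an instructed gesture.
X_train = np.random.randn(200, 8)            # stand-in feature samples
y_train = np.random.randint(0, 4, size=200)  # label: which gesture was instructed

model = SVC(kernel="rbf")  # any classifier would do; an SVM is one common choice
model.fit(X_train, y_train)

# --- Recognition phase ---
# Unlabeled feature samples of the same type are later extracted from
# live EMG signals and passed to the trained model.
X_live = np.random.randn(1, 8)
gesture_id = model.predict(X_live)[0]  # the model's "indicia of a gesture"
print(f"classified gesture: {gesture_id}")
```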