Carnegie Mellon University’s Human-Computer Interaction Institute has come up with a way to translate hand movements into commands for your smartwatch.
Most smartwatches today have tiny touchscreens, which aren’t always the easiest things to navigate. As a way to make browsing menus, answering calls and reading messages more intuitive, a team of researchers from Carnegie Mellon’s Human-Computer Interaction Institute has developed a prototype gesture-sensing strap that can see inside a wearer’s arm and track the movements of their muscles. While it may still be a while before such a product is commercially available, Chris Harrison and Yang Zhang are well on their way to making it a reality.
The concept is based on electrical impedance tomography (EIT), a technique commonly found in medical and industrial settings. The EIT devices used there, however, are large, expensive and cumbersome to wear. Harrison and Zhang’s unit, named Tomo, is dramatically smaller and less invasive, allowing it to be integrated into consumer electronics typically worn on the wrist, like a smartwatch strap.
A simple EIT setup involves one emitter that sends out a high-frequency AC signal captured by a receiver. This data can be used to calculate the impedance between the electrodes and interpreted as desired. Multiplexing multiple emitters and receivers produces many path combinations, which can then be used to generate a two-dimensional map of an object (or in this case, the muscles inside a user’s wrist). With enough measurements gathered, an image of the inside of the arm can be reconstructed and analyzed in a way that’s quite similar to PET and CT scans.
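To get a feel for how multiplexing scales up the number of measurement paths, here’s a small illustrative Python sketch (not the team’s actual code) that enumerates the unique emitter-receiver electrode pairs in a ring of electrodes:

```python
from itertools import combinations

def measurement_paths(n_electrodes):
    """Enumerate all unique emitter-receiver electrode pairs.

    Each pair corresponds to one impedance measurement path
    through the arm's cross-section.
    """
    return list(combinations(range(n_electrodes), 2))

# With just 8 electrodes (as on Tomo's prototype band), you already
# get 28 distinct paths to feed into the 2D reconstruction.
paths = measurement_paths(8)
print(len(paths))  # → 28
```

Real EIT drive schemes vary (adjacent-pair driving is common), but the combinatorial point is the same: the number of paths grows much faster than the number of electrodes.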
To test their theory, the researchers built a prototype band with eight electrodes; each takes a turn sending a small electrical signal through the wearer’s arm while the others capture its strength coming out the other side. An Arduino Pro Mini (ATmega328) was interfaced with a bio-impedance sensing board, and transmitted the calculated impedance to a laptop over Bluetooth.
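The paper doesn’t spell out Tomo’s wire format, but a minimal sketch of how one scan of impedance readings might be packed into bytes for a serial/Bluetooth link could look like this (the frame layout here is purely hypothetical):

```python
import struct

def pack_frame(impedances):
    """Pack one scan of impedance readings (floats, in ohms) into a
    byte frame for transmission: a 2-byte little-endian count followed
    by 4-byte floats. This layout is illustrative, not Tomo's actual
    protocol.
    """
    return struct.pack(f"<H{len(impedances)}f", len(impedances), *impedances)

def unpack_frame(frame):
    """Recover the list of impedance readings from a packed frame."""
    (count,) = struct.unpack_from("<H", frame, 0)
    return list(struct.unpack_from(f"<{count}f", frame, 2))

# 28 readings per scan with 8 electrodes: 2 + 28 * 4 = 114 bytes
frame = pack_frame([512.0] * 28)
print(len(frame))  # → 114
```

Keeping frames this compact matters on an ATmega328-class microcontroller, which has very little RAM to buffer outgoing data.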
Although the images generated by Tomo are fairly low-resolution, they still provide enough detail for a machine learning program to distinguish between a wide variety of hand and finger gestures, such as swiping, pinching, giving a thumbs up or, our favorite, the Spider-Man.
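As a toy illustration of that recognition step, the Python sketch below classifies an incoming impedance frame by its distance to per-gesture template frames. This nearest-centroid approach is a simplified stand-in for whatever classifier Tomo actually trains, and the three-value frames are invented for the example:

```python
import math

def nearest_gesture(frame, templates):
    """Classify an impedance frame by Euclidean distance to the
    closest per-gesture template frame (a simplified stand-in for
    a trained gesture classifier).
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda g: dist(frame, templates[g]))

# Toy 3-path template frames; a real frame from an 8-electrode band
# would have a few dozen impedance values per scan.
templates = {
    "fist":      [1.0, 0.2, 0.1],
    "thumbs_up": [0.2, 1.0, 0.3],
    "pinch":     [0.1, 0.3, 1.0],
}
print(nearest_gesture([0.9, 0.25, 0.15], templates))  # → fist
```

In practice you would average many training scans per gesture to build each template, and a proper classifier would also handle frames that match no gesture at all.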
As a proof-of-concept, Harrison and Zhang modded a Samsung Gear watch to demonstrate how Tomo can augment interactions with nothing more than hand movements. For example, envision sifting through a list of messages, grasping to open one or stretching your fingers to close it. Or picture answering the phone by doing nothing more than clenching your fist, and dismissing an incoming call with a flick of the hand. Pretty cool, right?
Intrigued? Head over to the project’s paper here, or see it in action below!