The human hand can perform over a thousand distinct movements: gripping, twisting, pinching, pushing and many more. Gathering information about these movements, however, is difficult, whether the goal is to develop better treatments for people with disabilities, to build virtual reality tools that model the hand, or simply to learn more about the body.
Now, researchers at the MIT Zhao Lab have developed an ultrasound wristband that can continuously track the muscles and tendons beneath the skin, imaging joints and recording the architecture of the bones in real time. Paired with a type of artificial intelligence designed to decode these signals, the device could change how we gather information from inside the body, the researchers hope.
“Ultrasound is a very good way to communicate with the body because it can penetrate deep tissues and organs,” said Xuanhe Zhao, a professor of mechanical engineering, who led the new work, published in Nature Electronics.
Ultrasound improves on current methods in many ways. Gathering movement information typically involves using cameras, which cannot precisely detect and capture every subtle movement of the hand joints. Another common alternative is EMG, or electromyography, in which tiny needles placed within the body pick up and transmit electrical signals from the muscles. While EMG can be used both for general tracking and in prosthetics, its signals are hard to interpret: it can't always tell how hard you pinch, for example, because light and firm touches look similar in the data.
More motion detection
In contrast, ultrasound is non-invasive, which is particularly important in patient care, where avoiding further harm matters as much as maintaining the specificity needed to understand recovery, according to Zhao. Ultrasound can also detect joint movement with 22 degrees of freedom, meaning it can capture movement in 22 of the 27 directions in which a human hand can move.
“The technology has significant potential, I would say primarily for the control of upper limb prosthesis,” said Paolo Bonato, the director of the Motion Analysis Lab at Spaulding Rehabilitation Hospital, who was not involved in the research. Since the movement of lower limbs is typically simpler, Bonato believes this technology would be more valuable in the upper limbs. “There is nothing like typing that you do with the lower limbs.”
To train the artificial intelligence that decodes the data, the team used a motion capture system with physical markers placed on a hand. This allowed the scientists to observe movements and convert them into precise joint angle data.
The device has myriad potential applications, from virtual reality to medicine, including prosthetics, Zhao said. “We could really understand how muscles and tendons move during recovery, for example, after stroke.”
Clinicians could see inside a patient’s body and track exactly how their muscles and joints are moving during recovery.
Dian Li, a mechanical engineering PhD student in Zhao’s lab, is working on adapting the underlying technology for devices that can be worn on other parts of the body. To him, if the model can be made to work on the hand, the most dexterous part of the human body, then mapping movement in a knee, an elbow or a shoulder should be comparatively straightforward.
More hands on deck
There are some limitations to the current model, however. The dataset is currently based on 10 hands, which is not large enough to generalize to any hand. Since hand sizes vary from person to person, the wristband captures a differently sized image each time, and the AI model that interprets the data cannot yet adapt to that. The team continues to train the model, aiming for around 100 hands total, to adapt it to any future patient’s or user’s hand.
Bonato, whose work focuses on wearable sensors for motor rehabilitation, also noted that for the technology to be applied to patients, future work would need to examine how well the wristband can detect hand function in addition to movement. For example, measures of force are vital for controlling prosthetics. If the device can only detect movement, but not how forceful the movement is, a prosthetics user could do something dangerous, such as accidentally crushing a plastic cup of hot coffee.
Overall, Zhao and his lab hope to continue developing devices at the interface of humans and machines. “The mission of my lab is really merging human with machine and AI,” Zhao said. “We believe there’s a huge opportunity on this interface.”
Zoe Beketova is a student in MIT’s Science Writing Program.
This piece originally appeared on Scope, a publication of the Graduate Program in Science Writing at MIT.