
Iain Anderson

Sensor fusion for human motion capture


Standing up and walking require closely controlled muscle activity. Without the coordination of our optical sensors (eyes), acceleration sensors (the balance organs of the inner ear), and the stretch sensors embedded in our muscles and tendons, we run the risk of tripping and falling.



Sensory data is processed by local nerve bundles (known as central pattern generators) and by our central nervous system. We can stand with our eyes closed if the balance organs in our ears and the stretch sensors in our muscles and tendons are all functioning. With practice, we can learn to walk with a damaged balance system in our ears, provided our eyes are open. When all of our sensors (optical, acceleration, and stretch) are working well, we can dance and perhaps pirouette too. This demonstrates what engineers refer to as sensor fusion. Key aspects of sensor fusion include the use of sensor types that complement each other and the efficient processing of all of the data for improved situational awareness, reliability, accuracy, and performance. This is an area of great interest for the development of gaming, motion tracking, virtual reality, and augmented reality systems.
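The idea of blending a drift-free but noisy sensor with a smooth but drifting one can be sketched with a classic complementary filter. This is a minimal illustration, not StretchSense code; the sensor names and the 0.98 blend factor are assumptions chosen for clarity:

```python
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One step of a complementary filter estimating a tilt angle (degrees).

    The gyro rate integrates smoothly but drifts over time; the
    accelerometer's gravity reading is drift-free but noisy. Blending
    the two is one of the simplest forms of sensor fusion.
    """
    # Tilt implied by the gravity vector measured by the accelerometer.
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))
    # Trust the integrated gyro in the short term, the accelerometer long term.
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Each sensor covers the other's weakness, which is exactly the complementarity described above.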



If we were to characterize the movement of an entire 'dancing' human body, we could use a standard approach: dressing our subject in a suit covered in many fiducial markers and having them move about in a space surrounded by infrared cameras that triangulate marker position in 3D space. Using many cameras around the individual overcomes occlusion (when a marker is hidden by a limb or the body) and also improves accuracy. But this is expensive (roughly $1000 per camera), confined to a fixed space, and computationally expensive too. As an alternative, one could apply sensor fusion: using multiple sensors of more than one type.

The goal is to improve motion capture reliability, provide better situational awareness for the user, or improve the performance of the avatar.

For this, we might consider combining accelerometers, which sense orientation relative to gravity, with sensors that measure strain directly, like the ones we manufacture at StretchSense.

An application of a wearable with complementary sensor types was demonstrated several years ago by workers in the Biomimetics Lab of the Auckland Bioengineering Institute: three capacitive StretchSense stretch sensors were combined with a gravity-sensitive accelerometer in a glove that controlled an early version of the Doom game.


Bending the 1st finger fired a weapon; the 2nd finger opened a door; and the thumb was used to change weapons. Forward, backward, left, or right movement of the player through the labyrinth could be signaled via the accelerometer's response to tipping of the glove in pitch and roll. Thus, with only 4 sensors, the player was able to guide and control the avatar.
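A control scheme like this can be sketched as a simple mapping from sensor readings to game commands. This is an illustrative reconstruction, not the original glove firmware; the thresholds, command names, and the assumption that tilt is read as pitch and roll from the accelerometer are all hypothetical:

```python
def glove_to_commands(finger1, finger2, thumb, pitch, roll,
                      bend_threshold=0.5, tilt_threshold=15.0):
    """Map four glove sensor readings to game commands.

    finger1/finger2/thumb: normalized stretch (0 = straight, 1 = fully bent).
    pitch/roll: glove tilt in degrees, derived from the accelerometer's
    gravity vector. Thresholds are illustrative; a real system would
    calibrate them per user.
    """
    commands = []
    if finger1 > bend_threshold:
        commands.append("fire")
    if finger2 > bend_threshold:
        commands.append("open_door")
    if thumb > bend_threshold:
        commands.append("switch_weapon")
    if pitch > tilt_threshold:
        commands.append("forward")
    elif pitch < -tilt_threshold:
        commands.append("backward")
    if roll > tilt_threshold:
        commands.append("right")
    elif roll < -tilt_threshold:
        commands.append("left")
    return commands
```

Bending the first finger while tipping the glove forward would then emit both a fire and a move command, just as the four-sensor glove did.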

Sensor fusion has also been used in the world of virtual youtubing or “V-Tubing.” In the demo below, our StretchSense Smart Glove is being used in conjunction with other devices to capture a user’s motion and control an online avatar.




The use of stretch sensors with inertial measurement units (accelerometers and gyroscopes) and positioning systems such as GPS could potentially reduce the need for cameras, perhaps even eliminate them. This would help to mitigate the problem of occlusion, and a rich multi-sensor dataset could potentially improve the accuracy of the overall measurement. Using more than one sensor type, and multiple sensors, would also improve measurement reliability. This is easier said than done, for it requires a means of processing the multi-sensor dataset. But all of this is possible with purpose-built electronics and good software.
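One simple way such a multi-sensor dataset can improve accuracy is inverse-variance weighting: independent estimates of the same quantity (say, a joint angle from a stretch sensor and the same angle from IMU orientation) are combined so that the fused uncertainty is lower than either sensor's alone. This is a textbook sketch under the assumption of independent, unbiased estimates, not a description of StretchSense's processing pipeline:

```python
def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent estimates.

    estimates: list of (value, variance) pairs for the same quantity,
    e.g. a joint angle measured by a stretch sensor and by an IMU.
    Returns (fused_value, fused_variance). The fused variance is never
    larger than the best single sensor's, which is the accuracy gain
    that motivates multi-sensor motion capture.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return fused, 1.0 / total
```

Two equally uncertain sensors fused this way halve the variance; in practice a Kalman filter generalizes the same idea to time-varying, correlated measurements.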

StretchSense can help you to achieve sensor fusion. Our multi-channel electronics can be integrated with other systems, and we offer Bluetooth and wired approaches for signal transfer. Our capacitive sensors are integrated with fabric and easy to sew into a garment for motion capture. Please give us a call; we'd like to help you record your pirouette or control your avatar.

