It’s the middle of 2020 and many of us are locked down in an effort to stop the spread of Covid-19.
We figured this is the right time to deliver our first webinar: Xsens StretchSense finger and body mocap in MotionBuilder.
To make it all work, people were tasked with performing separate production duties in different locations.
The distributed team had to use technology that didn’t exist a few years ago in order to make live performance capture possible.
We had CJ Markham operating MotionBuilder from his garage in Seattle, Katie Jo Turk operating the Xsens suit from her living room in Los Angeles, and Todd Gisby using the MoCap Pro glove in New Zealand, all performing simultaneously from their separate locations.
The Covid-19 lockdowns have taught us to reevaluate how we use technology to keep production going and ensure that those of us in the motion capture industry can keep our jobs.
“The show must go on” as they say.
Katie Jo was able to stream live data from LA to CJ in Seattle from her Xsens link suit with the help of Xsens MVN Animate software and a VPN.
The data could then be assembled in MotionBuilder along with MoCap Pro glove data from Todd in New Zealand. Somehow it all worked!
Curating key poses before the shoot
A key focus in the webinar was the utility of Pose Detection Mode. With this functionality, the senior animator or lead animation director can curate the poses they want to see in the final animation before the capture session. When the performer approximates their hand position to one of the pre-defined poses, Pose Detection Mode will transition the character’s fingers to that pose.
This eliminates much of the cleanup work typically needed in post, and for animators that can be a huge time and money saver.
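To make the idea concrete, here is a minimal sketch of how pose detection could work in principle. This is entirely hypothetical illustration code, not StretchSense's actual implementation: it treats each curated pose as a vector of normalized stretch-sensor readings and snaps to the nearest pose once the live reading comes within a threshold distance.

```python
import math

# Hypothetical curated poses: one normalized value (0-1) per stretch sensor.
# Names and values are invented for illustration only.
CURATED_POSES = {
    "fist":      [1.0, 1.0, 1.0, 1.0, 1.0],
    "open_hand": [0.0, 0.0, 0.0, 0.0, 0.0],
    "point":     [0.0, 1.0, 1.0, 1.0, 0.2],
}

def detect_pose(live, threshold=0.4):
    """Return the name of the nearest curated pose if the live sensor
    reading is within `threshold` (Euclidean distance), else None."""
    best_name, best_dist = None, float("inf")
    for name, pose in CURATED_POSES.items():
        dist = math.dist(live, pose)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

print(detect_pose([0.9, 1.0, 0.95, 1.0, 0.9]))  # near "fist" -> "fist"
print(detect_pose([0.5, 0.5, 0.5, 0.5, 0.5]))   # nothing close -> None
```

Once a pose is detected, the animation system would transition the character's fingers to the curated pose rather than passing through the raw sensor data, which is why the resulting animation needs so little cleanup.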
Blend Mode, by contrast, allows for more nuanced and novel gestural animation. This is the kind of performance people look for in live applications, cinematics, and pre-visualization.
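Blend Mode can be pictured in a similar hypothetical way: instead of snapping to a single pose, weight every curated pose by how close the live reading is to it and output a continuous mix. Again, this is a toy sketch under assumed conventions, not the actual product logic:

```python
import math

def blend_poses(live, poses, eps=1e-6):
    """Blend curated poses continuously: weight each pose by inverse
    squared distance to the live reading, then return the weighted
    average pose vector. `poses` maps names to sensor vectors."""
    weights = {name: 1.0 / (math.dist(live, pose) ** 2 + eps)
               for name, pose in poses.items()}
    total = sum(weights.values())
    n = len(next(iter(poses.values())))
    blended = [0.0] * n
    for name, pose in poses.items():
        w = weights[name] / total
        for i, value in enumerate(pose):
            blended[i] += w * value
    return blended

# A reading halfway between two one-sensor poses blends to the midpoint.
print(blend_poses([0.5], {"open": [0.0], "closed": [1.0]}))
```

The inverse-distance weighting means a reading sitting exactly on a curated pose reproduces that pose, while readings between poses glide smoothly through intermediate shapes rather than jumping.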
Full-body performance with no need for recalibration
One of the key points Katie Jo and the StretchSense team emphasized in the webinar was the fact that a vigorous performance shoot didn’t disturb the calibration of either the Xsens suit or the MoCap Pro gloves.
To illustrate this, Katie Jo performed the UCLA Bruin 8 Clap, which all true fans of the university know instinctively (we’re told). In fact, the only thing that couldn’t stand up to the 8 clap test was the Zoom app.
Maintaining calibration is key on set when often there simply isn’t time to keep going back and fixing calibration issues.
StretchSense gloves can achieve consistent calibration because they are designed to fit snugly and comfortably on the hands while the stretch sensors are robust and do not drift.
Where pose detection really shines is in prop interaction. You can capture specific poses for props so those hand poses stay solid when the performer is handling those props. Again, this reduces the amount of post-production work for animators.
Animators are sometimes handed optical finger data that isn't captured well enough to use, and cleaning it up is a headache. The reduced post-work that pose detection offers is great for in-game animation as well as dynamic and cinematic work.
Another application where pose detection is an advantage is live VR/AR where fidelity is important to the final product.
Hand position tracking
MoCap Pro gloves can be purchased with an optional IMU (inertial measurement unit) to track wrist rotation. VIVE trackers can also be used to track the position and rotation of the hand.
The glove has a velcro strip on the back so the inertial hand tracker from the Xsens body suit can be attached easily.
The webinar is packed with information specific to the MoCap Pro glove and Xsens, but it also contains a wealth of advanced mocap tips and power moves that are too extensive to cover here. Watch the webinar to learn more.