Imagine waking up tomorrow and having to navigate your day with no use of your hands. How limited would you feel not being able to get dressed, pick up your lunch, greet co-workers, or play sport?
It’s no wonder, then, that film and game studios, intent on bringing realism to make-believe fantasy worlds, are working towards artfully capturing an actor’s body language, in which hand gestures play a key part.
The movie industry smashed records with $40 billion in global box-office revenue in 2018, and at over $100 billion the gaming industry is bigger still, larger than movies and music combined. Backed by mega-budgets, studios are stepping up to the challenge of awing audiences with bigger blockbuster hits year after year. Audiences now routinely see advanced CGI, adrenaline-inducing stunts, and sophisticated animation, and their expectations of escapism from directors and game studios keep growing.
Delivering more immersive experiences hinges on studios’ ability to capture the subtleties of hand motion for richer content creation. Bringing computer-generated characters to life is an arduous task: a film crew uses cameras and body markers to capture an actor’s performance on a mocap stage and then applies that performance to digital characters in post-production. What audiences may not appreciate, though, is how limitations in camera technology force animators to painstakingly hand-draw missing frames, all to deliver a few moments of perfection on the big screen.
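To give a sense of the gap-filling that camera dropouts force on animators, here is a minimal Python sketch. The array layout and the use of plain linear interpolation are assumptions for illustration, not any studio’s actual cleanup pipeline, which is far more sophisticated.

```python
import numpy as np

def fill_marker_gaps(positions):
    """Fill dropped frames (NaN rows) in a single marker track by linear interpolation.

    positions: (num_frames, 3) array of x, y, z marker positions,
    with NaN rows where the cameras lost sight of the marker.
    """
    filled = positions.copy()
    frames = np.arange(len(filled))
    for axis in range(3):
        track = filled[:, axis]          # view into `filled`, edited in place
        missing = np.isnan(track)
        if missing.any() and not missing.all():
            track[missing] = np.interp(frames[missing],
                                       frames[~missing],
                                       track[~missing])
    return filled

# Example: a marker occluded for frames 2-3 of a five-frame clip.
track = np.array([
    [0.0, 0.0, 1.00],
    [0.1, 0.0, 1.02],
    [np.nan, np.nan, np.nan],
    [np.nan, np.nan, np.nan],
    [0.4, 0.0, 1.08],
])
print(fill_marker_gaps(track))
```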
The holy grail for studios is the ability to live stream hand, facial and full body motion data from a large group of actors with low latency. Because an actor’s voice and movement run together to form a character, capturing the full performance in one shot is highly advantageous: it saves post-production time and cost and preserves the authenticity of the performance.
Image credit: GREE VR Studio Lab adding hand tracking to a full body suit for a Virtual YouTube showcase at SIGGRAPH Asia.
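As a rough illustration of what low-latency streaming of that motion data could look like on the wire, here is a minimal Python sketch. The packet layout, port, joint count and frame rate are purely hypothetical assumptions, not any studio’s or vendor’s actual protocol.

```python
import socket
import struct
import time

HOST, PORT = "127.0.0.1", 9870   # hypothetical receiver on the mocap network
NUM_JOINTS = 21                  # e.g. one hand skeleton

def pack_frame(actor_id: int, timestamp: float, joint_angles: list) -> bytes:
    # Compact binary header: actor id (uint16), timestamp (double),
    # then one float per joint angle, small enough to send every frame.
    return struct.pack(f"<Hd{NUM_JOINTS}f", actor_id, timestamp, *joint_angles)

def stream_frames(frames_per_second: int = 90) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / frames_per_second
    for _ in range(10):                      # short demo burst
        angles = [0.0] * NUM_JOINTS          # would come from the capture hardware
        packet = pack_frame(actor_id=1, timestamp=time.time(), joint_angles=angles)
        sock.sendto(packet, (HOST, PORT))
        time.sleep(period)

if __name__ == "__main__":
    stream_frames()
```

UDP is used here simply because dropping a stale frame is usually preferable to waiting for a retransmission when latency matters.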
Capturing hand motion is even more critical in VR gaming than in film, because there is an inherently higher expectation of immersion. If you were shaking hands with someone in an interactive VR game and their hand suddenly slumped in yours like cold, limbless rubber, you would recoil. Research suggests that, unlike facial expressions, hands do not put us off as they become more realistic: the closer a virtual hand is to a real one, the more we like it.
Image credit: Sean Firman, an up-and-coming indie game developer, created this VR game demo in which players shoot fireballs from their hands.
An innovative alternative to this time-consuming and expensive process is motion capture gloves. Recent developments from smart glove manufacturers show promise of serving the underserved hand departments of studios.

Mocap gloves must first and foremost be designed with actor comfort in mind: they need to be lightweight so they don’t hinder an actor’s performance. They also need to help technicians reduce production downtime by being quick to set up, providing accurate motion data, and running for a full day’s shoot without a recharge. Stretch sensors that deform as the fingers bend and spread are an attractive solution thanks to their thin form factor and their demonstrated stability after calibration.

StretchSense believes that smart gloves are the seed of a whole new XR industry in the next decade. The movie and gaming industries are paving the way for broader opportunities in gesture-based input peripherals for medical, military and workplace productivity applications.
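As a rough illustration of how stretch-sensor readings can be turned into finger joint angles after a quick calibration, here is a minimal Python sketch. The two-pose calibration, sensor values and linear model are illustrative assumptions, not StretchSense’s actual calibration routine or API.

```python
# Two-point calibration: record a raw reading with the hand flat and with a
# closed fist, then map any reading linearly onto a joint angle in degrees.
# All values below are made up for illustration.

def calibrate(raw_flat: float, raw_fist: float,
              angle_flat: float = 0.0, angle_fist: float = 90.0):
    """Return a function that converts a raw sensor reading to a joint angle."""
    scale = (angle_fist - angle_flat) / (raw_fist - raw_flat)

    def to_angle(raw: float) -> float:
        return angle_flat + (raw - raw_flat) * scale

    return to_angle

# Example: index-finger knuckle sensor calibrated from the two reference poses.
index_knuckle = calibrate(raw_flat=512.0, raw_fist=790.0)
print(index_knuckle(651.0))  # roughly half-bent, about 45 degrees
```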
References:
- 2018 Worldwide Box Office Hits Record as Disney Dominates
- Prosthetic Hands Trigger Uncanny Valley Sense of Creepiness
- Can Looking at a Hand Make Your Skin Crawl? Peering into the Uncanny Valley for Hands
- Valve’s VR Index Controllers Include 87 Sensors to Track Your Fingers and Hands
- Robots: Is the Uncanny Valley Real?