MetaHuman Animator is a new feature set that enables you to capture an actor’s performance using an iPhone or stereo head-mounted camera system (HMC) and apply it as high-fidelity facial animation on any MetaHuman character, without the need for manual intervention.
Every subtle expression, look, and emotion is accurately captured and faithfully replicated on your digital human. Even better, it’s simple and straightforward to achieve incredible results—anyone can do it.
If you're new to performance capture, MetaHuman Animator is a convenient way to bring facial animation to your MetaHumans based on real-world performances.
And if you already do performance capture, this new feature set will significantly improve your existing capture workflow, reduce time and effort, and give you more creative control. Just pair MetaHuman Animator with your existing vertical stereo head-mounted camera to achieve even greater visual fidelity.
Intrigued? Let’s take a deeper dive…
High-fidelity facial animation—the easy way
Previously, it would have taken a team of experts months to faithfully recreate every nuance of an actor’s performance on a digital character. Now, MetaHuman Animator does the hard work for you in a fraction of the time—and with far less effort.
The new feature set uses a 4D solver to combine video and depth data together with a MetaHuman representation of the performer. The animation is produced locally using GPU hardware, with the final animation available in minutes.
That all happens under the hood, though—for you, it’s pretty much a case of pointing the camera at the actor and pressing record. Once captured, MetaHuman Animator accurately reproduces the individuality and nuance of the actor’s performance onto any MetaHuman character.
Facial animation for any MetaHuman
The facial animation you capture using MetaHuman Animator can be applied to any MetaHuman character or any character adopting the new MetaHuman facial description standard in just a few clicks.
That means you can design your character the way you want, safe in the knowledge that the facial animation applied to it will work.
To get technical for a minute, this is possible because Mesh to MetaHuman can now create a MetaHuman Identity from just three frames of video, together with depth data captured by your iPhone or reconstructed from your vertical stereo head-mounted camera footage.
This personalizes the solver to the actor, enabling MetaHuman Animator to produce animation that works on any MetaHuman character. It can even use the audio to produce convincing tongue animation.
Use an iPhone for capture
We want to take facial performance capture from something only experts with high-end capture systems can achieve, and turn it into something for all creators.
At its simplest, MetaHuman Animator can be used with just an iPhone (12 or above) and a desktop PC. That’s possible because we’ve updated the Live Link Face iOS app to capture raw video and depth data, which is then ingested directly from the device into Unreal Engine for processing.
You can also use MetaHuman Animator with your existing vertical stereo head-mounted camera system to achieve even greater fidelity.
Whether you’re using an iPhone or stereo HMC, MetaHuman Animator will improve the speed and ease of use of your capture workflow. This gives you the flexibility to choose the hardware best suited to the requirements of your shoot and the level of visual fidelity you are looking to hit.
The captured animation data supports timecode, so facial performance animation can easily be aligned with body motion capture and audio to deliver a full character performance.
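Under the hood, timecode alignment comes down to simple frame arithmetic once the frame rate is known. As a minimal illustrative sketch—this is not part of Unreal Engine’s API, and the function names here are hypothetical—here is how an offset between a facial take and a body take could be computed from SMPTE-style timecodes:

```python
# Illustrative sketch (hypothetical helpers, not Epic's API): aligning a
# facial-animation take with body mocap using SMPTE timecode, assuming a
# fixed, non-drop-frame rate shared by both recordings.

def timecode_to_frames(tc: str, fps: int = 30) -> int:
    """Convert an 'HH:MM:SS:FF' timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offset(face_start: str, body_start: str, fps: int = 30) -> int:
    """Frames to shift the facial take so it lines up with the body take."""
    return timecode_to_frames(face_start, fps) - timecode_to_frames(body_start, fps)

# Example: facial capture started 2 seconds and 5 frames after body capture.
offset = align_offset("01:00:02:05", "01:00:00:00", fps=30)
print(offset)  # 65 frames later
```

Because every track carries the same timecode, the same arithmetic lines up audio as well, which is what makes assembling a full character performance from separately captured sources straightforward.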
Perfect for making creative choices on set
MetaHuman Animator is perfectly adapted for creative iteration on set because it enables you to process and transfer facial animation onto any MetaHuman character, fast.
Need an actor to give you more, dig into a different emotion, or simply explore a new direction? Have them do another take. You’ll be able to review the results in about the time it takes to make a cup of coffee.
Because animation data can be reviewed right there in Unreal Engine while you’re on the shoot, you can evaluate the quality of the capture well before the final character is animated.
And because reshoots can take place while the actor is still on stage, you can get the best take in the can there and then, instead of having to absorb the cost and time needed to bring everyone back at a later date.