Why face capture is important and tricky
A quick guide to facial capture and the technology OnPoint Studios uses to be a market leader in this field.
By PETE CARVILL, OnPoint Studios
Often, when we talk about motion capture, the focus is mainly on the whole-body experience. Most people, when the subject comes up, picture performers in black suits, covered with ping-pong balls, leaping about our motion capture studio in Berlin. And if you are a nerd (a label most people in this industry wear as a badge of pride), you will be aware of how performance capture has been used in the production of the Marvel Cinematic Universe, Game of Thrones, and myriad other science-fiction and action franchises.
There is certainly an element of truth in that but, in reality, there is much more fine detail in our work. That comes out with face capture, otherwise known as facial mocap. In the days of The Lawnmower Man, nearly thirty years ago, the face of each 3D character was an extremely basic design that did not, or could not, convey the complexity of facial expression of the film’s actors (Pierce Brosnan and Jeff Fahey).
In 1992, The Lawnmower Man’s effects were considered revolutionary. But times have changed. Producing such facial animation today would be unheard of, simply because by modern standards it is very, very bad. Even the lowest-budget film or TV series would balk at such a slapdash job (although the animation of The Rock in The Scorpion King is the exception that proves this rule).
But before we talk about facial motion capture, we need to know what it is. A good definition can be found in the 2004 academic paper Use and Re-use of Facial Motion Capture Data: “Computer facial animation concerns the realistic animation of human facial expressions, whether those expressions be the common emotional responses (happiness, fear, disgust etc.) or the movement of the lips and jaw during speech.”
Difficulties in facial capture
There are a number of difficulties within facial capture. According to the paper’s authors, these “lie in the application of a discrete sampling of surface points to animate a fine discontinuous mesh. Furthermore, in the general case, where the morphology of the actor’s face does not coincide with that of the model we wish to animate, some form of retargeting must be applied.”
The authors continued: “The difficulty of the task is compounded by the expert nature of the audience. Viewers can often spot computer-generated motions which can appear stilted and unnatural in comparison with their experience in everyday life.”
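The retargeting step the authors describe can be reduced to a simple idea: the expression tracked on the actor has to be remapped onto a character whose face moves differently. Assuming the actor’s performance has already been boiled down to named blendshape weights between 0.0 and 1.0 (a common representation, though not the only one), a deliberately minimal sketch of retargeting looks like this. All shape names and gain values here are hypothetical, not OnPoint’s actual rig:

```python
# Minimal illustration of blendshape retargeting: the actor's tracked
# expression weights (0.0-1.0) are remapped onto character rig controls
# that have different names and different sensitivities.
ACTOR_TO_CHARACTER = {
    # actor blendshape -> (character control, gain)
    "jawOpen":     ("mouth_open",  1.2),   # character's jaw travels further
    "browInnerUp": ("brow_raise",  0.8),
    "mouthSmileL": ("smile_left",  1.0),
    "mouthSmileR": ("smile_right", 1.0),
}

def retarget(actor_weights):
    """Map actor blendshape weights onto character controls, clamped to [0, 1]."""
    character = {}
    for shape, weight in actor_weights.items():
        if shape not in ACTOR_TO_CHARACTER:
            continue  # the character has no equivalent control; drop the shape
        control, gain = ACTOR_TO_CHARACTER[shape]
        character[control] = min(max(weight * gain, 0.0), 1.0)
    return character

frame = {"jawOpen": 0.5, "browInnerUp": 1.0, "mouthSmileL": 0.3}
print(retarget(frame))
# {'mouth_open': 0.6, 'brow_raise': 0.8, 'smile_left': 0.3}
```

Real retargeting solvers are far more sophisticated (they operate on mesh geometry, not just weight tables), but the core problem is exactly this mismatch between the actor’s face and the model’s.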
The two pipelines for face tracking at OnPoint Studios
Using Faceware for the face capture during a full body motion capture shoot
OnPoint Studios has two pipelines by which it performs facial capture: real-time performance and pre-recorded animation. Currently, most of the studio’s work is in the former. “Live facial capture has been around for a while but is not often used in productions,” says face capture supervisor Kevin Clare. “Until recently, we’ve been using Faceware Live, which we use to process the data from a head-mounted camera, sending the live face performance to a computer before streaming it into Unreal. However, we feel that we’ve pushed Faceware Live to its limit and, in order to improve our live capture further, we are switching to Apple’s ARKit.”
Face capture with ARKit for an upcoming super secret project
With ARKit, the team will be able to mount an iPhone to a performer’s chest or head, with the resulting footage streamed directly into Unreal. The new software will allow them to record depth information with the camera, which is then transformed into a manipulable format. All of this will be achieved through a plugin.
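Under the hood, ARKit describes each face frame as roughly 52 named blendshape coefficients between 0.0 and 1.0 (jawOpen, eyeBlinkLeft, and so on), and a receiving plugin essentially consumes a stream of timestamped frames of those coefficients. As a hedged sketch only (the actual wire protocol between the phone and Unreal differs, and the subject name here is invented), one such frame could be serialized and safely parsed like this:

```python
import json
import time

def encode_face_frame(coefficients, subject="HeadCam01"):
    """Serialize one frame of blendshape coefficients as a JSON message.
    'subject' is a hypothetical performer/device identifier."""
    return json.dumps({
        "subject": subject,
        "timestamp": time.time(),
        "blendshapes": coefficients,  # name -> 0.0..1.0, as ARKit reports them
    })

def decode_face_frame(message):
    """Parse a frame back into a dict the animation side can consume."""
    frame = json.loads(message)
    # Clamp values defensively: network data should never drive a rig out of range.
    frame["blendshapes"] = {
        name: min(max(float(value), 0.0), 1.0)
        for name, value in frame["blendshapes"].items()
    }
    return frame

wire = encode_face_frame({"jawOpen": 0.42, "eyeBlinkLeft": 1.3})
frame = decode_face_frame(wire)
print(frame["blendshapes"])  # {'jawOpen': 0.42, 'eyeBlinkLeft': 1.0}
```

The appeal of this representation is that the heavy computer-vision work stays on the phone; what travels to the engine every frame is just a few dozen numbers.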
For the pre-recorded pipeline, OnPoint Studios intends to stick with Faceware but will upgrade to the new Faceware Studio software, which will allow the team to quickly analyse and retarget large amounts of facial performance data. The animators can then focus on polishing and editing the performance to the client’s requirements.
The next big thing at OnPoint Studios
VR streaming including face capture for NeXR Seminar
However, live facial capture remains the majority of the work done by OnPoint Studios. In the near future, the team will be taking part in NeXR Seminar, its virtual seminar on networking. The seminar will go live on Thursday the 17th. The concept is to have a live presenter, Alexander Sascha Wolf, in front of a manipulated background and effects, interacting with animated characters about the value of networking. The entire event will be broadcast in virtual reality through OnPoint’s sister department, VRiday.
Kevin has been working in facial animation for four years, with roughly double that in regular animation, and has been with OnPoint since April 2018. He says that what sets the studio apart is its ongoing focus on live performances. “In comparison to other studios,” he says, “our main focus is on live so everything in that area is based around getting better performances.”
The team have also been working with partners to improve their capabilities (in fact, I wrote about this in an earlier post). “We have the virtual seminar being released soon,” says Kevin, “in which we are going to track the presenter’s facial performance in real time. Seminar participants won’t just see the body but also the face and fingers, and this is all being done live. It’ll be a full performance capture — audio, video, animation — done with the presenter.”
Kevin says that the virtual seminar is pretty cool, and a neat collaboration between all aspects of the company. He is looking forward to seeing it all go live. You should be, too!