Unconstrained Realtime Facial Performance Capture

Pei-Lun Hsieh, Chongyang Ma, Jihun Yu, Hao Li

Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), June 2015

[paper]   [video]   [poster]   [bibtex]


We introduce a realtime facial tracking system specifically designed for performance capture in unconstrained settings using a consumer-level RGB-D sensor. Our framework provides uninterrupted 3D facial tracking even in the presence of extreme occlusions, such as those caused by hair, hand-to-face gestures, and wearable accessories. Anyone's face can be tracked instantly, and users can be switched without an extra calibration step. During tracking, we explicitly segment face regions from any occluding parts by detecting outliers in the shape and appearance input, using an exponentially smoothed and user-adaptive tracking model as a prior. Our face segmentation combines depth and RGB input data and is robust against illumination changes. To enable continuous and reliable facial feature tracking in the color channels, we synthesize plausible face textures in the occluded regions. Our tracking model is personalized on the fly by progressively refining the user's identity, expressions, and texture with reliable samples and temporal filtering. We demonstrate robust and high-fidelity facial tracking on a wide range of subjects with highly incomplete and largely occluded data. Our system works in everyday environments, is fully unobtrusive to the user, and has implications for consumer AR applications and surveillance.
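The abstract names two concrete mechanisms: an exponentially smoothed, user-adaptive tracking model used as a prior, and outlier detection against that prior to segment occluders from the face. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the smoothing factor, the depth tolerance, and the synthetic data are all assumptions introduced for demonstration.

```python
# Illustrative sketch of an exponentially smoothed depth prior with
# outlier-based occlusion masking. Hypothetical names and thresholds;
# the real system also fuses RGB appearance cues.
import numpy as np

ALPHA = 0.1        # smoothing factor for the user-adaptive prior (assumed)
DEPTH_TOL = 0.02   # depth residual tolerance in meters (assumed)

def occlusion_mask(prior_depth, observed_depth, tol=DEPTH_TOL):
    """Flag pixels whose depth deviates strongly from the smoothed prior
    as occluders (hair, hands, accessories); the rest is treated as face."""
    residual = np.abs(observed_depth - prior_depth)
    return residual <= tol  # True = face, False = occluded

def update_prior(prior_depth, observed_depth, face_mask, alpha=ALPHA):
    """Exponentially smooth the per-pixel depth prior, adapting it only
    where the current frame is believed to show unoccluded face."""
    blended = (1.0 - alpha) * prior_depth + alpha * observed_depth
    return np.where(face_mask, blended, prior_depth)

# Toy frame loop on synthetic depth maps.
rng = np.random.default_rng(0)
prior = np.full((4, 4), 0.60)                    # initial face depth prior (m)
for _ in range(5):
    frame = 0.60 + 0.005 * rng.standard_normal((4, 4))
    frame[0, :2] = 0.45                          # simulated occluder near camera
    face = occlusion_mask(prior, frame)
    prior = update_prior(prior, frame, face)
print(face)  # pixels covered by the simulated occluder report False
```

Because the prior adapts only on inlier pixels, a passing hand or strand of hair does not corrupt the face model, which is what allows tracking to resume seamlessly once the occluder moves away.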