HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture
Ziyan Wang1,3
Giljoo Nam3
Tuur Stuyck3
Stephen Lombardi3
Michael Zollhöfer3
Jessica Hodgins1,2
Christoph Lassner3
1Carnegie Mellon University
2Meta AI
3Reality Labs Research
CVPR 2022
HVH turns sparse driving signals, such as tracked head vertices and guide hair strands (left-most column), into volumetric primitives (middle column) that enable free-viewpoint rendering and animation (right-most column).

Abstract

Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, complex physical interactions, and non-trivial visual appearance. Yet, hair is a critical component of believable avatars. In this paper, we address these problems: 1) We use a novel volumetric hair representation composed of thousands of primitives. Each primitive can be rendered efficiently, yet realistically, by building on the latest advances in neural rendering. 2) To obtain a reliable control signal, we present a novel way of tracking hair at the strand level. To keep the computational effort manageable, we use guide hairs and classic techniques to expand those into a dense head of hair. 3) To better enforce temporal consistency and improve the generalization ability of our model, we further optimize the 3D scene flow of our representation with multi-view optical flow, using volumetric raymarching. Our method can not only create realistic renderings of recorded multi-view sequences, but also produce renderings for new hair configurations given new control signals. We compare our method with existing work on viewpoint synthesis and drivable animation and achieve state-of-the-art results.
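The volumetric raymarching mentioned above follows the standard emission-absorption quadrature used in neural volume rendering: per-sample densities are converted to opacities, accumulated transmittance weights each sample's color, and the weighted colors are summed along the ray. The sketch below shows this generic compositing step, not the paper's actual implementation; all names and shapes are illustrative assumptions.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite radiance samples along one ray.

    densities: (N,) non-negative density at each sample
    colors:    (N, 3) RGB at each sample
    deltas:    (N,) spacing between consecutive samples
    Returns the composited RGB and the per-sample weights.
    """
    alphas = 1.0 - np.exp(-densities * deltas)           # per-sample opacity
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    weights = alphas * trans                             # contribution of each sample
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

The same per-sample weights can also be used to splat 2D optical-flow targets back onto the 3D scene flow of the primitives, which is how a multi-view flow loss typically reaches a volumetric representation.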



Paper

Z. Wang, et al.
HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture


[arXiv]
[Bibtex]

Bibtex

@misc{wang2021hvh,
    title={HVH: Learning a Hybrid Neural Volumetric Representation for Dynamic Hair Performance Capture},
    author={Ziyan Wang and Giljoo Nam and Tuur Stuyck and Stephen Lombardi and Michael Zollhoefer and Jessica Hodgins and Christoph Lassner},
    year={2021},
    eprint={2112.06904},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}

Video Results

Free-Viewpoint Rendering

Our method enables free-viewpoint rendering of the dynamic upper head with different hairstyles and hair motions.

Comparison to Baseline Approaches

We compare our method with several baseline methods: non-rigid NeRF, NSFF, MVP, and a vanilla per-frame NeRF model.


Tracking Visualization

Guide hair strands overlaid on two different views.


Hair Editing and Animation

Because guide hair strands provide a tangible interface, we can generate new hair motion animations by directly editing the strands used as the input driving signal.
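Since guide strands are just ordered 3D point sequences, an edit to the driving signal can be as simple as displacing strand points while keeping the roots fixed to the scalp. The sketch below is a hypothetical example of such an edit (a sinusoidal sway); the function name, shapes, and parameters are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def sway_guide_strands(strands, axis=2, amplitude=0.05, phase=0.0):
    """Hypothetical edit: add a sinusoidal sway to guide strands.

    strands: (num_strands, points_per_strand, 3) array of strand points,
             ordered root to tip.
    The offset grows linearly from root (index 0) to tip, so strand roots
    stay attached to the scalp.
    """
    n_pts = strands.shape[1]
    weight = np.linspace(0.0, 1.0, n_pts)        # 0 at root, 1 at tip
    offset = amplitude * np.sin(phase) * weight  # per-point displacement
    edited = strands.copy()
    edited[..., axis] += offset                  # broadcast over all strands
    return edited
```

Sweeping `phase` over time yields a sequence of edited guide strands that can serve as a new driving signal for the learned model.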