A Local Appearance Model for Volumetric Capture of Diverse Hairstyles
Ziyan Wang1,2
Giljoo Nam2
Aljaz Bozic2
Chen Cao2
Jason Saragih2
Michael Zollhöfer2
Jessica Hodgins1
1Carnegie Mellon University
2Reality Labs Research
3DV 2024 (Oral Presentation)
We present a novel local appearance model that captures the photorealistic appearance of diverse hairstyles volumetrically.

Abstract

Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars. Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability. In this paper, we present a novel method for creating high-fidelity avatars with diverse hairstyles. Our method leverages the local similarity across different hairstyles and learns a universal hair appearance prior from multi-view captures of hundreds of people. This prior model takes 3D-aligned features as input and generates dense radiance fields conditioned on a sparse colored point cloud. Because our model splits different hairstyles into local primitives and builds a prior at that level, it can handle various hair topologies. Through experiments, we demonstrate that our model captures a diverse range of hairstyles and generalizes well to challenging new hairstyles. Empirical results show that our method improves upon state-of-the-art approaches in capturing and generating photorealistic, personalized avatars with complete hair.
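To make the idea of a primitive-level prior concrete, below is a minimal PyTorch-style sketch of a shared local-primitive decoder: each primitive is conditioned on nearby colored points expressed in its 3D-aligned local frame and decoded into a dense RGBA (radiance and density) volume. All module names, feature dimensions, and the aggregation scheme are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of a shared local-primitive radiance-field decoder.
# Each hair primitive is conditioned on a few nearby colored points (in its
# 3D-aligned local frame) and decoded into a dense RGBA voxel grid.
import torch
import torch.nn as nn

class LocalPrimitiveDecoder(nn.Module):
    """Shared decoder applied to every local primitive of a hairstyle."""
    def __init__(self, feat_dim=64, grid_res=8):
        super().__init__()
        self.grid_res = grid_res
        # Per-point conditioning: local position (xyz) + color (rgb) -> feature
        self.point_encoder = nn.Sequential(
            nn.Linear(6, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        # Decode the aggregated 3D-aligned feature into a dense RGBA volume
        self.volume_decoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 4 * grid_res ** 3),  # RGB + density per voxel
        )

    def forward(self, local_points):
        # local_points: (num_primitives, num_points, 6) = xyz + rgb
        feat = self.point_encoder(local_points).mean(dim=1)  # aggregate per primitive
        rgba = self.volume_decoder(feat)
        r = self.grid_res
        return rgba.view(-1, 4, r, r, r)  # dense local radiance fields

# Toy usage: 256 primitives, each conditioned on 32 nearby colored points.
if __name__ == "__main__":
    decoder = LocalPrimitiveDecoder()
    points = torch.rand(256, 32, 6)
    volumes = decoder(points)
    print(volumes.shape)  # torch.Size([256, 4, 8, 8, 8])
```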



Paper

Z. Wang, et al.
A Local Appearance Model for Volumetric Capture of Diverse Hairstyles


[Paper]
[Bibtex]
[Videos]

Video

We show free-viewpoint renderings of our model for different identities with diverse hairstyles. The input point cloud for each identity is shown on the right as a reference.

Our model generalizes reasonably well to novel identities and creates photorealistic appearances for new hairstyles.

[Side-by-side comparison videos: Ours vs. Instant-NGP]
We show avatar reconstruction results on iPhone captures, where we use sparse views as input and fine-tune both our model and Instant-NGP on them. Our model provides both a strong prior and regularization on shape and appearance.
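To illustrate how a pretrained prior can regularize sparse-view fine-tuning, here is a hypothetical sketch (not the released pipeline): the model is fit to a few views with a photometric loss while a penalty keeps its weights close to a frozen copy of the prior. The stand-in renderer, loss weight, and function names are assumptions for illustration.

```python
# Hypothetical sketch of sparse-view fine-tuning with a prior regularizer:
# fit to a few phone views while penalizing drift from the pretrained prior,
# which regularizes shape and appearance where views give no supervision.
import copy
import torch
import torch.nn as nn

def finetune(model, rays, target_rgb, steps=200, lr=1e-3, prior_weight=0.1):
    prior = copy.deepcopy(model)              # frozen copy of the pretrained prior
    for p in prior.parameters():
        p.requires_grad_(False)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        pred_rgb = model(rays)                # stand-in for differentiable rendering
        photometric = nn.functional.mse_loss(pred_rgb, target_rgb)
        # keep fine-tuned weights close to the prior where data is sparse
        prior_reg = sum(((p - q) ** 2).mean()
                        for p, q in zip(model.parameters(), prior.parameters()))
        loss = photometric + prior_weight * prior_reg
        loss.backward()
        optimizer.step()
    return model

# Toy usage with a stand-in renderer mapping ray features to RGB.
if __name__ == "__main__":
    renderer = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, 3))
    rays = torch.rand(1024, 6)                # ray origins + directions
    target = torch.rand(1024, 3)              # colors from sparse phone views
    finetune(renderer, rays, target)
```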


Bibtex

@inproceedings{ziyan_3dv_2024,
  author    = {Ziyan Wang and Giljoo Nam and Aljaz Bozic and Chen Cao and Jason Saragih and Michael Zollh{\"o}fer and Jessica Hodgins},
  title     = {A Local Appearance Model for Volumetric Capture of Diverse Hairstyles},
  booktitle = {International Conference on 3D Vision, 3DV 2024, Davos, Switzerland, March 18-21, 2024},
  publisher = {{IEEE}},
  year      = {2024},
}