Video Results Navigation Page

This webpage contains all video results that accompany this submission. Please note that the results are best viewed in fullscreen.

Animation on in-the-wild Phone Capture

These are results of hair animation on single-view, phone-captured sequences with depth. Here we take a nodding sequence and a swinging sequence as examples. We first perform keypoint extraction and head mesh tracking on the single-view phone-captured video, shown in the first two columns. To achieve smooth in-the-wild face tracking and resolve scale ambiguity, we use depth as additional supervision for the head mesh tracking. Then, with the head motion information, we propagate the static hair into future configurations, shown in the last column.
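As a rough illustration of how depth serves as additional supervision, the tracking objective can be thought of as a 2D keypoint reprojection term plus a depth term that pins down metric scale. The sketch below is a minimal example with assumed names (verts, K, etc.), not our actual implementation.

```python
import torch

def tracking_loss(verts, keypoint_idx, kp2d_obs, depth_obs, K, w_kp=1.0, w_depth=1.0):
    """Hypothetical depth-supervised head-tracking loss.
    verts: (V, 3) tracked head-mesh vertices in camera space.
    keypoint_idx: mesh vertex indices matched to the detected 2D keypoints.
    kp2d_obs: (N, 2) detected keypoints; depth_obs: (N,) sensor depth at those pixels.
    K: (3, 3) camera intrinsics."""
    kp3d = verts[keypoint_idx]                 # (N, 3) camera-space keypoints
    proj = (K @ kp3d.T).T                      # pinhole projection
    kp2d = proj[:, :2] / proj[:, 2:3]          # perspective divide
    loss_kp = ((kp2d - kp2d_obs) ** 2).mean()  # 2D reprojection term
    # Depth term: anchors the mesh to metric depth, resolving scale ambiguity.
    loss_depth = ((kp3d[:, 2] - depth_obs) ** 2).mean()
    return w_kp * loss_kp + w_depth * loss_depth
```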

Nodding

We show animation results for four different hair styles, all driven by the same nodding motion.

Swinging

We show animation results for four different hair styles, all driven by the same swinging motion.

Static

Other Identities

We can animate the same hair models for different people. The first row shows subject 2 and the second row shows subject 3; the first column uses hair style 1 and the second column uses hair style 2.

Novel View Synthesis

We show video comparisons on novel view synthesis. From left to right, we show rendering results from MVP, HVH, and our method, with the ground-truth video in the last column.

Animation on Lightstage Capture

We show animation results on multiview capture from a lightstage. From left to right, the three columns show the bald-head capture from a side camera, the animation with overlaid hair from the frontal camera, and the animation with overlaid hair from the side camera.

Range of Motion

We show animation results of our dynamic model on a range of motions. The hair animation is generated by evolving the initial hair state conditioned on head motion and gravity direction.

Our method is also robust to different hair styles with diverse motion patterns.
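In pseudocode, this rollout amounts to repeatedly applying a learned dynamics update to the current hair state; the sketch below is only illustrative, and names such as dynamics_model and head_poses are assumptions rather than our exact interfaces.

```python
def animate(dynamics_model, initial_hair_state, head_poses, gravity_dir):
    """Roll out hair animation frame by frame (illustrative sketch).
    head_poses: per-frame head transforms; gravity_dir: world-space gravity."""
    state = initial_hair_state
    frames = [state]
    for t in range(1, len(head_poses)):
        # Each update is conditioned on the head motion between consecutive
        # frames and on the gravity direction.
        state = dynamics_model(state,
                               prev_pose=head_poses[t - 1],
                               cur_pose=head_poses[t],
                               gravity=gravity_dir)
        frames.append(state)
    return frames
```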

Effect of Point Flow Supervision

We show a comparison between models with (left) and without (right) ptsflow supervision below. The model without ptsflow supervision exhibits slight jittering and inconsistent volumetric texture, while the model with ptsflow supervision does not.
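For reference, a point-flow supervision term of this kind can be sketched as a penalty on the difference between the predicted per-point motion and a reference flow. The snippet below is a hedged sketch with assumed variable names, not our exact loss.

```python
import torch

def ptsflow_loss(points_t, points_t1_pred, flow_ref):
    """points_t: (N, 3) hair points at frame t.
    points_t1_pred: (N, 3) predicted points at frame t+1.
    flow_ref: (N, 3) reference flow for the same points (e.g. from tracking)."""
    pred_flow = points_t1_pred - points_t          # predicted per-point motion
    return ((pred_flow - flow_ref) ** 2).mean()    # deviation from reference flow
```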

Comparison of Different Initialization

Comparisons between models initialized with different hair styles are shown below. In the first row, we show results from the model trained on the short hair style; in the second row, results from the model trained on the pigtails. The animations in the first column are initialized with the short hair, and those in the second column are initialized with the pigtails. Therefore, the results in the green boxes use the correct initialization, while the results in the orange boxes use mismatched initialization. As we can see, our model can correct the initialization to some extent and perform correct temporal propagation afterwards.

Encprop vs. PtsProp

We compare encprop and ptsprop below. The value of each entry in an encoding is shown as a bar chart on the left, and the corresponding rendering results are shown on the right. As we can see, encprop is more prone to noise and shows more drifting, due to the lack of a noise-cancellation module such as the point encoder used in ptsprop.
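A minimal sketch of the difference between the two schemes is given below, with hypothetical module names: encprop advances the latent code directly, while ptsprop round-trips the predicted code through the point cloud so that the point encoder can cancel accumulated noise.

```python
def encprop_step(z_t, temporal_model, cond):
    # Propagate directly in encoding space; any prediction noise stays in z.
    return temporal_model(z_t, cond)

def ptsprop_step(z_t, temporal_model, point_decoder, point_encoder, cond):
    # Propagate, then round-trip through the hair point cloud. Re-encoding
    # projects the code back onto the encoder's output space, acting as a
    # noise-cancellation step.
    z_next = temporal_model(z_t, cond)
    points = point_decoder(z_next)
    return point_encoder(points)
```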

Point Cloud Encoder for Denoising the Encoding Space

Here we present a more detailed ablation on how the point cloud encoder denoises the encoding vector. We start with a fixed latent hair code, colored in red. We then apply different levels of noise to that encoding to create a noisy version, colored in cyan. Finally, we use the point encoder-decoder to denoise it, producing a denoised version colored in blue. In the videos below, we visualize part of each encoding as a bar chart on the left; each frame shows the codes under a different noise sample. On the right, we show the corresponding rendering of the denoised (blue) code. As shown, our encoder preserves the encoding under moderate noise levels. However, the denoising fails when the noise level is too large and the remapping is applied only once. If we apply the remapping multiple times, the result is noticeably less noisy.
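The experiment can be sketched as follows, with all module names assumed for illustration: perturb a fixed code with scaled Gaussian noise, then remap it through the point encoder-decoder one or more times.

```python
import torch

def remap(z, point_decoder, point_encoder):
    # One remapping pass: decode to a hair point cloud and re-encode.
    return point_encoder(point_decoder(z))

def denoise_experiment(z_fixed, point_decoder, point_encoder,
                       noise_scale=1.0, n_remaps=1):
    noisy = z_fixed + noise_scale * torch.randn_like(z_fixed)  # cyan code
    denoised = noisy
    for _ in range(n_remaps):                                  # blue code
        denoised = remap(denoised, point_decoder, point_encoder)
    return noisy, denoised
```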

Noise Magnitude Multiplied by 0.3

Noise Magnitude Multiplied by 1

Noise Magnitude Multiplied by 3

Noise Magnitude Multiplied by 3, Remapping Applied 5 Times