NVIDIA GeForce GTX 11 Series GPUs Might Leverage AI Engine For Real-Time HairWorks Modeling

NVIDIA is on the cusp of releasing a new round of graphics cards, and there have been plenty of rumors and speculation as to what the company's next-generation GPUs, codenamed Turing, will bring to the table. One possible feature is leveraging artificial intelligence to render more realistic-looking hair in real time.

This already exists to some extent with NVIDIA's HairWorks technology, the culmination of over eight years of research and development. Enabling HairWorks in supported games certainly makes individual strands of hair stand out, but it can also exact a stiff performance penalty. There's also room for improvement, and it could come from a deep learning-based method being developed collaboratively by researchers from the University of Southern California, Pinscreen, and Microsoft.

"Realistic hair modeling is one of the most difficult tasks when digitizing virtual humans," the researchers said. "In contrast to objects that are easily parameterizable, like the human face, hair spans a wide range of shape variations and can be highly complex due to its volumetric structure and level of deformability in each strand."

Part of what's interesting about this is that NVIDIA posted about the research on its developer blog. Might NVIDIA be planning to implement some form of AI-based hair rendering in its GeForce GTX 1100 series? We don't know yet, but for what it's worth, the researchers used Titan Xp GPUs with the cuDNN-accelerated PyTorch deep learning framework to train their convolutional neural network.
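
For a rough sense of what that kind of training setup looks like, here is a minimal PyTorch sketch of a convolutional network being trained to regress 3D strand points from 2D orientation images. The layer sizes, strand counts, and plain L2 loss are illustrative assumptions on our part, not the actual HairNet architecture:

```python
# A minimal sketch (not the researchers' actual HairNet code) of training a
# CNN in PyTorch to regress 3D strand points from a 2D orientation image.
# Layer sizes, strand counts, and the plain L2 loss are illustrative guesses.
import torch
import torch.nn as nn

NUM_STRANDS = 1024        # assumed number of strands per hairstyle
POINTS_PER_STRAND = 100   # assumed samples along each strand

class ToyHairNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional encoder over a 2-channel 256x256 orientation field.
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.ReLU(),    # -> 128x128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 64x64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x32
            nn.AdaptiveAvgPool2d(1),                                 # -> 128x1x1
        )
        # Decoder maps the latent code to (strands x points x xyz).
        self.decoder = nn.Linear(128, NUM_STRANDS * POINTS_PER_STRAND * 3)

    def forward(self, orientation_img):
        z = self.encoder(orientation_img).flatten(1)
        return self.decoder(z).view(-1, NUM_STRANDS, POINTS_PER_STRAND, 3)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ToyHairNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Random tensors standing in for (orientation image, ground-truth strand) pairs.
images = torch.randn(8, 2, 256, 256, device=device)
strands = torch.randn(8, NUM_STRANDS, POINTS_PER_STRAND, 3, device=device)

for step in range(10):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), strands)  # L2 on strand points
    loss.backward()
    optimizer.step()
```

On a CUDA-capable GPU, PyTorch routes these convolutions through cuDNN automatically, which is the acceleration the researchers cite.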

The network was trained on a dataset comprising over 40,000 different hairstyles and 160,000 corresponding 2D orientation images rendered from random views. There are three steps to the neural network pipeline: pre-processing, hair strand generation, and reconstruction.
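
As a concrete, and entirely hypothetical, illustration of how such a dataset might be wrapped for training, each sample would pair one rendered orientation image with the 3D strands of the hairstyle it was rendered from. The file layout and naming scheme below are invented for illustration, since the paper's actual data format isn't described here:

```python
# Hypothetical PyTorch Dataset wrapper for the hairstyle data; the on-disk
# layout (orient/<style>_<view>.pt and strands/<style>.pt) is invented here
# purely for illustration.
from pathlib import Path

import torch
from torch.utils.data import DataLoader, Dataset

class HairOrientationDataset(Dataset):
    def __init__(self, root: str):
        # One entry per rendered view, so roughly 160,000 samples in total.
        self.root = Path(root)
        self.samples = sorted((self.root / "orient").glob("*.pt"))

    def __len__(self) -> int:
        return len(self.samples)

    def __getitem__(self, idx: int):
        orient_path = self.samples[idx]
        style = orient_path.stem.rsplit("_", 1)[0]    # strip the view suffix
        orientation = torch.load(orient_path)          # (2, H, W) orientation field
        strands = torch.load(self.root / "strands" / f"{style}.pt")
        return orientation, strands                    # network input, target

loader = DataLoader(HairOrientationDataset("data/hair"), batch_size=8, shuffle=True)
```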

"A preprocessing step is first adopted to calculate the 2D orientation field of the hair region based on the automatically estimated hair mask. Then, HairNet takes the 2D orientation fields as input and generates hair strands represented as sequences of 3D points. A reconstruction step is finally performed to efficiently generate a smooth and dense hair model," the researchers said.

Hair Reconstruction (Source: Arxiv.org [PDF])

According to the researchers, the neural network delivers more detail and better-looking results than similar systems that attempt to generate 3D hair. It can also handle a variety of hairstyles, including wavy, straight, and very curly.

There is still more work to be done. At present, the researchers' method struggles with "exotic hairstyles," such as kinky hair, afros, and buzz cuts. The main reason is that those styles are not yet represented in the system's training dataset.

It's an interesting development though, especially since NVIDIA is so heavily invested in AI. We'll have to wait and see if NVIDIA adopts this approach, and/or if the GeForce GTX 1100 series comes armed with Tensor cores that could help with this sort of thing.