Despite massive technical improvements, hair and fur simulation remains a challenge for animation studios. Accounting for gravity, wind, and strand interactions is a compute-intensive task that cannot be performed accurately in real time.
This is why grooms are typically created without any physics: the artist manipulates curves directly to create the required hairstyle. The downside is that the dynamic behavior can be surprising once simulation is applied: unplanned sagging effects are not uncommon.
NVIDIA speeds up hair simulation on the GPU
Gilles Daviet (NVIDIA France) unveils a novel approach to hair simulation in a publication titled Interactive Hair Simulation on the GPU using ADMM. This paper will be showcased at SIGGRAPH 2023. The idea is to reformulate the physics solve using ADMM (the Alternating Direction Method of Multipliers), an optimization technique that maps well to the GPU, in order to predict the way hair would behave in the real world.
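To give a feel for the method's core building block (the paper covers the full elastic-rod formulation, which is far more involved), here is a minimal, generic ADMM iteration applied to a toy constrained least-squares problem. The problem, the function names, and the parameters are purely illustrative and are not taken from the paper:

```python
import numpy as np

# Toy problem: minimize 0.5*||x - b||^2 subject to x >= 0,
# split as f(x) = 0.5*||x - b||^2 and g(z) = indicator(z >= 0),
# with the consensus constraint x = z. ADMM alternates:
#   x-update: minimize f plus a quadratic penalty tying x to z
#   z-update: project onto the constraint set (here, clamp at 0)
#   u-update: accumulate the running constraint violation
def admm_nonneg_least_squares(b, rho=1.0, iters=100):
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # closed-form quadratic solve
        z = np.maximum(0.0, x + u)             # projection onto z >= 0
        u = u + x - z                          # scaled dual-variable update
    return x

b = np.array([-1.0, 2.0])
print(admm_nonneg_least_squares(b))  # converges to [0., 2.]
```

The appeal of this splitting for hair is that each sub-step is simple and local, which lends itself to massive GPU parallelism, rather than one large coupled solve per frame.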
This technique provides greater performance, and the simulation can even be computed at interactive framerates depending on the complexity of the haircut. Here is an example of what can be achieved:
This showcase video is quite impressive in terms of performance.
Gilles Daviet explains that he ran tests using various scenes. The hair simulation took between 0.18 and close to 8 seconds per frame. In a nutshell, computation time increases with factors such as the number and length of strands, and how precisely collisions are handled. As for memory, the simulation required from 1 GB up to around 2×9.5 GB (on a dual-GPU setup) depending on the scene.
This faster hair simulation technique can be used in a variety of ways.
For example, explains Gilles Daviet, a physics-based editing tool can be used to tweak an existing groom while respecting elasticity and self-collision constraints. The demonstration tool he created allows “uniformly scaling the hair length and/or curvature; trimming the rods along a cutting plane; and direct manipulation of strands within a selection radius through spring-like forces.” The paper gives more information about optimizations that can improve performance in such a case. He managed to edit, in real time, a groom with around 86,000 rendered curves by simulating only 10% of the strands, the rest being deformed according to the guides’ motion.
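The guide-strand idea above can be sketched in a few lines: only a small subset of strands is simulated, and every rendered strand inherits the motion of its nearest guide. This is a deliberately simplified illustration, not the paper's exact interpolation scheme, and all names are made up:

```python
import numpy as np

# Guide-based deformation sketch: rendered strands are not simulated;
# each one copies the displacement of its nearest simulated guide.
def deform_from_guides(rendered_roots, guide_roots_rest, guide_disp):
    # Distance from every rendered strand root to every guide root
    # at rest pose, via broadcasting: result has shape (N, G).
    d = np.linalg.norm(
        rendered_roots[:, None, :] - guide_roots_rest[None, :, :], axis=2
    )
    nearest = np.argmin(d, axis=1)          # index of closest guide per strand
    return rendered_roots + guide_disp[nearest]  # apply that guide's motion

guides = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
disp = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 2.0]])
roots = np.array([[1.0, 0.0, 0.0], [9.0, 0.0, 0.0]])
print(deform_from_guides(roots, guides, disp))
```

A production tool would blend several nearby guides with smooth weights rather than snapping to a single one, but the cost structure is the same: simulation scales with the guide count, not the rendered strand count.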
The technique detailed in this paper could also be used in a procedural workflow, to create variations from a single base groom. The simulator would apply various actions (growing/trimming hair, applying external forces, and so on) and generate a huge number of variants. As Gilles Daviet explains, “most generated variants may not be of interest” but “the fast iteration time of the overall process allows to quickly identify and focus over the ones that are.”
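Such a variant-generation loop might look like the following sketch, where the solver call is stubbed out and every parameter name is hypothetical, chosen only to mirror the kinds of actions the article mentions (growing/trimming hair, applying forces):

```python
import random

# Stub standing in for the actual GPU hair solver.
def simulate(base_groom, length_scale, trim_height, wind):
    return {"groom": base_groom, "length_scale": length_scale,
            "trim_height": trim_height, "wind": wind}

def generate_variants(base_groom, n, seed=0):
    rng = random.Random(seed)  # seeded for reproducible batches
    variants = []
    for _ in range(n):
        variants.append(simulate(
            base_groom,
            length_scale=rng.uniform(0.5, 1.5),  # grow or shrink hair
            trim_height=rng.uniform(0.0, 0.3),   # trim along a plane
            wind=rng.uniform(0.0, 5.0),          # external force strength
        ))
    return variants

variants = generate_variants("base_groom", n=100)
print(len(variants))  # 100
```

With per-frame solve times well under a second for moderate grooms, a batch like this becomes practical to generate and review interactively.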
This could be a quick and efficient way to create a database of grooms, ready to be used on a crowd, for example.
This paper should be seen as a first step towards future interactive grooming workflows. Gilles Daviet highlights that his work does have limitations. For example, collision detection is not perfect (which can create clumps), and the effect of hair products such as hairspray is not taken into account: this kind of detail is required to reproduce real-world haircuts. “Interactions with the surrounding air, or two-way coupling with soft bodies” are also not handled by the solver.
Last but not least, real-time physics-based grooming tools will also require solving various UX challenges. Gilles Daviet explains that haptic feedback and smart selection tools could be a good way to “make the experience closer to that of a real-life barbershop”.
In other words, we can expect further research in this area in the coming years.