
Is Unreal Engine good enough to create animated shows for Netflix? Dwarf Animation tells us everything about their experiments!


Dwarf Animation, based in Montpellier, France, has launched a series of internal tests focused on Unreal Engine. Their initial goal was to determine if some animated series they worked on in the past could be produced today using Unreal.

Given that many studios are interested in using this engine, we asked Dwarf to tell us more. The team was kind enough to shed light on these tests for us, to answer our questions, and to share an exclusive showcase with us!

Some questions we asked them delve deeply into the technical aspects, while others discuss the impact Unreal has on the production process and the way artists work. Feel free to browse the interview based on the topics that interest you. And don’t miss the end of the interview: Dwarf tells us about the impact of these tests on their production pipeline, and about an upcoming training program centered on Unreal Engine!

3DVF: Hello, Dwarf Animation! You recently initiated a series of internal tests around Unreal Engine. Your initial goal was to determine whether animated series the studio worked on could now be produced in Unreal.
What were your expectations, both in terms of results and technical challenges (considering only 2 people at Dwarf were trained in UE5)?

Dwarf Animation: Indeed, we first and foremost wanted to learn what level of quality we could achieve with Unreal Engine. We had seen beautiful projects made with this engine, but not exactly in the style of our usual productions. The short film Blue by Preymaker was the closest one in this regard. However, there were concerns about the quality of Subsurface Scattering (SSS), refraction, and the capabilities of Global Illumination (GI), which are often very approximate and limited in real-time.

We secretly hoped to reach the same quality as Pirata and Capitano or Monchhichi (which used a lot of SSS and GI to create a “rubbery” look). But no one thought we would get a result close to My Dad The Bounty Hunter [Editor’s note: the series is streaming on Netflix].

Showreel – My Dad the Bounty Hunter season 1

Most of us had never worked in the video game industry before. There were fears about the technical vocabulary, workflows, and data management. The most challenging technical aspect was forgetting what we knew and rethinking everything, because approaching this kind of technology the way we would an offline, precomputed pipeline makes the experience impossible, or terribly painful. But sometimes it’s hard to let go of old habits!

3DVF: You wanted to create some animated sequences, relying on Perforce. Can you tell us more about the technical production constraints, and how much time you had to create these sequences? And for those unfamiliar with Perforce, what is it useful for?

For the first project (codenamed “Roger”), we wanted to understand how to organize files, from a technical point of view, for collaborative shot work in Unreal, and to evaluate the engine’s rendering capabilities.

Frame from the first project, “Roger”. The full animation isn’t available online.

We set ourselves several constraints: produce a 30-second short film under production conditions, with a small team, in 4 weeks, without prior training (at the time, only two of us had some experience with Unreal).
To make the task slightly more challenging, we decided to rely exclusively on Lumen for lighting (so no pathtracer) and to use an ACEScg color pipeline. We forbade ourselves from using Nuke, so as to assess the possibility of achieving post-FX such as depth of field, dust particles, volumetrics, or even color corrections directly in the engine.
From an animation perspective, we decided to animate within Unreal. Although the tools are not yet production-ready, we wanted to evaluate their potential.
Last but not least, we used Quadro RTX 4000 GPUs with 8 GB of VRAM, which are a bit old.

Unreal file management was done using the Perforce integration. We had some concerns about its use by our artists because, although Perforce is widespread in the video game industry, it is a very different concept from our current pipeline.
Perforce creates a copy of the project on the artist’s machine (so they work locally). The artist can then take edit rights on a file to modify it. When they’ve finished their work, they submit it to the server so other artists can retrieve their changes. The significant difference is that everyone works on the same data: each submit overwrites the file on the server while keeping a history, so it is always possible to go back. To avoid conflicts, Perforce lets you lock a file while you are editing it, so no one else can modify it at the same time. This allows several users to work simultaneously.
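
For readers who have never touched Perforce, here is a minimal sketch of that sync/edit/submit cycle using the P4Python API. The server address, workspace name, and depot path are hypothetical, and this is not Dwarf’s actual tooling, just an illustration of the workflow described above:

```python
from P4 import P4, P4Exception  # Perforce's official Python API

p4 = P4()
p4.port = "ssl:perforce.example.com:1666"  # hypothetical server
p4.user = "jdoe"
p4.client = "jdoe_workspace"               # the artist's local workspace

try:
    p4.connect()
    p4.run_sync()                                    # pull the latest project files locally
    shot = "//depot/Roger/Content/Shots/SH010.umap"  # hypothetical depot path
    p4.run_edit(shot)                                # take edit rights on the file
    p4.run_lock(shot)                                # lock it so no one else edits it meanwhile

    # ... the artist works on the file in Unreal ...

    change = p4.fetch_change()
    change._description = "Lighting pass on SH010"
    p4.run_submit(change)                            # publish: colleagues sync to retrieve it
except P4Exception as err:
    print(err)
finally:
    p4.disconnect()
```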

We learned a lot from this small project. We were surprised by the smooth exchanges between departments, which increased iterations. It even revealed new workflows that were previously impossible.
Being able to work in the context of a sequence and visualize something close to final render in the viewport saves a lot of time and reduces mistakes.
Lighting with Lumen was reassuring, with excellent quality, especially given that the longest frame rendered in 12 seconds without optimization (we estimated a frame of the same quality in RenderMan at 45 minutes). The shaders are not so different from what can be found in a classic pathtracer. However, we noticed that Unreal required quite a bit of adjustment to achieve a cinematic rendering.
It was also surprising to see the artists’ autonomy even though we hadn’t developed a pipeline. We only had a few scripts to import the assets into the project and generate the sequences; Perforce supported a significant part of the workflow. In the end, the artists quickly assimilated its use. They had more trouble understanding that each asset in an Unreal project is a separate file, and that everything is interconnected (unlike Maya, where there is only one file). Another advantage of Perforce and the file structure in Unreal is that if there is an error, it is straightforward to identify the problem and roll it back.

In conclusion, this project reassured the teams and made them want to go further by doing other tests.

3DVF: You then tried to import data from series like “My Dad the Bounty Hunter” and “Trash Truck.” And after a week of work to handle the shading, lighting… you succeeded! Of course, you took a few shortcuts (there are no hierarchical links between 3D objects, for example). What were your rendering times compared to RenderMan? And what are the visual/technical limitations?

Given the rendering quality of our first test, we wanted to challenge ourselves by adapting a scene from My Dad The Bounty Hunter: The mother’s apartment seen in the first episode.

To work faster, we wrote an exporter in Maya that records the world-space position of each asset, without considering the hierarchy. We then wrote a scene assembly in Python within Unreal to import each geometry as a “Static Mesh” with its textures. We had previously prepared some “Master Materials” (a kind of uber shader) in Unreal. Each asset uses a “Material Instance” whose parent is a Master Material, so if you modify the Master, all its instances are updated. The scene assembly takes care of generating a Material Instance for each asset and assigning the textures. The setup took us a week.
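
As an illustration only, here is a minimal sketch of what such a scene assembly can look like with Unreal’s Python editor scripting (Python Editor Script Plugin enabled). The JSON layout file, asset paths, material parameter name, and Master Material location are all hypothetical; this is not Dwarf’s actual code, just the general pattern of instancing a Master Material per asset and spawning everything at its exported world-space transform:

```python
import json
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
mat_lib = unreal.MaterialEditingLibrary
master = unreal.EditorAssetLibrary.load_asset("/Game/Materials/M_Master")  # hypothetical Master Material

with open("D:/export/roger_layout.json") as f:   # hypothetical file written by the Maya exporter
    layout = json.load(f)

for item in layout:
    # Load the imported Static Mesh and create one Material Instance per asset.
    mesh = unreal.EditorAssetLibrary.load_asset(item["static_mesh"])
    mi = asset_tools.create_asset(
        "MI_" + item["name"], "/Game/Materials/Instances",
        unreal.MaterialInstanceConstant, unreal.MaterialInstanceConstantFactoryNew())
    mat_lib.set_material_instance_parent(mi, master)
    base_color = unreal.EditorAssetLibrary.load_asset(item["basecolor_texture"])
    mat_lib.set_material_instance_texture_parameter_value(mi, "BaseColor", base_color)
    mesh.set_material(0, mi)

    # Spawn the asset at its exported world-space transform (hierarchy flattened in Maya).
    loc = unreal.Vector(*item["translation"])
    rot = unreal.Rotator(*item["rotation"])
    actor = unreal.EditorLevelLibrary.spawn_actor_from_object(mesh, loc, rot)
    actor.set_actor_scale3d(unreal.Vector(*item["scale"]))
```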

Surfacing and lighting also took about a week, as we continued to learn to use the engine. We didn’t try to match the reference perfectly, but we were already delighted with the result.

The RenderMan version took an average of 1h30 per frame to render, not counting RIB export time, denoising, and pre-comp (depth of field, etc.). Moreover, some cheats were required to optimize rendering times (bounce lights, etc.).

The version in Unreal renders the image in 3 seconds, without any lighting trick. The result is impressive. We are amazed at what we can do after 6 weeks of getting to grips with Unreal.
However, we noticed limitations in the quality and detail of SSS, refraction, and light filters. The lack of a subdivision system requires upfront thinking when it comes to assets and can create certain constraints, especially with hair.
By default, Unreal produces beautiful images for video games, but there are many variables to adjust to approach the quality of a pathtracer. Unfortunately, these settings are poorly documented for now.

3DVF: A question for Joachim Guerin: As a Lighting Supervisor at Dwarf, what are your thoughts about the lighting tools under UE5, compared to what you’re used to?

Unreal Engine is quite similar to the usual engines, with some additional limitations. For example, the “light filters”, which only work in black and white, or “light linking”, which is limited to 3 channels. But this is inherent to the optimizations required for real-time, so some habits need to be reconsidered in order to work around them.

There are certain advantages when it comes to lights, such as being able to modify the volumetric contribution independently from diffuse lighting, Global Illumination, and specular. This is similar to Arnold.

We have tools available like the “Light Mixer” that allow us to control the lights of a scene from a single interface. The “Light Functions” and Blueprint allow us to easily add functionalities that don’t exist by default.

It goes without saying that real-time offers a genuine benefit and a lot of enjoyment due to its interactivity. The result in the viewport is very close (if not the same in some cases) to the final render.

To conclude, the shot and sequence structure we were able to set up in Unreal makes sequence lighting smooth and natural. Combined with the fact that rendering is fast, we can more easily work on lighting animations within the Sequencer.

3DVF: Did you stumble upon issues when trying to use effects used for a series like “My Dad the Bounty Hunter”, such as SSS, hair?

Clearly, yes. In every software, these topics are challenging to handle, each being unique. As mentioned earlier, in Unreal we have limitations in the accuracy of SSS and issues with tessellation for hair. We also need to spend some time setting up blockers to avoid flickering caused by Lumen.

So far, we haven’t tested the FX part in Unreal too much. However, there are various approaches that allow achieving interesting results at a lower cost. One of the biggest shortcomings in the engine is the integration of volumetric systems, but this is changing with VDB support improvements planned for version 5.3. We are eager to test them.

3DVF: Same question for heavy elements, like a city shown in the series.

Regarding a city, the amount of data isn’t really an issue in Unreal. The main question we’re trying to answer is how to build it. Several different techniques can be used, and the whole point is to determine which one meets our needs and to take the time to test them.

For the record, the main city of My Dad the Bounty Hunter took nearly 5 minutes to load in Maya and could not be displayed in its entirety. Thanks to Nanite technology and other optimizations, we can open this same city in its entirety in a few seconds in Unreal and navigate at 60 fps in the viewport (without any shader).

3DVF: You then created a short film. The animation was done in Maya, then exported to Unreal via Alembic, and rendered in Unreal. 6 weeks from storyboard to final image, without using UE5’s path tracing.
What was the size of the team? What did you learn?

New project, new challenges. At that point, a month and a half had passed since the start of this adventure, and we wanted to delve deeper into our tests. We decided to create a new one-minute short film to be completed in 6 weeks, from storyboard to final image. This time, the animation would be done in Maya so that we could test the Alembic workflow.

The team consisted of roughly 10 people, but not all of them worked in Unreal. Animation was mainly done in Maya. Modeling was done in Blender and Maya, and texturing in Substance Painter. Grooming was done in Houdini. Surfacing, Lighting, and In-Camera VFX were handled by the same artist in Unreal. We once again forbade the use of Nuke in order to delve deeper into Unreal’s post-processes.

This was the first project where assets, whether modeling, textures, or set dressing, were designed from the outset for Unreal. This allowed us to test new scene-building methods to optimize them further. Our assets were much more modular, and we made better use of instancing.

We were also able to rethink how to organize a production. The ability to work in parallel and with the same data allows for smoother workflows and identifying where to focus efforts. Elements usually only seen during compositing (fog, visual mood, lights) can be set up from the start of production and accessed by all departments. It’s a huge productivity gain that requires revisiting traditional scheduling.

We also faced several issues. For example, the advanced use of the Sequencer showed that we still had things to learn about the architecture of Sub-Layers and their hierarchy system. This caused many override problems that took us time to identify.
The Geometry Cache system via Alembic didn’t function as we were used to in offline rendering, and its use isn’t straightforward. Besides the numerous bugs, the Geometry Cache approach isn’t the most flexible. We will try to avoid it as much as possible.

The technical difficulties of this project, combined with the limited time we had, led us to develop several Blueprint tools to automate our workflows (animation imports, asset construction). Integrating these tools into the engine doesn’t require a mandatory “Pipeline” phase: the artist and/or the TD can create and implement tools to manage data without impacting production.
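
For context, this is the kind of automation such tools cover. A minimal sketch of an Alembic Geometry Cache import driven by Unreal’s Python editor scripting is shown below; the file paths are hypothetical, and Dwarf’s own tools were built as Blueprints rather than this script:

```python
import unreal

def import_geometry_cache(abc_path, destination="/Game/Shots/SH010/Anim"):
    # Configure the Alembic importer to produce a Geometry Cache asset.
    options = unreal.AbcImportSettings()
    options.import_type = unreal.AlembicImportType.GEOMETRY_CACHE

    task = unreal.AssetImportTask()
    task.filename = abc_path
    task.destination_path = destination
    task.options = options
    task.automated = True   # skip the import dialog
    task.save = True        # save the resulting .uasset

    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
    return task.imported_object_paths

# Hypothetical usage: import the animation cache published for a shot.
print(import_geometry_cache("D:/export/SH010_chars.abc"))
```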

Each project allowed us to learn more and understand the workflows we could implement. Moreover, each of them gives us a larger dataset that we can use to test other architectures, designs, optimizations. Reuse is very simple in Unreal.

CBB – Dwarf Animation project created using Unreal Engine (6 weeks)
Credits:
Chief Creative Officer / Producer: Olivier Pinol
Artistic Director: Benjamin Molina
Character Artists: Fanny Georget, Maximilien Legros-Auroy
Environment / Props artist: Xavier Lecerf
Rigging artist: Damien Vie
Hair artist: Benedicte Aldeguerre
CG Sup / Previz: Belisaire Earl
Layout Artist: Jean-Charles Dechatre
Animation artists: Jean-Yves Audouard, Benjamin Molina
Lookdev / Lighting artist: Joachim Guerin
Sound Design: Maxence Pinol
Editing: Thais Churel
CTO: François Tarlier
Unreal technical support: Julien Frachisse, Cedric Paille, Pete Reilly, Maxime Robinot, Damien Vie
Studio Manager: Cecile Merat
CIO: Laurent Doit
IT support: Jeremy Cornille
Human resources: Christopher Houlenga
Accounting: Damien Galles, Carole Prigent

Special Thanks to our friends at Epic Games – UDN

3DVF: Your project around Unreal, from the start, was to work without compositing: how did your compositing team react to this desire? What was their involvement in the project?

For these projects, we didn’t want to go into Nuke to add certain effects that we could do directly in the engine. However, these steps remain necessary: whether it’s up to the lighter or a compositor, what changes is the tool.
The idea was to assess Unreal’s capabilities to define its limits and know when Nuke would be essential.

In conclusion, with these kinds of tools, we see a convergence of certain professions. One of our compositors told us he had stopped lighting because it took too long, but seeing the capabilities of real-time made him want to get back into it. However, we will not prohibit the use of Nuke in our future productions.

3DVF: As an animation studio, you have data exchanges to manage. Why wasn’t USD relevant for your tests, and what guided your choice between Alembic and FBX?

USD is a complex topic, as broad as data management in Unreal. We didn’t want our approach to Unreal to be influenced by our habits from precomputed rendering and USD, so we decided to focus on UE rather than spreading ourselves thin across two technologies.

Regarding its use within Unreal, it all depends on what you mean by USD. If we’re talking about the file format, like Alembic or FBX, when we started these projects, we were on Unreal 5.1.1. USD in this editor didn’t yet support Geometry Cache animation.
On the other hand, if we’re talking about the scene composition system and its overrides, then a real question arises: if we were only working in Unreal, what would USD offer? After using this editor for several weeks, we realized that its file system and override system offer similar possibilities. Beyond exchanging files between DCCs, USD tries to solve problems that video game editors have already partly solved.

That being said, it’s clearly a topic we’ll study for our future productions to streamline data exchange with departments not yet working in Unreal, like the animation department.

3DVF: You chose to pay for an Unreal Developer Network (UDN) license: in other words, you’re paying for support and resources. What’s your feedback on this? Isn’t the fact that you’re not a video game studio an obstacle in your interactions with the Epic team?

We opted for a UDN license to have a direct communication channel with the support team, and not be limited to public forums. The entire team is pleased with this interaction.

Initially, we had communication issues as our UDN contacts were very game-oriented. So, we had slightly different terminologies, but that quickly changed.

The teams involved with the UDN are both responsive and precise in their answers. Despite the fact that our questions were sometimes naive at first, they guided us well and met our needs.

Lately, our interactions with them have become more in-depth. We delve deeper by providing feedback on our experiences and bug reports. Recently, we submitted a dozen changes to the Alembic importer’s code to make it more stable and user-friendly in production. These changes are currently under review with them and will likely be integrated into version 5.4.

3DVF: Rendering times of 5 to 10 seconds obviously change the way you work. How has your workflow evolved compared to your habits? For instance, regarding retakes, dailies? Or do you tend to tweak things even after they have been approved? (French animation studio MIAM! Animation explained to us that since adopting Unity for their animated series, Edmond and Lucy, they created a “fine-tuning” stage, with elements added at the last minute, or even major camera changes)

Being able to light an entire sequence in a single timeline simplifies workflows and reduces continuity errors. We could render the entire sequence every night with the latest data from each department. This allowed us to review the shots in their entirety.
The review process takes on a different dimension, as we’re less hesitant to synchronize all the departments. There’s less risk than with regular renderers. We can review directly in the engine if necessary. It reminds us of the Flame workflow where we could apply retakes directly with the director during the review.

3DVF: With such low rendering times, are you tempted to push the rendering quality, even if it means going back to 15 or 20 seconds per frame?

That is indeed one of the pitfalls we are striving to avoid at all costs. We have developers constantly seeking optimizations in our scenes, using profiling tools.

For now, we’re still euphoric about the extremely low render times. But you quickly get accustomed to this convenience, and you end up forgetting that once, it took 1h30 to render a single frame.
When this new approach becomes the standard for productions, going from 5 seconds to 10 seconds per image doubles the waiting time for a shot, along with the indirect costs that come with it. For instance, for a 1-minute sequence, at 3 seconds per frame, it takes just over an hour to render on a single machine, and the artist can still run it locally. At 5 seconds per frame, it takes 2 hours, but at 10 seconds per frame, it will take about 4 hours. While that is still very low, it quickly becomes evident that a render farm is necessary. That means fewer iterations for an artist, resulting in a loss of working comfort and increased production costs.
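
The figures quoted above are easy to verify with a back-of-the-envelope calculation, assuming a 24 fps project:

```python
# Quick sanity check of the render-time figures quoted above (assuming 24 fps).
FPS = 24
frames = 60 * FPS  # a 1-minute sequence = 1440 frames

for seconds_per_frame in (3, 5, 10):
    hours = frames * seconds_per_frame / 3600
    print(f"{seconds_per_frame} s/frame -> {hours:.1f} h on a single machine")
# 3 s/frame -> 1.2 h, 5 s/frame -> 2.0 h, 10 s/frame -> 4.0 h
```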

We aim to optimize everything as much as possible so that artists maintain full interactivity in their workflow while adhering to our quality standards.
However, for the final render, engine settings are pushed to enhance the image quality. For instance, Unreal will render the same image 16 times to achieve high-quality motion blur and anti-aliasing. So, losing a few fps on a poorly optimized shader can significantly impact the final render time.
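
As an illustration of the kind of final-render setting mentioned here, the sketch below raises the Movie Render Queue temporal sample count through Unreal’s Python API so that each output frame is accumulated 16 times. It assumes Unreal 5.1/5.2 class names and a queue job whose sequence and map are configured elsewhere; the values are only examples, not Dwarf’s actual settings:

```python
import unreal

# Get the editor's Movie Render Queue and add a job (its sequence/map are set elsewhere).
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()
job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)

# Accumulate 16 temporal sub-frame samples per frame for motion blur and anti-aliasing.
config = job.get_configuration()
aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)
aa.temporal_sample_count = 16
aa.spatial_sample_count = 1
aa.override_anti_aliasing = True
aa.anti_aliasing_method = unreal.AntiAliasingMethod.AAM_NONE  # let the accumulation handle AA
```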

3DVF: Does RenderMan still hold advantages? For instance, in terms of pure rendering quality, can Unreal match it?

Lumen in Unreal version 5.2 can’t fully compete with a pathtracer, especially when it comes to previously mentioned topics (SSS, refraction). Since Unreal is a video game engine, it’s more about approximation than precise computation.

However, this isn’t necessarily a drawback; it all depends on your needs. What kind of project you are working on, budget, and desired quality define what’s acceptable. If we were to make Planet of the Apes today, it would seem challenging to envision it in Unreal. But next year? The year after? Things move so swiftly in the real-time world that it’s hard to fathom the engine’s future capabilities.

Even today, we have Unreal’s pathtracer available, and we see it improve with each version: volume systems, the new Substrate shader model, and virtual geometry (Nanite) support were all added to it in the last 6 months. We’ve observed its use by MPC Studio in their short film Brave Creatures, and we’re keen to study it as well.

3DVF: Similarly, with very heavy/complex scenes, are there situations where GPU rendering using Unreal might end up crashing, whereas a traditional engine like RenderMan would work? Have you encountered such issues?

It’s a bit difficult for us to comment on that. All the scenes we’ve produced to date are rendered on older 3D cards (Quadro RTX 4000 with 8 GB of video memory). Some are a bit more challenging to render than others due to memory limitations, leading to artifacts. However, we’ve always found a workaround, and it hasn’t genuinely hindered our work. We believe that with the latest generation 3D cards, these issues shouldn’t be as prevalent because of the significantly higher available memory.

We’re in the process of adapting one of the largest scenes from My Dad The Bounty Hunter and haven’t encountered any major obstacles yet. So, for now, we haven’t had GPU-related problems, but it’ll likely happen someday.

Regarding scene construction, we’re currently studying various approaches. We’re using resources provided by Epic, like the citySample (Matrix demo), Slay Animation, or ElectricDream, to guide our thinking. We’ve derived several concepts, each with its pros and cons. Some offer user benefits, while others are more optimization-focused. By not restricting ourselves to a single general construction solution, we can best harness the engine’s resources.

3DVF: RenderMan has been improved a lot recently, for example with the introduction of hybrid XPU rendering: what are your thoughts on this technology?

Despite our recent research into real-time, we remain a studio that has historically relied on more traditional rendering. So naturally, we’ve noticed the improvements in RenderMan. XPU has been in development at Pixar for years and gets better with each version. All offline renderer developers are integrating this kind of technology to improve workflow in terms of lighting and visual development.

RenderMan 25 – main features

Even today, there are limitations in RenderMan 25 XPU which make it a good tool for visual development, but not really usable for final rendering. To name just a few limiting factors, XPU does not support light linking, light filters, or mesh lights. These elements are important, if not essential, for artistic productions like what we do on My Dad the Bounty Hunter.

RenderMan 25 – RIS rendering (the current default renderer) vs XPU rendering (which will end up replacing RIS)

3DVF: How do you see the future of rendering at Dwarf? A complete switch to UE? A dual UE5/RenderMan pipeline? Something else?

We conducted these tests to determine if some of our animated series could be produced with this type of technology. Our conclusion is yes. It’s still hard to estimate how far we can push a real-time engine and what will be possible in 6 months; we never thought we’d come this far. Depending on the series, quality, and budget, we’ll choose the most appropriate pipeline. So, for now, we want to maintain an offline rendering pipeline for projects that we don’t consider achievable with a real-time engine.

3DVF: And what elements are still missing or can be improved in Unreal for an animation studio? In other words, what do you hope for in future updates? What about animation tools, for instance?

The answer varies depending on the production needs. But the biggest bottleneck we’ve encountered so far is animation. Between the Roger project (our first full UE test) and the CBB project (animation done in Maya), we felt a huge difference in production fluidity. Importing animations via Geometry Cache is a significant blocking point, slowing down the entire production. Nothing is impossible, and it’s no more complicated than a traditional production pipeline. But the day animators can work on the same data as other departments within the engine will be a real game-changer. We’re not far from that.

From the lighters’ perspective, many things also need improvement, and they’re eagerly awaiting them, especially Lumen’s refraction or a stable version of Substrate. From a workflow standpoint, VDB importers and a good USD versioning management system will be essential in the future.

Each minor version of Unreal brings significant changes, forcing us to constantly reevaluate and test new methods. UE 5.1 brought us Strata (renamed Substrate in 5.2), a more modern shading model. UE 5.2 introduced PCG, a powerful procedural scattering system. UE 5.3 will offer SVTs (Sparse Volume Textures), allowing the import of real volumes into Unreal from Houdini. These systems are experimental today but are improving with each version and promise significant advancements in the coming years.

3DVF: What are the most significant impacts from a production standpoint when using Unreal?

Game editors like Unreal Engine have the advantage of being more than just a rendering engine. We often talk about rendering times because the difference is staggering. But what we’ve found even more impactful is the iterative and collaborative aspect. Meaning, even if rendering took a long time, working within these editors would still be beneficial.

In a traditional precomputed workflow, a significant amount of time is spent on tasks other than rendering. If we take RenderMan as an example, several tasks surround it. Before starting rendering, there is a cache export, which takes time, and other file exports (Alembic, RIB, textures, etc.). Then comes the multi-pass rendering, followed by denoising, and then recombination into a single file. Not to mention the export of animation caches, scene creation at each update, and all the pipeline processes to transpose data between departments.

With Unreal, multiple artists can work simultaneously on the same shot without stepping on each other’s toes. They can easily stay updated without waiting hours for scene generation to incorporate their colleague’s work. They can identify a problem early in the production chain and fix it immediately. Immediate visualization eliminates the need for pre-renders to check scene assemblies and cache generation. For example, for a lighter to get the very latest update an animator just published, they only need to launch a Perforce synchronization.

In conclusion, the release of Unreal 5.0 changed workflows with the introduction of various technologies, affecting our production habits.

Talking specifically about Nanite, we can now have scenes containing thousands of assets, each made up of millions of polygons, while still maintaining 60 fps. As a result, the work required of the Environment department has changed. Historically, environment geometry work was driven by a need for simplification: if a polygon and a normal map were enough for a wall, we stuck with that. Now, artists can import the sculpt directly into the engine, significantly enhancing object silhouettes without affecting performance.

Another example is World Partition, which introduces a new approach to scene structuring. This allows us to consider much larger scenes. A 2 km x 2 km or 10 km x 10 km terrain is now feasible, simplifying the construction of cities or vast landscapes. While this adds technical complexity, the productivity gains are obvious.

These elements will enable us to continue meeting the demands of increasingly challenging productions in terms of production time, complexity, and budget.

3DVF: One last thing. You’re working on setting up an Unreal training program based on your experience, which would then be used for new artists entering the studio… But perhaps also offered to other companies? What will be the focus, and what form will the training take?

The primary focus will be to offer professional training for animation production professionals and studios wishing to train their teams. We realized that using Unreal in our pipeline completely transformed our way of working and tracking production. So there will, of course, be training on the tool itself but in a pure production context. It will be a short course, open to all industry players.

This model allows us to address the skills needed in the face of increasing demand from producers while optimizing budgets and production times.

3DVF: Dwarf Animation, thanks a lot for taking the time to answer our questions!

We will of course keep following upcoming Dwarf Animation shows and projects. Don’t forget to follow 3DVF on YouTube, X/Twitter, Instagram, LinkedIn, and Facebook, so that you don’t miss our upcoming content!

You can also read the Dwarf Animation page on 3DVF, as well as their official website.

