Back in 2020, the Dada! Animation team officially announced the creation of their company during the Annecy Festival (the company was still named Hue Dada! at the time).
Three years later, also at Annecy, we sat down with Quentin Auger, co-founder and head of innovation at Dada! Animation, to discuss their latest projects.
We asked him about the use of AI and real-time at Dada! Animation. Topics included style transfer, Unreal, volumetric capture, immersive technologies, and a broader discussion about the challenges raised by generative AI.
3DVF: Hello Quentin! Since our last interview with Dada! Animation, the company got a new name. Why did you change it?
Quentin Auger: The original name was inherited; we had joined an existing structure. Last year, just before the Annecy Festival, we conducted an assessment, a sort of audit of our image. That’s when we decided to change our name and graphic identity.
The name change was aimed at shedding a somewhat childish image and a name that was too French, not understood by foreigners (they couldn’t pronounce it, and the meaning wasn’t clear). Instead, we wanted to focus on “Dada,” paying tribute to the eponymous artistic movement and the values of continuous creative movement it represents.
3DVF: We’re conducting this interview right after the “Real-Time & AI: A Game Changer for the Animation Industry?” conference at the Annecy Festival. You were part of the panel.
Among other things, you talked about They were millions of them, a documentary project for Seppia / Indi Film / Arte.
Dada! Animation used EbSynth, which we discussed on 3DVF when it was launched. At the time, in 2020, it was quite a surprising tool!
Yes, it’s a peculiar piece of software, pretty much one of a kind. At the time, it caught people’s attention but then faded from the limelight. We brought it back because it fits our needs for style transfer, something often done through rotoscoping (filming a sequence and then redrawing it in a different style).
We wondered if EbSynth could be helpful. This kind of research and technological adaptation to various animation challenges is what our “Digital Lab” loves to tackle. It plays a central role in the development of the new projects Dada takes on, especially since we advocate for the “decompartmentalization” of animation, applying it to new use cases like documentaries and XR.
Interestingly, since then, with the rise of Generative AI and tools such as Stable Diffusion, as well as the surrounding ecosystem (ControlNet, etc.), many people are doing style transfer. Some use “warping” AI like EbSynth (which isn’t generative) because it tends to stabilize the results. That’s its advantage, but it also tends to leave some elements that should disappear, distorting the images rather than transforming them.
3DVF: Let’s briefly recall how it works: EbSynth takes a video sequence as input, along with keyframes where the desired style has been applied, and then propagates this style throughout the entire sequence.
Exactly, EbSynth attempts to stabilize the style of the keyframes across the video frames. The result is quite different from many generative AIs, which often generate nightmarish, feverish videos with incoherent transitions between frames.
In our case, we didn’t use Stable Diffusion, which hallucinates an image for each frame. Instead, we used the touch of a real artist for the keyframes, and the input video is 3D animation generated via Unreal Engine.
We realized that for EbSynth to work well, it’s better to have well-defined areas and to disable effects and atmospherics. It’s even preferable to disable shadows so the artist can draw them as they wish on the keyframes. There’s no need for realistic colors; what matters most is that the areas are well-defined. In the end, you don’t need to use the full potential of Unreal Engine.
So, in essence, we intentionally create “ugly” visuals in Unreal because this is more suitable for our workflow! [laughs]
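[Editor’s note: to illustrate the idea of “warping” rather than generating, here is a minimal Python/OpenCV sketch that propagates a hand-painted keyframe onto a nearby frame using dense optical flow. This is not EbSynth’s actual patch-based algorithm, and the file names are hypothetical; it only shows the general principle of pushing a styled keyframe along the motion of a rendered sequence.]

```python
# Toy example of non-generative style propagation: warp a hand-styled keyframe
# onto another frame of the sequence using dense optical flow (OpenCV).
# This is NOT EbSynth's patch-based synthesis, only the general "warping" idea.
import cv2
import numpy as np

def propagate_style(key_gray, key_styled, target_gray):
    """Warp the styled keyframe so it follows the motion of the target frame."""
    # Flow from the target frame to the keyframe: for each target pixel,
    # where does it come from in the keyframe?
    flow = cv2.calcOpticalFlowFarneback(
        target_gray, key_gray, None, 0.5, 3, 21, 3, 5, 1.1, 0)
    h, w = target_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    # Pull the styled pixels along the flow field.
    return cv2.remap(key_styled, map_x, map_y, cv2.INTER_LINEAR)

# Hypothetical file names: a flat, "ugly" Unreal render and the artist's repaint.
key_render = cv2.imread("render_0000.png")           # keyframe as rendered
key_styled = cv2.imread("keyframe_0000_styled.png")  # same frame, repainted
target = cv2.imread("render_0012.png")               # a later frame

styled = propagate_style(
    cv2.cvtColor(key_render, cv2.COLOR_BGR2GRAY),
    key_styled,
    cv2.cvtColor(target, cv2.COLOR_BGR2GRAY))
cv2.imwrite("styled_0012.png", styled)
```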
The project is currently at the pilot stage, and a significant portion of the funds needed has been raised, in part through French financial support.
3DVF: Let’s also discuss VRtist, a research project we discovered during Laval Virtual a year and a half ago at the Irisa/INRIA booth. Dada! Animation was involved.
In a nutshell, VRtist is a VR-based keyframe animation tool created by scientists to study user experience issues in this context.
VRtist was part of a larger project (the “Invictus” project, for which we created the trailer) that involves both laboratories and industrial partners, such as a German university and an Irish company. This project also involves volumetric capture: a limitation of this technique, which is a bit like video photogrammetry, is that it’s challenging to edit it after the fact.
3DVF: Yes, we had the opportunity to interview 4DViews, an expert in the field, which offers both services and tools for studios. Their 4Dfx tool can be used to edit volumetric video, but it’s quite limited: you can add deformations, change the gaze of a character, or add transitions between sequences, but there is no advanced animation.
One of the project’s challenges was precisely to edit volumetric video after the fact, which meant we had to rig and edit these elements, for example in VR.
We joined the project quite late. Ubisoft Motion Pictures was an industrial partner for VRtist, bringing their expertise and specific use cases. However, this branch of Ubisoft closed down during the project, and the consortium of researchers tried to find other animation experts interested in taking over. They contacted us, especially since we were already collaborating with IRISA in Rennes on other projects. We joined as service providers, since it was too late to modify the consortium’s composition once the project was underway.
They were only planning to do tests on bipedal animation, human characters, which suited Ubisoft’s needs. But we have a non-realistic approach, with small monsters, diverse characters, cartoon projects, etc. So, we had many other challenges, particularly related to rigging, an issue that had not been considered at all in VRtist. How do you rig a joint chain that is not an arm or a leg, for example, to manage a tentacle?
So we listed our own challenges, from the simplest to the most complex (moving an object in space, deforming it, creating one or more joint chains, rigging various things…).
Of course, they couldn’t change their roadmap for VRtist since we were already in the last year of the project and they had to start testing and reporting. So we decided to do the rigging using Maya, and then we used a tool to transfer the Maya rig (with a high-level approach, including controllers, sliders for animation, associated with joints and blend shapes) to Unity. The controllers appeared in Unity and were also valid there. The IRISA team working on VRtist could then edit the movements of these controllers in addition to those of the joints.
This allowed them to focus on their original goal while we took charge of rigging. We provided our use case by creating animations for several of our characters.
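[Editor’s note: the interview does not detail the transfer tool itself, but the general idea of carrying a rig’s high-level controls from Maya to a game engine can be sketched as follows. This Python snippet, run inside Maya, dumps controllers, their driven joints and their keyable attributes to a JSON file that a hypothetical Unity-side importer could read; the “*_ctrl” naming convention is an assumption, not Dada! Animation’s actual pipeline.]

```python
# Minimal sketch (not the actual transfer tool): dump a Maya rig's high-level
# controls to JSON so that a hypothetical Unity-side importer can rebuild them.
# Assumes controllers follow a "*_ctrl" naming convention.
import json
import maya.cmds as cmds

rig_description = {"controllers": [], "blend_shapes": []}

for ctrl in cmds.ls("*_ctrl", type="transform") or []:
    rig_description["controllers"].append({
        "name": ctrl,
        # Joints connected to this controller (directly or via constraints).
        "driven_joints": cmds.listConnections(ctrl, type="joint") or [],
        # Keyable attributes, e.g. custom sliders wired to blend shapes.
        "attributes": cmds.listAttr(ctrl, keyable=True) or [],
    })

# Blend shape nodes, so the engine side knows which morph targets to expect.
rig_description["blend_shapes"] = cmds.ls(type="blendShape") or []

with open("rig_description.json", "w") as f:
    json.dump(rig_description, f, indent=2)
```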
3DVF: And what’s the takeaway on your end?
They were… tools created by researchers new to animation! [laughs]
I found it fascinating, with a lot of goodwill from both sides and a lot of intellectual curiosity. We enjoyed it so much that we will continue to work together.
We taught them the basics of animation, such as applying the 12 principles of animation, etc. We introduced them to our head animator, who gave them some lessons and tested the tools. At the same time, the same tools were tested on students from Creative Seeds, a digital arts school based in Rennes, France. This way, the tools were tested by animators with varying levels of experience, from beginners to professionals.
The results show that these tools are very practical and suitable for people who are just starting in animation and want to do simple things. For professionals who need very fine control, there are exciting and fascinating possibilities. Being able to move elements by hand in space is fantastic. However, animation is not limited to that; it’s not just about curves but also about animating facial expressions, constraints, timing, etc.
There’s still work to be done, but it’s promising and a lot of fun.
The study has been completed on their end, and they delivered their report a few months ago. The team even received commendations from the jury for the entire project (which goes way beyond VRtist). The commission was apparently impressed by the harmony and collaboration within the consortium.
This inspired us to continue working with IRISA on tools that use AI to assist with keyframe animation, whether in VR or not, and other elements that come into play, such as sound and gestures.
Like VRtist, the result will be a prototype, and the project will be open-source. We are also planning to release our own open-source projects. These will be pipeline-related.
3DVF: During the conference, you briefly mentioned another Dada! Animation project, “Between Two Worlds of Voodoo”. It’s an upcoming documentary that combines LIDAR, visual effects, 360-degree videos, and real-time technology. Could you tell us more about it?
It’s another project by our Digital Lab, again for Seppia, which I mentioned earlier when talking about EbSynth. We really like working with Seppia, and the feeling is mutual.
They are working on a documentary about the voodoo religion. To raise funds and create the pilot, they needed a proof of concept to demonstrate that their vision could be turned into images. It will be an immersive documentary in 360-degree video, with some parts filmed on location in Benin and Haiti.
When they went to Benin, they shot immersive videos of voodoo priests and people in the streets. They also used our iPhone Pro and its LIDAR to capture volumetric video at the same time as the 360-degree capture. So, from the same shooting session, we have both immersive video and LIDAR data.
As voodoo blends the familiar world we know with a parallel world of spirits, Seppia wanted to show this second world, to bring it to life using LIDAR to add volumetric effects.
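[Editor’s note: as an illustration of what the LIDAR capture provides, here is a minimal NumPy sketch that back-projects a single depth frame into a 3D point cloud, the kind of raw material volumetric effects can be built from. The file names and camera intrinsics are placeholder assumptions, not values from the actual shoot.]

```python
# Illustrative sketch only: turn one LIDAR depth frame into a 3D point cloud.
# Depth file and pinhole intrinsics below are placeholders, not real shoot data.
import numpy as np

depth = np.load("depth_frame_000.npy")  # depth map in meters, shape (H, W)
H, W = depth.shape

# Placeholder intrinsics: focal lengths and principal point, in pixels.
fx, fy = 600.0, 600.0
cx, cy = W / 2.0, H / 2.0

# Back-project every pixel (u, v, depth) into camera-space XYZ coordinates.
u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Drop invalid samples (zero depth) and save for import into the 3D scene.
points = points[points[:, 2] > 0]
np.save("pointcloud_000.npy", points)
```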
They wanted to see if Unity could be used to create and blend these effects into the 360-degree video, then generate a new 360-degree video, and make sure that the compositing would work visually.
So, we handled the technical part, and it works beautifully! It’s a very fun project for us. Seppia is currently raising funds to create the pilot.
3DVF: During the conference at the Annecy Festival, you also talked about the ethical aspect of AI and its impact on the quality of tools. On one hand, some solutions have amassed colossal amounts of data without asking artists and therefore offer impressive results. On the other hand, there are tools from NVIDIA and Adobe that aim to be more ethical, which means they have less data to rely on.
Moreover, they may not have launched AI tools as early as other companies.
It’s a complex issue, especially since new information comes to light every day. For example, Adobe highlights the ethical aspect, probably in part to avoid lawsuits. At Epic, artists are forbidden from using AI, and they must even prove that they are not using it. Artists working on the other side of the world must show that they have made the concepts themselves, for instance. The goal here is also to avoid legal issues.
And actually, Adobe says that it is possible to create AI without the fear of lawsuits because their AI in Photoshop was trained on Adobe Stock images. In other words, artists presumably granted the necessary rights to Adobe.
The problem is that Adobe Stock contributors have probably used their own works, and some of them may have been created with AI. So, the issue of copyright comes back as a Trojan horse!
3DVF: Another issue is that there have been instances of people successfully uploading pictures to stock websites without owning any rights to them. It’s similar to the recent case of a database of 3D models shared by Sketchfab users under a Creative Commons license, although some of these models are definitely not theirs (for example, 3D assets ripped from Nintendo games).
Yes, there are stolen artworks, but I think Adobe Stock has tools to avoid such issues. However, with images created using AI, it’s more challenging to detect.
And it’s a difficult topic because technically, you cannot copyright a style. You can only copyright a piece of art. The problem with AI is that it can generate images imitating an artist’s style.
We should highlight that for their generative AI, Adobe prevents citing artists’ names in a prompt to avoid such generation.
However, a while ago, an artist managed to use Adobe’s AI within Firefly (the same one available in the beta version of Photoshop) to create images that are very, very similar to his style. He started discussing it on social media.
The fundamental problem is to curate Adobe Stock well when training the AI, so as not to include images already generated by AI.
3DVF: Yes, once again this is similar to the case of assets scraped from Sketchfab. Since there was no human filter, they ended up with many models presented as royalty-free that are not.
A possible solution is to let studios generate their own data to train AI. This is the approach taken by Golaem in a project they presented at RADI-RAF at the end of last year. [Editor’s note: RADI-RAF is a conference which takes place every year in the South of France, gathering studios and schools from the animation industry]
Yes, that’s what Blue Spirit talked about during the conference. They adopted this approach internally after discussing it with their lawyers. The same goes for Epic.
And we share the same philosophy: if there’s a doubt, it smells bad!
People also need to use generative AI in an ethical way, refraining from using prompts like “in the style of x.”
It’s complicated because it’s the opposite of what we are used to doing as artists in the industry. In a moodboard, we include works by Klimt, Spielberg, etc.
3DVF: This is reminiscent of the way filmmakers use a temp soundtrack, which can strongly influence the end result.
Let me take it a step further; we’re an industry of “forgers by necessity.” When an art director comes onto a project and leaves their mark, sometimes inspired by others, they are surrounded by a team of studio artists who must respect their style, and those artists are literally asked to copy it, since the art director doesn’t create the entire project alone.
In a way, it resembles painting workshops, where a master like Rembrandt was surrounded by other artists, “official forgers,” who were allowed to reproduce his style.
This raises the question of why in the animation industry, we don’t have any issue with artists adopting an artistic direction on a studio project and thus copying that style, but with AI, it’s forbidden. It’s a touchy and complex subject, with many legal gray areas at the moment.
3DVF: We see these debates in the artistic world, while for many years, everyone has been using Google Translate, a tool trained on translated texts, without asking for translators’ consent…
The same goes for code. I’m thinking here of tools like GitHub Copilot, which helps developers. You can train it on your own code, but also on the code available on GitHub. Yet, we haven’t seen such an outcry from developers, and to my knowledge, there haven’t been any requests to be excluded from this training data. (Which is presumably very complicated after the fact!)
We have mainly seen that outcry when it comes to images and to voice-over actors and actresses, and barely at all in music.
3DVF: This is probably due to a mix of law (not everything can be registered or patented) and culture, depending on the industry. And the fear some people have that AI might eliminate their job. We have indeed seen studios explicitly using AI for this purpose, and they don’t hide it.
Yes, it has already started: designers “freed” by their companies because of AI. We experienced the same movement with the arrival of digital technology. Every technology that automates things destroys jobs and disrupts organizations, often around tasks that are not necessarily very creative or that are of low quality.
And it’s paradoxical here; we are in the creative sector, hiring artists for their expertise, yet at the same time, there’s an abundance of mass-produced imagery in projects where the artistic, creative aspect is ultimately secondary.
And with the pressure on prices, these roles (not the core professions themselves) are at risk of being replaced.
Paradoxically, this makes the expertise of artists who work on qualitative projects even more valuable.
I’m also thinking here of the arrival of photography, which killed the profession of portrait painters who did non-artistic work. In contrast, there are still painters focusing on the artistic aspect, and we note that they often integrate photography into their process.
The same happened with synthesizers, which eliminated some instrumentalists, like the violinists who took on small projects. But at the same time, orchestras are even more valuable; they play music at another level of quality.
Each time, automation disrupts jobs (and creates many new ones!)
3DVF: Are we heading towards a contraction of the industry? Alongside an explosion of content, like how digital cameras and smartphones have exponentially increased the number of photos produced but also caused a decline in the number of photographers?
It’s true that now anyone can film with their phone, and filmmakers become even more valuable. We can also recall that the arrival of 3D animation initially led to the closure of 2D animation studios in the USA, while it remained prevalent in Europe and Japan. But eventually, there was a comeback, even at Disney!
3DVF: At Annecy, Disney screened a short film made for the studio’s 100th anniversary, with a mix of live-action shots and… 2D animation!
And we see that Arcane and Spider-Verse draw inspiration from 2D animation. Paradoxically, after causing a decline in 2D, new techniques lead to its resurgence and highlight its significance.
3DVF: So, in your opinion, are we heading towards an industry in which high-quality studios wouldn’t necessarily have much to fear, but studios producing more low-cost content would face a crisis?
Without legal barriers, yes. Because let’s not fool ourselves; without legislation, ethical barriers won’t hold for long.
I should note that I may have a bias on AI subjects since I was trained in an engineering university. We had courses on the philosophy of techniques, and one of our teachers was Bernard Stiegler, a philosopher who was also the deputy director-general of INA and head of the innovation department [Editor’s note: INA is a French organization tasked, among other things, with storing all French radio and television audiovisual archives]. He noted the difference between work and employment. He said that employment means employing people to assist machines, and the term “employment” was invented during the industrial revolution. Work, for him, is a task in which people improve and develop skills.
The toxicity of work, the concept of “bullshit jobs,” therefore comes from the fact that it mostly involves employment, not work.
And I mention this because people love to see the work of others. When I go to see Spider-Verse at the cinema, I’m witnessing the work of people. But when I see an advertisement, I honestly don’t care much about the job of the advertisers. I see that people were employed to produce an ad.
3DVF: This kind of reflection creates paradoxical reactions among young artists we have met in digital arts schools. Some fear they won’t find a job, while others wish to explore these new tools.
I see the same thing, whether in recruiting or during the conferences I give at schools. Many of them are depressed after spending five years learning skills, spending a lot of money, and seeing that AI gives the same power to anyone in a few seconds.
Or at least, it gives the impression of providing that power: in practice, it’s only “prompt engineering,” the ability to find the right prompts (this might be a new job?). The core of their job, learning how to make and defend the choices they make for each pixel of each frame, remains their valuable skill, regardless of the technology with which they practice it!
3DVF: Some people using AI call themselves AI Artists… Any thoughts on this?
In fact, one does not self-designate as an artist; it comes from others’ perspective. A person “employed,” to go back to the reflection above, to produce images created by AI, is not an artist. The artistic aspect lies in the eye of the viewer. At least, that’s how I see it.
More broadly, when discussing this topic with experienced artist friends, they feel like they are reliving what happened about thirty years ago with the advent of computer-generated imagery (at that time, we used the term “image de synthèse”, a French term meaning “synthesized pictures” rather than “CG” or “3D animation”). There were many algorithms to choose from, many tools, things that didn’t work very well, to the point that animators didn’t feel too threatened until Toy Story was released and changed the game.
But we felt that new jobs were being invented; for instance, I became an animator, a job whose name I knew, but then I became a setup artist, pipeline developer, TD… All these new jobs were invented as needs arose!
And AI indeed reminds me a lot of that period.
3DVF: Quentin Auger, thank you very much for your answers and your point of view! These topics provoke heated debates and generate passionate discussions, so it’s always interesting to have new perspectives. We will, of course, continue to closely follow this field on 3DVF, as well as its implications. And we will of course also have the opportunity to talk about Dada! Animation in the future!
For more information
- Dada! Animation
- Our article on the creative industry in Rennes, France (in French). IRISA and Creative Seeds are part of it.
- Invictus.