
Dying Light 2 Stay Human: how Techland revamped combat animations & gore effects – interview


A few months ago, we interviewed game developer Techland about the art of Dying Light 2 Stay Human. As a reminder, this action role-playing survival horror game features an open world and hordes of infected.
Since our last article, the “Gut Feeling” update has improved the combat animations and physics, as well as the gore effects. We took this opportunity to get back in touch with Techland and interview them about their animation pipeline, their use of motion capture, but also… how to dismember an infected, from a technical point of view!

Dawid Lubryka, Animation Director, and Adam Michałowski, Senior Game Programmer, answered our questions.

3DVF: Dying Light 2 Stay Human keeps getting new updates. The latest one, nicknamed “Gut Feeling”, upgrades the combat animations & physics, among other things. What made you decide to overhaul this part of the game?

Dawid Lubryka: Quality and bringing more detail into combat animations and physics have always been crucial for us. The “Gut Feeling” patch marked a significant milestone in our efforts. Even before that, we were already incorporating new animations and refining the existing ones in combat.
However, we decided to shift our focus towards combat mechanics and physicality, as this was an area we recognized as needing refinement. Notably, this was the aspect that our community was actively discussing and providing feedback on as well.

3DVF: Could you tell us about your overall animation pipeline?

Dawid Lubryka: Our pipeline has changed since the release date. We now have a strike team dedicated to identifying and improving gameplay and animation issues, based on both our upgrade plans for the game and community expectations. Depending on whether the animations are first-person (avatar animations) or third-person (enemies), our approach varies slightly. We generally use hand animation for first-person animations. For enemy animations, we heavily rely on motion capture recordings, working in cooperation with actors and stunt performers. In both cases, we have multiple stages of completion:

  1. Prototype stage – animations are prototyped and implemented in the most basic way to test whether the ideas work. This is done very quickly, so we either use very rough blocking for animations or motion edit existing animations to match our needs.
  2. Alpha stage – after making sure the idea works, we gather feedback and improve the quality and detail of movement. For enemies, at this stage, we usually organize motion capture recordings and cleanup, and motion edit the results.
  3. Polishing stage – the last stage is to make the animations more refined – this involves correcting poses to match with the entire game, refining arcs, and adding a “punch” to animations.

Animators on the strike team work closely with gameplay programmers and QA to implement new animations where needed. After that, we spend time playtesting with the entire strike team to ensure we’re on the right track. Usually, achieving the best results takes multiple iterations. Creating a simple animation for a given feature can take a few hours to reach good results, but perfecting it requires repeating that step several times and incorporating feedback from all parties involved.

Dying Light 2 Stay Human - Techland

3DVF: You added new ways to slice, cut, dismember limbs. How did you approach these animations at the concept and then animation stage? And how did you decide what to keep, add, revamp?

Dawid Lubryka: Our animation and gameplay systems are very robust, allowing us to separate most of the artwork on body destruction models and animations. The foundations of the systems were in place even for the release back in 2022, but we knew that many simplifications had been made and corners cut. Our main goal was to make Dying Light 2 Stay Human more brutal and realistic by better linking the players’ actions to the enemies’ reactions.
The system for animation reactions is very deep and detailed. In many cases, the physics engine – ragdoll mechanics – provides good results, improving emergent gameplay. It combines the physics of the enemies with the world mechanics, resulting in a great sandbox experience. However, not all hit reactions result in a ragdoll effect. In such cases, we need to rely on animations… and in the case of hit reactions, a lot of animations. To put this in perspective, imagine this example for one of the most common enemies in the game – the Biter.

reaction type × weapon type × weapon direction × body part × hit phase (enemy stamina) × player direction relative to the enemy

That calculation easily generates around 1000 animations, not even including the variations for the most common cases. To make this process feasible, we needed to make a few compromises. For example, most dismemberments can happen on any hit reaction, during ragdoll, or can play their own unique dismemberment animation (e.g. a head cut reaction) – that saved us a lot of animations but also helped with emergent gameplay.
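To see how quickly that product explodes, here is a minimal sketch. The per-dimension counts below are purely illustrative – Techland has not published the exact numbers, only that the result is "around 1000 animations":

```python
from math import prod

# Hypothetical counts per dimension; the article only gives the ~1000 total.
dimensions = {
    "reaction_type": 4,
    "weapon_type": 3,
    "weapon_direction": 4,
    "body_part": 5,
    "hit_phase": 2,         # enemy stamina state
    "player_direction": 2,  # relative to the enemy
}

# Multiply every dimension together to get the raw animation count.
total = prod(dimensions.values())
print(total)  # 960 with these illustrative counts -- close to the cited ~1000
```

Adding just one more weapon direction or body part multiplies the whole budget again, which is why collapsing dimensions (e.g. reusing ragdoll for most dismemberments) saves so many animations.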

3DVF: From a technical standpoint, how do you handle dismemberments?

Adam Michałowski: To support the dismemberment of many different enemies, we have separate hierarchical cutting zones defined for each enemy archetype. From a technical standpoint, when cutting a limb, the cut part on the main body is hidden using shaders, and the surface of the cut is covered with a matching patch mesh. Additionally, the mesh of the cut-off part is created along with appropriate visual and sound effects. A similar solution is used when creating holes in the body or for head destruction.
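The flow Adam describes can be sketched in a few lines. All class and method names here are hypothetical – Techland's engine internals are not public – but the three steps (hide the zone on the main body, patch the cut surface, spawn the severed mesh) follow his description, with hierarchical zones so that cutting a parent takes its children with it:

```python
from dataclasses import dataclass, field

@dataclass
class CutZone:
    """A hierarchical cutting zone, e.g. a forearm that contains a hand."""
    name: str
    children: list = field(default_factory=list)

@dataclass
class Enemy:
    archetype: str
    hidden_zones: set = field(default_factory=set)
    spawned_parts: list = field(default_factory=list)

def dismember(enemy: Enemy, zone: CutZone) -> None:
    # 1. Hide the cut part on the main body (in-engine: a shader mask).
    enemy.hidden_zones.add(zone.name)
    # 2. Cover the cut surface with a matching patch mesh.
    enemy.spawned_parts.append(f"patch_mesh:{zone.name}")
    # 3. Spawn the severed part as its own mesh (plus VFX/SFX in-engine).
    enemy.spawned_parts.append(f"severed_mesh:{zone.name}")
    # Child zones go with the parent: cutting the forearm takes the hand too.
    for child in zone.children:
        dismember(enemy, child)

hand = CutZone("left_hand")
forearm = CutZone("left_forearm", children=[hand])
biter = Enemy("biter")
dismember(biter, forearm)
print(biter.hidden_zones)  # contains 'left_forearm' and 'left_hand'
```

Defining the zones per archetype, as in the interview, means the same cutting code can serve every enemy type; only the zone hierarchy and the patch meshes change.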

Dismemberment - Techland

3DVF: How did you know how far to push the gore aspect of these animations? Did you also have to backpedal sometimes, when an animation was deemed too gory?

Dawid Lubryka: Well, we did have a few moments when we went too far, in our opinion, and had to backpedal. It wasn’t so much that it felt too gory, but rather because it felt inconsistent with the world we were depicting. One example of this was our back-and-forth regarding the strength of hit reactions and ragdolls. On the one hand, we wanted to make these moves impactful, delivering a satisfying punch. However, we reached a point where each attack felt excessively powerful, which didn’t align with the variety of contexts we needed to cover in the game. With a wide array of weapons, regular attacks, power attacks, cool skills, like dropkick, and the need to leave room for future events, we realized the importance of finding the right balance instead of going to extremes.

Dying Light 2 Stay Human - Techland

3DVF: What were the most challenging animations/elements you worked on for this new update, from an artistic/technical point of view? And how did you manage to overcome these challenges?

Dawid Lubryka: Dying Light 2 Stay Human is an expansive open-world game where various player mechanics, AI systems, and the open world itself intersect and overlap. As a result, modifying one element often triggers a cascade of changes throughout interconnected components and dependencies. Altering a single aspect can have far-reaching effects and necessitates careful consideration and adjustments to maintain a cohesive experience. Working on changes such as those for the new update reminded me of untangling knots, and because the game is out, we need to be careful to improve the experience – not to break it.

Dying Light 2 Stay Human - Techland

3DVF: AI is a hot topic at the moment, including when it comes to real-time animation. For example, Unity showcased some AI & Physics assisted character pose authoring tech last year at SIGGRAPH, and animation tools such as Cascadeur have begun relying on AI. What is Techland’s take on these new technologies?

Dawid Lubryka: We’re looking closely at the state of the industry and all the new technologies. Personally, I think that AI will revolutionize the way we work at some point in the future, but as of now, in the field of animation, I would call it more of an evolution than a revolution. All in all, it’s just like when motion capture technology brought a huge change to the way we work many years ago; in my opinion, it is just a tool in the hands of animators and game developers – and we are definitely constantly looking for new ways to improve our toolset, speed up our workflow and, most importantly, improve the quality of our games.

Below: Cascadeur demo, and AI & physics-assisted character posing by Unity


