animation-driven vs physics character controllers: has anyone actually gone back to physics after switching?

59 views 11 replies

Been rebuilding my third-person character controller from scratch, not for the first time or even the second, and this time I committed to fully animation-driven movement. No RigidBody3D, just CharacterBody3D with move_and_slide() and manually computed velocity every frame.

It genuinely handles better. My last version used an actual rigidbody and I spent more time fighting the physics engine than building game feel. Overshooting on slopes, weird rotational torque on collision, linear damping that never quite worked. Adding a wall jump meant counteracting physics forces I hadn't asked for in the first place.

Going animation-driven flipped everything. Acceleration curves are explicit. Air control is a tunable float, not an emergent property of forces I don't fully control. Landing squash is just a tween on local scale. The whole thing is predictable, which turns out to matter a lot more than I expected.

Now I'm adding a water swimming state and I'm starting to second-guess some decisions. Buoyancy without a physics body is surprisingly awkward. I'm computing a target velocity and lerping toward it based on submersion depth, with extra damping applied on all axes. It sort of works, but the edge cases keep multiplying and it's feeling increasingly like a house of cards.
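For concreteness, the swim-state update I'm describing is roughly this (a minimal Python sketch of the per-tick logic, not my actual GDScript; all names and constants are illustrative):

```python
import math

def swim_velocity(velocity, target, submersion, accel, damping, dt):
    """One tick of the swim-state velocity update.

    submersion: 0.0 (fully out of water) .. 1.0 (fully submerged).
    Deeper submersion pulls harder toward the swim target velocity.
    """
    # Framerate-independent lerp factor toward the target.
    t = 1.0 - math.exp(-accel * submersion * dt)
    vx = velocity[0] + (target[0] - velocity[0]) * t
    vy = velocity[1] + (target[1] - velocity[1]) * t
    vz = velocity[2] + (target[2] - velocity[2]) * t
    # Uniform damping on all axes so residual momentum bleeds off in water.
    d = math.exp(-damping * dt)
    return (vx * d, vy * d, vz * d)
```

The edge cases come from everything around this: entering at speed, partial submersion near the surface, and exiting with leftover upward velocity.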

I'm wondering if the right move is a proper hybrid: animation-driven for ground and air, hand off to actual physics for water, ragdoll, and vehicles, then reclaim control on exit. But managing velocity continuity across the handoff seems like it could get messy fast, especially if the player transitions mid-movement.

Has anyone shipped something with a hybrid control mode like this? How do you keep the handoff clean? Or do you just fake everything and accept that your action game's water physics is going to be, at best, vibes-based?

Replying to ObsidianCrow: That's exactly what pushed me toward keeping a thin velocity accumulator even in...

yeah ended up in basically the same place. also tracking angular velocity separately for turning, felt wrong to snap direction changes with no carry-through at all. the thing i had to figure out was ownership: does the state machine set velocity directly, or write a target and something else integrates toward it? went with the second. state machine declares a target velocity, a separate step lerps toward it each tick with configurable acceleration per-state. keeps the animation logic from needing to know anything about the physics side and makes tuning way easier when you want different states to feel snappier or floatier.
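the ownership split described above looks roughly like this as a python sketch (names and tuning values are made up for illustration, not from my actual project):

```python
import math
from dataclasses import dataclass

@dataclass
class StateParams:
    accel: float  # per-state acceleration toward the target (1/s)

# Hypothetical per-state tuning table; higher accel feels snappier.
PARAMS = {"run": StateParams(accel=12.0), "swim": StateParams(accel=3.0)}

class MoveIntegrator:
    """Owns the authoritative velocity. States only declare a target."""
    def __init__(self):
        self.velocity = [0.0, 0.0, 0.0]
        self.target = [0.0, 0.0, 0.0]
        self.state = "run"

    def declare(self, state, target):
        # Called by the state machine: no direct velocity writes allowed.
        self.state = state
        self.target = list(target)

    def tick(self, dt):
        # Separate integration step lerps toward the declared target
        # using that state's acceleration, framerate-independently.
        t = 1.0 - math.exp(-PARAMS[self.state].accel * dt)
        for i in range(3):
            self.velocity[i] += (self.target[i] - self.velocity[i]) * t
        return tuple(self.velocity)
```

the nice side effect: switching states mid-movement just changes the target and the accel, so the old velocity carries through automatically.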

Replying to ChronoPike: One angle I haven't seen mentioned here: animation-driven controllers create a h...

this bit us hard last year. level designer added a ramp sitting right between two of our blend thresholds and the character just walked through it slightly tilted. animator had to cut a new transition, level designer adjusted the ramp, then we realized the camera rig was also reacting to root bone pitch and it cascaded from there. probably three days of back-and-forth that started with one slightly-off slope angle.

still think animation-driven is worth it overall, but this is the real cost. it's not just 'the teams need to communicate more' — you end up with hard constraints on level geometry that nobody documents until someone violates one.


the thing that finally made animation-driven click for me was just accepting that slopes and stairs need special-case code no matter how principled your general solution is. spent way too long trying to build one unified system that handled everything elegantly. eventually just said fine, slope angle above a threshold triggers a separate lean blend tree, stairs get a step-up probe with a fixed height limit, and both of those are explicitly flagged states. works great. sometimes the boring answer is correct.
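the step-up probe boils down to something this small (python sketch; the 0.3 m limit and all names are illustrative, the real thing reads the probe from a raycast):

```python
def try_step_up(hit_height, ground_y, max_step=0.3):
    """Decide whether a forward obstruction can be stepped onto.

    hit_height: world-space Y of the obstruction's top surface,
    as reported by a short forward-and-down probe ray.
    Returns the new ground height if the step is allowed, else None.
    """
    rise = hit_height - ground_y
    if 0.0 < rise <= max_step:
        return hit_height  # snap the character up onto the step
    return None  # too tall (a wall) or not a rise; let collision handle it
```

explicitly flagging "stepping" as its own state when this returns a height is what keeps it from fighting the slope blend.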

The question I never see asked in animation-driven vs physics debates is what happens at the seams between states. Physics controllers handle transitions implicitly. Momentum carries through, collisions just resolve, you don't have to design any of it explicitly. Animation-driven systems push all of that onto you.

The jump-to-fall-to-land transition? You're writing that. Walking into a wall at a diagonal angle? Also you. It compounds fast if your movement system is open-ended. Animation-driven works really well when you can enumerate all your states ahead of time and design transitions between each combination deliberately. The moment you need emergent movement behavior (slopes, variable-height ledges, physics objects in the path), the manual overhead starts to feel expensive in a hurry.

Replying to CrystalDrift: The question I never see asked in animation-driven vs physics debates is what ha...

That's exactly what pushed me toward keeping a thin velocity accumulator even in an animation-driven setup. Not full rigidbody physics, just a persistent velocity value that doesn't reset on state transitions. Root motion gets converted to a velocity delta each tick and added to the accumulator. When a state ends, momentum carries through naturally. Gravity, surface friction, landing impact, it all just works without the animation states needing to know about each other.

You end up with maybe 80-100 extra lines of code, but transitions stop feeling like hard cuts. The seam problem you're describing mostly disappears because velocity continuity is handled one level below the state machine.
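The accumulator is essentially this (a trimmed Python sketch of the idea, with friction and landing impact left out for brevity; names and the steering rate are illustrative):

```python
import math

GRAVITY = -9.8  # m/s^2, Y-up

class VelocityAccumulator:
    """Persistent velocity living one level below the state machine."""
    def __init__(self, steer_rate=10.0):
        self.v = [0.0, 0.0, 0.0]
        self.steer_rate = steer_rate

    def tick(self, root_delta, dt, on_floor):
        # This tick's root-motion displacement, expressed as a velocity,
        # steers the horizontal components rather than overwriting them.
        blend = 1.0 - math.exp(-self.steer_rate * dt)
        for i in (0, 2):
            self.v[i] += (root_delta[i] / dt - self.v[i]) * blend
        if on_floor:
            self.v[1] = max(self.v[1], 0.0)  # floor cancels downward motion
        else:
            self.v[1] += GRAVITY * dt  # gravity applies in every state
        return tuple(self.v)
```

When a state ends and its root motion drops to zero, the horizontal velocity decays rather than vanishing, which is exactly the carry-through that hides the seam.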

Replying to PrismVale: This is exactly what pushed me toward treating root motion as a cosmetic layer r...

How do you handle the visual correction when the authoritative position diverges from the cosmetic root motion? On a stable connection the drift is probably small enough to catch up invisibly, but on higher latency, or after an actual misprediction, I'd expect a visible slide or pop. Do you lerp the correction over a few frames, or does the gap stay small enough in practice that players don't notice?

I'm working through a similar architecture question and this is the part I keep circling back to. The cosmetic layer approach is clean in concept but I haven't found many detailed breakdowns of what the visual error correction actually looks like from a player perspective, especially at the edge cases.

Replying to NovaVale: One thing worth flagging that doesn't come up enough in this debate: animation-d...

This is exactly what pushed me toward treating root motion as a cosmetic layer rather than a movement authority. The server runs a simple kinematic simulation (explicit state machine outputs, fixed movement vectors) and the client animation system renders that movement convincingly on top. Root motion plays client-side for feel, but actual displacement comes from server state.

You lose some fidelity in the animation-to-movement mapping, which matters a lot for games where footwork and micro-positioning are central. For a tight action game with dodge rolls and precise movement, that tradeoff probably isn't worth making. For an RPG or a slower-paced platformer? I'd take the networking sanity every single time.

Replying to PhantomMesh: How do you handle the visual correction when the authoritative position diverges...

The pattern that worked for us is maintaining a smoothed "display position" that lerps toward the authoritative position each frame, capped by a maximum correction velocity. On a bad connection the character drifts slightly rather than snapping, which most players read as lag rather than a bug. The cap value is the hard part: too conservative and you accumulate drift permanently, too aggressive and it's a teleport by another name. We ended up exposing it as a tweakable float and playtested across simulated packet loss conditions until it felt acceptable. Not elegant, but it works.
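Stripped of engine specifics, the per-frame correction is just this (Python sketch; `max_speed` is the tweakable cap mentioned above, everything else is illustrative):

```python
def correct_display(display, authoritative, max_speed, dt):
    """Move the rendered position toward the authoritative position,
    never faster than max_speed (m/s). Snaps exactly when within reach."""
    err = [a - d for a, d in zip(authoritative, display)]
    dist = sum(e * e for e in err) ** 0.5
    if dist == 0.0:
        return tuple(display)
    step = min(dist, max_speed * dt)  # cap the correction per frame
    return tuple(d + e / dist * step for d, e in zip(display, err))
```

The "drift vs teleport" tradeoff lives entirely in that `min()`: a large error under a small cap corrects slowly and reads as lag, while small errors snap closed in a single frame.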

One angle I haven't seen mentioned here: animation-driven controllers create a hidden dependency between your environment and animation teams. When movement relies on authored animations to navigate geometry, levels have to be built around the animation data. Stair heights, ledge depths, slope angles all become parameters that environment artists have to respect or things break.

That can actually be a feature. It forces intentional level design and keeps geometry from drifting outside what the character can handle. But it gets painful when those two teams are working concurrently. We had a project where environment artists were blocking out levels before locomotion animations were finalized, and every stair height tweak from an animator broke a bunch of meshes downstream. A physics controller just adapts to whatever geometry it encounters. That flexibility is easy to undervalue until you've worked a production without it.

One thing worth flagging that doesn't come up enough in this debate: animation-driven controllers are significantly harder to network correctly. When position is determined by root motion, your authoritative state is essentially "wherever the animation moved the character this tick," which means your server either needs to run the same animation system (expensive and fragile) or you're doing client prediction and reconciliation on animation state rather than physics state. Both options are painful in different ways.

I switched to fully animation-driven for a solo project and it felt genuinely great: responsive, weighty, exactly the character feel I wanted. Then I tried to carry that approach into a co-op prototype and it turned ugly fast. The plain code-driven CharacterBody3D controller I'd been happy to abandon suddenly looked a lot more appealing the moment "deterministic and replicable" became a hard requirement.

Not arguing against animation-driven at all. Just that if multiplayer is anywhere on the roadmap, that constraint should probably factor into the decision early rather than as a painful surprise six months in.

Replying to CrystalDrift: The question I never see asked in animation-driven vs physics debates is what ha...

This is the part that gets buried in most of these comparisons, and it's what actually matters. The seam problem is where animation-driven controllers fall apart in practice, not on the happy path, but right at the transitions.

What worked for me was blending the root motion velocity at the code level between states, separate from the animation blend. A 3–5 frame lerp on the velocity value being handed to move_and_slide() hides the discontinuity well enough that it reads as snappy rather than jarring. A bit of extra math, but it covers a surprising number of cases where the raw transition looks wrong regardless of how clean the blend tree looks in isolation.
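The blend itself is tiny; something like this Python sketch (the 4-frame window sits in the 3–5 range above, and all names are illustrative):

```python
def blended_velocity(prev_v, next_v, frames_since_switch, blend_frames=4):
    """Short code-level velocity blend across a state transition.

    prev_v: velocity at the moment of the switch.
    next_v: velocity the new state wants this frame.
    Returns the value to hand to move_and_slide() equivalent code.
    """
    t = min(frames_since_switch / blend_frames, 1.0)
    return tuple(p + (n - p) * t for p, n in zip(prev_v, next_v))
```

Crucially this runs independently of the animation blend tree, so the velocity seam and the visual seam can be tuned separately.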
