Been going down a rabbit hole on motion matching after watching some GDC talks and digging through the UE5 motion matching docs. The locomotion results look incredible. No more weird blend tree tuning, no more fighting with foot sliding at transition edges, just smooth, responsive movement that actually looks the way the original mocap did.
But the more I dig in, the more it feels like it's built around assumptions I can't meet. The database size recommendations I keep seeing are 10–30+ minutes of motion data minimum for a convincing locomotion set. That's a lot of capture time, a lot of cleanup, and a lot of storage. UE5's implementation is reasonably accessible, but it's Epic's tooling in Epic's engine. For Godot or Unity projects you're either rolling your own pose matching system or porting something, which is... a lot.
I tried prototyping a stripped-down version: nearest-neighbor pose matching against a small database (about 4 minutes of walk/run/stop/start data) and the results were fine? Better than I'd expect from a naive blend tree, but nowhere near the "it just works" smoothness the demos promise. My guess is I'm just data-starved.
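For reference, the core of what I prototyped is roughly this (a minimal sketch, not my actual code — feature layout, weights, and dimensions are all placeholder assumptions, and a real version would use an acceleration structure instead of brute force):

```python
import numpy as np

def build_feature(pose, trajectory, weights):
    """Concatenate current-pose features (e.g. foot positions/velocities,
    hip velocity) with sampled future-trajectory features into one query
    vector. The weights let you bias matching toward pose vs. trajectory."""
    return np.concatenate([pose * weights["pose"], trajectory * weights["traj"]])

def find_best_frame(db_features, query):
    """Brute-force nearest neighbor: squared distance against every frame."""
    dists = np.sum((db_features - query) ** 2, axis=1)
    return int(np.argmin(dists))

# Toy database: ~4 minutes at 30 fps ≈ 7200 frames, 24-dim features.
# (Real features come from the mocap; random data here just for shape.)
rng = np.random.default_rng(0)
db = rng.standard_normal((7200, 24))

weights = {"pose": 1.0, "traj": 2.0}  # made-up numbers, needs tuning
query = build_feature(rng.standard_normal(16), rng.standard_normal(8), weights)
best = find_best_frame(db, query)
# Play back from frame `best`, re-search every N frames, and blend
# a few frames into the newly selected clip to hide the jump.
```

Even at this size the brute-force search is cheap enough per frame; the hard part was everything around it (feature design, blend-in, avoiding rapid switching between near-identical frames).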
The thing that keeps nagging at me: is the quality improvement from motion matching as a technique, or mostly from the sheer volume and quality of the underlying data? Because if it's the latter, a well-designed blend tree with good mocap probably closes most of the gap at indie scale. And the pipeline cost of motion matching might not be worth it unless you're already sitting on a huge motion library.
Anyone actually shipped something using motion matching outside of Unreal? What did your database end up looking like, and do you think it was worth the investment over a traditional approach?