six months using Move.ai for a short film, here's where i'm at

I've been using Move.ai for a short animated film project for about six months now and figured it was time to share what's actually been working and what still drives me crazy.

For context, I'm a one-person operation running this out of a home studio. The film is roughly 12 minutes and features two humanoid characters with a fair amount of dialogue-adjacent physical performance — gesturing, walking, sitting down, picking things up. Nothing acrobatic, but definitely nuanced stuff where cheap-looking animation would kill the mood.

The good: Move.ai's multi-camera setup with iPhones has been surprisingly reliable for full-body movement. I'm running four iPhone 12 Pros in a rough diamond pattern covering about a 4x4 meter space. For walking, turning, and most upper-body performance, the output is clean enough that I'm spending maybe 20-30 minutes per clip on cleanup rather than the hour-plus I was burning with my old single-camera MediaPipe pipeline.

The export to Blender via FBX has been mostly painless. I'm retargeting onto a custom rig using Rigify as the base, and the bone orientation from Move.ai plays nicely with that workflow about 80% of the time. The other 20% is usually a hip rotation issue that I've mostly fixed with a corrective Python script at import.
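
For anyone hitting the same hip problem: the corrective pass is conceptually just a fixed rotation offset baked back onto the pelvis keys after import. A stripped-down Blender sketch of that idea is below; the armature name, bone name, and offset angle are placeholders rather than my actual values, and it assumes the FBX import keyed the hip's rotation as a quaternion (Blender's default for bones).

```python
import math

import bpy
from mathutils import Euler

# Placeholders -- point these at your imported armature and tune the offset.
ARMATURE_NAME = "MoveAI_Armature"
HIP_BONE = "hips"
CORRECTION = Euler((math.radians(-90.0), 0.0, 0.0), 'XYZ').to_quaternion()  # example offset only

arm = bpy.data.objects[ARMATURE_NAME]
action = arm.animation_data.action
pbone = arm.pose.bones[HIP_BONE]

# Collect every frame that has a rotation key on the hip bone.
# Assumes the import keyed rotation_quaternion (Blender's default for pose bones).
key_path = f'pose.bones["{HIP_BONE}"].rotation_quaternion'
frames = sorted({int(kp.co.x)
                 for fc in action.fcurves
                 if fc.data_path == key_path
                 for kp in fc.keyframe_points})

# Pre-multiply the offset into the hip's local rotation and re-key each frame.
for f in frames:
    bpy.context.scene.frame_set(f)
    pbone.rotation_quaternion = CORRECTION @ pbone.rotation_quaternion
    pbone.keyframe_insert(data_path="rotation_quaternion", frame=f)
```

Run something like this once right after import, before retargeting onto the Rigify control rig, so the fix only has to exist in one place.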

The not-so-good: anything involving hands close to the body — arms crossed, hands in pockets, reaching across the torso — gets confused badly. I've had to rotoscope-keyframe a surprising number of hand and forearm corrections. Also, fast head turns above roughly 45 degrees per second produce tracking artifacts that are annoying to clean up frame by frame.
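
A quick way to find the offending head-turn frames before scrubbing is to scan the head bone's frame-to-frame rotation speed in Blender. Rough sketch below; the armature and bone names are placeholders, and the threshold is just the ~45 degrees per second figure from above:

```python
import math

import bpy

# Placeholders -- adjust to your scene.
ARMATURE_NAME = "MoveAI_Armature"
HEAD_BONE = "head"
THRESHOLD_DEG_PER_SEC = 45.0

scene = bpy.context.scene
fps = scene.render.fps / scene.render.fps_base
arm = bpy.data.objects[ARMATURE_NAME]
pbone = arm.pose.bones[HEAD_BONE]

prev_quat = None
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    # World-space orientation of the head bone at this frame.
    quat = (arm.matrix_world @ pbone.matrix).to_quaternion()
    if prev_quat is not None:
        # Angle between consecutive frames, converted to degrees per second.
        angle = quat.rotation_difference(prev_quat).angle  # radians
        speed = math.degrees(angle) * fps
        if speed > THRESHOLD_DEG_PER_SEC:
            print(f"frame {f}: head turning ~{speed:.0f} deg/s -- check for artifacts")
    prev_quat = quat
```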

I'm about 65% through principal capture. Next up is a scene with two characters interacting at close range, which I expect will be the hardest thing I've tackled yet with this setup. Will report back.

Happy to answer questions about the iPhone rig mounting, lighting setup, or the retargeting workflow if anyone's curious.

six months is a solid sample size. did you notice any difference in quality between fast-action shots and slow conversational ones? my experience with markerless systems is that they tend to hold up better in the slower moments; fast limb movement is where the model's temporal assumptions start to degrade.

also curious whether you ran any post-process cleanup in MotionBuilder or just took the exported animation straight into your pipeline.

The fast head-turn artifact you're seeing — have you tried adjusting the smoothing and prediction sliders in the Move.ai post-process settings before export? Not all of them are exposed in the UI, but if you dig into the session config JSON there are a couple of undocumented fields for rotation filtering that helped me on similar clips. Head and neck channels seem to be the least stable part of their solver at speed.

six months is a solid evaluation period. the two-character interaction problem is where i've seen Move.ai struggle most — when actors are within roughly an arm's length the depth ambiguity causes limb-swapping errors where a forearm gets attributed to the wrong person for a few frames. keeping actors slightly more separated than feels natural during the take helped me reduce it but didn't eliminate it. curious how your upcoming close-range scene goes.
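
if it's useful: a swap usually shows up as the forearm position teleporting over one or two frames, so scanning for large per-frame jumps flags most of them for review. rough blender sketch, where the armature and bone names are placeholders and the jump threshold depends on your scene scale:

```python
import bpy

# Placeholders -- point these at your two imported performer armatures.
ARMATURES = ["Actor_A", "Actor_B"]
BONES = ["forearm.L", "forearm.R"]
JUMP_THRESHOLD = 0.15  # meters per frame; tune to your scene scale and frame rate

scene = bpy.context.scene
prev = {}
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    for arm_name in ARMATURES:
        arm = bpy.data.objects[arm_name]
        for bone_name in BONES:
            # World-space position of the forearm bone's head at this frame.
            pos = arm.matrix_world @ arm.pose.bones[bone_name].head
            key = (arm_name, bone_name)
            if key in prev:
                jump = (pos - prev[key]).length
                if jump > JUMP_THRESHOLD:
                    print(f"frame {f}: {arm_name}/{bone_name} moved {jump:.2f} m "
                          f"in one frame -- possible limb swap")
            prev[key] = pos
```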
