I've been using Move.ai for a short animated film project for about six months now and figured it was time to share what's actually been working and what still drives me crazy.
For context, I'm a one-person operation running this out of a home studio. The film is roughly 12 minutes and features two humanoid characters with a fair amount of dialogue-adjacent physical performance — gesturing, walking, sitting down, picking things up. Nothing acrobatic, but definitely nuanced stuff where cheap-looking animation would kill the mood.
The good: Move.ai's multi-camera setup with iPhones has been surprisingly good for full-body movement. I'm running four iPhone 12 Pros in a rough diamond pattern covering about a 4x4 meter space. For walking, turning, and most upper-body performance, the output is clean enough that I'm spending maybe 20-30 minutes per clip in cleanup rather than the hour-plus I was burning with my old single-camera MediaPipe pipeline.
The export to Blender via FBX has been mostly painless. I'm retargeting onto a custom rig using Rigify as the base, and the bone orientation from Move.ai plays nicely with that workflow about 80% of the time. The other 20% is usually a hip rotation issue that I've mostly fixed with a corrective Python script at import.
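Since a few people asked about the hip fix: the real script runs inside Blender with bpy, so I can't paste it as-is, but the core of it is just pre-multiplying every hip keyframe by a fixed corrective rotation. Here's that math as a standalone sketch — the function names, the Z axis, and the 90-degree offset are illustrative, not anything from Move.ai's export:

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions stored as (w, x, y, z).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw * bw - ax * bx - ay * by - az * bz,
        aw * bx + ax * bw + ay * bz - az * by,
        aw * by - ax * bz + ay * bw + az * bx,
        aw * bz + ax * by - ay * bx + az * bw,
    )

def axis_angle_quat(axis, degrees):
    # Unit quaternion for a rotation of `degrees` about a unit `axis`.
    half = math.radians(degrees) / 2.0
    s = math.sin(half)
    return (math.cos(half), axis[0] * s, axis[1] * s, axis[2] * s)

def correct_hip_keys(keys, offset_degrees=90.0):
    # Pre-multiply every hip keyframe by one fixed corrective rotation
    # (about Z here, purely as an example).
    # `keys` is a list of (frame, quaternion) pairs.
    fix = axis_angle_quat((0.0, 0.0, 1.0), offset_degrees)
    return [(frame, quat_mul(fix, q)) for frame, q in keys]
```

In the actual Blender version the same idea is applied to the hip bone's rotation F-curves right after the FBX import, before retargeting.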
The not-so-good: anything with the hands close to the body — arms crossed, hands in pockets, reaching across the torso — badly confuses the solver. I've had to rotoscope-keyframe a surprising number of hand and forearm corrections. Fast head turns above roughly 45 degrees per second also produce tracking artifacts that are annoying to clean up frame by frame.
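One thing that's saved me time triaging the head-turn problem: instead of scrubbing every clip, I flag frames where the yaw velocity crosses the threshold and only inspect those. A minimal version of that check, assuming you've already pulled one yaw sample per frame out of the export (the function name and defaults are mine, not part of any Move.ai or Blender API):

```python
def flag_fast_turns(yaw_degrees, fps=30.0, limit_deg_per_s=45.0):
    # Return the frame indices where the frame-to-frame yaw speed
    # exceeds `limit_deg_per_s`. `yaw_degrees` holds one sample per frame.
    flagged = []
    for i in range(1, len(yaw_degrees)):
        speed = abs(yaw_degrees[i] - yaw_degrees[i - 1]) * fps
        if speed > limit_deg_per_s:
            flagged.append(i)
    return flagged
```

It's crude (no smoothing, and it doesn't handle the 360/0 wraparound), but it's been good enough to tell me which clips need frame-by-frame attention before I open them.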
I'm about 65% through principal capture. Next up is a scene with two characters interacting at close range, which I expect will be the hardest thing I've tackled yet with this setup. Will report back.
Happy to answer questions about the iPhone rig mounting, lighting setup, or the retargeting workflow if anyone's curious.