prop tracking with inertial suits — is anyone actually getting clean data or are we all just keyframing on top?

84 views 10 replies

Been trying to get prop tracking working properly with my Rokoko suit and I'm starting to think the whole pipeline is just fundamentally broken for anything requiring precision.

The setup: Rokoko suit + two extra rigid body trackers (Smartsuit Pro pucks), one on a sword prop, one on a shield. In Rokoko Studio the preview looks okay. Export to FBX, bring it into Blender, and that's where things fall apart. The wrist rotation from the suit body data and the sword tracker are almost never in agreement. There's a consistent angular offset that shifts over time as the inertial drift diverges between the two sensors.

The trackers are literally attached to the same prop the actor is holding. They should agree. They don't.

Things I've tried:

  • Recalibrating the prop tracker relative to the wrist at the start of each take — helps slightly, drifts after ~30 seconds of continuous movement
  • A constraint in Blender that blends between the tracker data and a copy-rotation from the wrist with a manual offset baked in — works for slow takes, completely breaks on anything with a swing
  • Giving up and keyframing the prop attachment by hand over the cleaned-up suit data
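The recalibration idea in the first bullet boils down to capturing a wrist-to-prop rotation offset at calibration time and re-applying it later. A minimal sketch of that math (all names are mine; quaternions are (w, x, y, z) tuples):

```python
# Sketch of the per-take recalibration idea: capture the wrist->prop rotation
# offset once, then drive the prop from the (less drifty) wrist orientation.
# Illustrative names, plain-Python quaternions as (w, x, y, z).

def q_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def capture_offset(q_wrist, q_prop):
    # offset such that q_prop == q_wrist * offset at calibration time
    return q_mul(q_conj(q_wrist), q_prop)

def corrected_prop(q_wrist, offset):
    # later frames: re-derive the prop orientation from the wrist chain
    return q_mul(q_wrist, offset)
```

The catch, as noted above, is that this only holds while the wrist and prop sensors drift together; once they diverge, the captured offset is stale, which is exactly the ~30-second wall.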

That last option is honestly where I land every session. Which makes me wonder what the prop tracker is even for beyond very slow, deliberate movement.

Is anyone actually shipping prop-tracked inertial mocap data clean enough to use directly? Or is manual cleanup and keyframe correction just the accepted reality? I've never had access to an optical setup (Vicon, OptiTrack) so I don't know if that changes things much. Would be curious to hear from anyone who has.

Replying to DriftRay: The setup I've seen actually work for precision prop tracking: go hybrid, but di...

The Vive Tracker hybrid is the only approach I've seen actually hold up for precision prop work. Optical tracking resets accumulated drift before it compounds. It's a fundamentally different error model than stacking IMU corrections, and that difference really shows up on fast motions.

What tripped me up was clock synchronization. SteamVR and my mocap software ran on different internal clocks, and even with NTP sync the jitter was visible as subtle sliding on fast prop motions. I ended up adding a manual time-offset parameter I could dial in per session using a reference motion as an eyeball anchor. Not elegant, but workable.
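The per-session offset dial can be as simple as resampling one stream onto the other's timeline with a constant shift. A rough sketch of that, assuming monotonic timestamps (function and variable names are mine, nothing SteamVR-specific):

```python
# Sketch of the manual time-offset approach: shift one sample stream by a
# per-session offset and linearly interpolate it onto the other stream's
# timestamps. Streams are lists of (time_seconds, value); target_times must
# be increasing. Illustrative names only.

def resample_with_offset(stream, target_times, offset):
    out = []
    i = 0
    for t in target_times:
        ts = t + offset  # shifted lookup time in the source stream
        while i + 1 < len(stream) and stream[i + 1][0] < ts:
            i += 1
        (t0, v0), (t1, v1) = stream[i], stream[min(i + 1, len(stream) - 1)]
        if t1 == t0:
            out.append(v0)  # past the end of the source stream: hold last value
        else:
            a = min(max((ts - t0) / (t1 - t0), 0.0), 1.0)  # clamp at the ends
            out.append(v0 + a * (v1 - v0))
    return out
```

Dialing `offset` against a reference motion until the sliding disappears is the "eyeball anchor" part; the resampling itself is mechanical.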

Did you find base station count made much difference for fast prop movement, or were two stations sufficient to avoid tracking loss through hand occlusion?

The setup I've seen actually work for precision prop tracking: go hybrid, but differently from how most people try it. Instead of adding another IMU to the prop, mount a Vive Tracker on it and run SteamVR alongside the suit in the same capture session. Lighthouse positional data has a completely different error profile from IMU drift: it reads absolute position every frame rather than integrating inertial data, so the prop doesn't accumulate wander over time.

For props where world-space position actually matters (swords, firearms, anything that has to relate spatially to another tracked object), the quality difference is significant. The overhead is real: you need lighthouses in the volume and SteamVR running alongside your capture software. The ongoing pain is timestamp alignment in post; Vive Tracker exports and suit data rarely share a common clock without deliberate sync markers baked in during capture.

Still beats inferring a sword tip position through FK from a wrist-mounted IMU. That result is only as accurate as the cumulative drift on every joint above the hand.
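That FK error is easy to put numbers on: a small orientation error at the wrist displaces the tip by roughly the error angle times the lever arm. A back-of-envelope (numbers are illustrative, not measured):

```python
import math

# Back-of-envelope for the FK point above: orientation drift at the wrist
# displaces the sword tip by the chord it sweeps at the tip's lever arm.

def tip_error_cm(drift_deg, lever_cm):
    theta = math.radians(drift_deg)
    return 2.0 * math.sin(theta / 2.0) * lever_cm  # chord length at the tip

# 3 degrees of wrist drift on a 90 cm blade is already ~4.7 cm at the tip,
# before adding drift from the elbow, shoulder, and spine above it.
```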

gave up on imu prop tracking after two sessions and went hybrid. suit handles body and hands, then i do rough keyframe blocking for the prop using the body animation as a reference layer. way more controllable than spending hours cleaning noisy data that still looks wrong anyway. the only time inertial prop data is actually usable imo is two-handed weapons, since both hand orientations constrain it enough that drift becomes manageable. one-handed props at arm's length? just keyframe it and save yourself the pain.

Replying to ShadowMesh: gave up on imu prop tracking after two sessions and went hybrid. suit handles bo...

same conclusion here, arrived there faster than i'd like to admit. the body animation as a reference layer is genuinely what makes keyframing the prop not miserable, you've got a wrist position to anchor against instead of working completely blind.

one thing that helped a lot: parent the prop to a locator on the wrist bone for the rough blocking pass, bake it, then break the constraint before cleanup. gives you a free first pass that already roughly tracks the hand motion, and you're only correcting the residual error instead of building every keyframe from scratch.
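In plain matrix terms, that baked first pass is just composing a fixed grip offset onto the wrist's world transform every frame; what you keyframe afterwards is only the residual. A minimal sketch with 4x4 row-major matrices (names are mine, not Blender API calls):

```python
# Sketch of the "parent, bake, then break the constraint" pass: while
# constrained, prop_world = wrist_world @ grip_offset on every frame, and
# baking just stores that product as keys. Plain 4x4 row-major matrices.

def mat_mul(a, b):
    return [
        [sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
        for i in range(4)
    ]

def bake_prop_pass(wrist_world_per_frame, grip_offset):
    # one stored world matrix per frame; cleanup then only corrects residuals
    return [mat_mul(w, grip_offset) for w in wrist_world_per_frame]
```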

*slow disappointed nod* yeah, I figured

One thing I haven't seen mentioned: the prop geometry itself heavily constrains which approach is even viable. Flat rigid props like a gun, clipboard, or tablet are ideal for fiducial markers or optical tracking because you have a stable surface to mount things on and the pose is unambiguous from most angles. Cylindrical or round props (a bottle, a staff, a ball) are fundamentally harder because any rotation that doesn't change the visible silhouette is invisible to a camera.

The Vive Tracker approach handles this better than optical markers since it doesn't care about visual geometry, but for cylindrical props I've found you need two trackers at different offsets to get unambiguous roll data, and at that point the physical setup starts affecting how the actor can naturally hold and manipulate the thing, which defeats the purpose.

IMUs at least don't have the geometry ambiguity problem, which is why I haven't fully written them off for round props. The failure mode is drift, not lost tracking, and drift is at least somewhat manageable with short takes and strategic resets.

Two projects spent trying to make inertial prop tracking work reliably, and my conclusion is pretty much the same: the precision requirements for most prop use cases just don't match how IMU drift accumulates over a take. You can throw smoothing filters, constraint rigs, and recalibration intervals at it, but you end up spending budget maintaining the appearance of clean data rather than actually having clean data.

The hybrid approach that keeps coming up here isn't really a workaround. It's an honest read of what inertial suits are good at. Body motion across a long take: great. Stable absolute world-space position for a smaller rigid prop: not what the hardware does. Accepting that early and budgeting keyframe time for props is just practical. Fighting the hardware costs more than designing around it.

Replying to PulseCaster: Different angle that worked well on a recent project: I gave up on suit prop tra...

the ceiling mount is clever for most floor-level action but what happens when the actor's body occludes the tag, like reaching overhead toward the camera, crouching to pick something off the floor, or going prone? did you need multiple camera positions to cover those angles, or did the choreography just avoid them?

asking because i've been eyeing a similar setup but my shoot involves a lot of low-to-the-ground action and i'm not sure one ceiling cam is enough coverage.

Replying to ShadowMesh: gave up on imu prop tracking after two sessions and went hybrid. suit handles bo...

The reference layer approach clicked for me once I started exporting the wrist bone as a guide empty before blocking the prop. Bake it to a plain axes object, hide the suit rig, then animate the prop against actual wrist position data instead of eyeballing from memory. Way less second-guessing whether the prop is lagging the hand, since you can literally see where the grip point is on every frame.

Replying to ShadowMesh: gave up on imu prop tracking after two sessions and went hybrid. suit handles bo...

the reference layer approach is underrated for this. one thing that really helped me with the hybrid workflow: drop a visible marker in the suit data at some big distinctive motion — like a wide arm swing or a foot stomp — and use that as your sync anchor when you're lining up the keyframe layer on top. way more reliable than trusting timecodes to stay aligned across a long take, especially if you're cutting between different recording sessions.
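one way to mechanize that sync anchor: reduce each stream to a 1-D motion-energy signal (per-frame speed works) and slide one against the other, keeping the offset with the highest correlation. the stomp spike dominates the score. rough sketch, assuming both streams share a frame rate (names are mine):

```python
# Sketch of the sync-anchor idea: find the integer frame shift that best
# aligns two motion-energy signals. A big distinctive spike (foot stomp,
# wide arm swing) dominates the correlation score. Illustrative only.

def best_offset(sig_a, sig_b, max_shift):
    # returns shift such that sig_b[i + shift] best matches sig_a[i]
    def score(shift):
        pairs = [
            (sig_a[i], sig_b[i + shift])
            for i in range(len(sig_a))
            if 0 <= i + shift < len(sig_b)
        ]
        return sum(a * b for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)
```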

Different angle that worked well on a recent project: I gave up on suit prop tracking entirely and instead mounted a printed AprilTag marker on the prop, then tracked it with a ceiling-mounted webcam using OpenCV. Wrote a small Python script to pipe the transform matrices into Blender as keyframes.

Yes, it's a 2D projection unless you add a second camera, but for props that mostly operate at table height or in a consistent horizontal plane the accuracy was surprisingly good. Zero drift by definition — each frame is independently solved from the image. Setup took an afternoon including lens calibration.

The obvious weakness is occlusion. Hand over the marker and you lose tracking. But for phones, tools, weapons — anything with a flat surface you can tag — it held up far better than IMU for precise contact moments. Worth a look if your prop work stays in a predictable plane.
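The planar assumption above is what makes a single camera workable: once the prop is confined to a known plane, a single 3x3 homography maps the tag's pixel position to plane coordinates. Something like `cv2.findHomography` over four known reference points would supply the matrix; this sketch only shows the per-frame application (names are mine):

```python
# Sketch of the single-camera planar case: map a detected tag's pixel
# position to coordinates on the known plane via a precomputed 3x3
# homography H. Computing H (e.g. cv2.findHomography) is not shown.

def apply_homography(H, px, py):
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return x / w, y / w  # plane coordinates, e.g. centimetres on the table
```

Each frame solves independently, which is where the zero-drift property comes from: there's no state to accumulate error in.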
