wrote a Blender Python script to catch jump cuts in mocap curves, no more frame-scrubbing to find blown tracks


Working with optical mocap data in Blender, I kept running into the same problem: somewhere in a 200-frame take, one or two frames where a marker occlusion or ID swap caused a bone to snap to a completely wrong position. One bad frame in 200 is easy to miss on a visual pass, especially mid-session when you're trying to process 20+ takes.

The usual approach is scrubbing through the timeline or eyeballing the fcurve editor. Both are slow and inconsistent. So I wrote a script that checks every rotation and location fcurve in the active action and flags any frame-to-frame delta above a threshold:

import bpy
import math

THRESHOLD_DEG = 25.0    # rotation jump threshold
THRESHOLD_LOC = 0.3     # location jump threshold (Blender units)

def detect_curve_jumps(action):
    threshold_rot = math.radians(THRESHOLD_DEG)
    violations = []

    for fcurve in action.fcurves:
        dp = fcurve.data_path
        is_rot = 'rotation_euler' in dp
        is_loc = 'location' in dp
        if not (is_rot or is_loc):
            continue

        label = dp
        if 'pose.bones' in dp:
            try:
                label = dp.split('"')[1]
            except IndexError:
                pass

        threshold = threshold_rot if is_rot else THRESHOLD_LOC
        kps = fcurve.keyframe_points
        # mocap bakes usually key every frame, so consecutive-keyframe
        # deltas approximate per-frame velocity; sparse keys would dilute this
        for i in range(1, len(kps)):
            delta = abs(kps[i].co[1] - kps[i - 1].co[1])
            if delta > threshold:
                frame = int(kps[i].co[0])
                deg = round(math.degrees(delta), 2) if is_rot else None
                violations.append((frame, label, fcurve.array_index, deg, round(delta, 4)))

    return sorted(violations, key=lambda v: v[0])

obj = bpy.context.object
if obj and obj.animation_data and obj.animation_data.action:
    results = detect_curve_jumps(obj.animation_data.action)
    print(f"Scan complete — {len(results)} jump(s) flagged")
    for frame, bone, axis, deg, raw in results:
        val = f"{deg}°" if deg is not None else f"delta {raw}"
        print(f"  f{frame:5d}  {bone:25s}  axis[{axis}]  {val}")
else:
    print("Select an armature with an active action first.")

Thresholds need tuning per project. 25° rotation and 0.3 units location work for my capture volume, but anything involving fast motion (slaps, kicks) will generate false positives. It also only catches single-frame spikes, not gradual drift over 10–20 frames. And note that it targets rotation_euler only, so if your rig bakes to quaternions you'd need to adapt it.

The part I'm still unsure about: reporting. Console output feels throwaway when you're mid-cleanup and want to jump to specific frames. I'm thinking about dumping flagged frames into a text block or using fcurve.color to visually tag suspect curves in the editor, but that might be too heavy for a quick scan tool. Curious what people actually want out of something like this.

One thing worth adding to any jump-cut detection pipeline: a way to persist reviewed frames across runs. Once you've manually verified that a snap at frame 87 is intentional, re-flagging it the next time you run the script is pure friction. A simple sidecar file per take, JSON or plain text, just a list of approved frame numbers, would be enough. Script checks the list before flagging, skips anything already marked. Nothing sophisticated, just means you're only ever looking at genuinely new problems instead of rehashing the same takes every session.
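In case it's useful, here's roughly what I mean — a minimal sketch of the sidecar (the filename is a placeholder; the tuples are the `(frame, label, axis, deg, raw)` shape the script above produces):

```python
import json
from pathlib import Path

def load_approved(sidecar_path):
    """Return the set of already-reviewed frame numbers for this take
    (empty if no sidecar file exists yet)."""
    p = Path(sidecar_path)
    return set(json.loads(p.read_text())) if p.exists() else set()

def save_approved(sidecar_path, frames):
    """Persist approved frame numbers as a sorted JSON list."""
    Path(sidecar_path).write_text(json.dumps(sorted(frames)))

def filter_new(violations, approved):
    """Drop any violation whose frame was already signed off."""
    return [v for v in violations if v[0] not in approved]
```

Then it's just `results = filter_new(results, load_approved("take_042.approved.json"))` before printing, and a `save_approved` call once you've reviewed a take.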

Replying to TurboLark: the velocity delta pattern is solid for hard snaps but slow drift is the case th...

The rolling window approach is what finally caught the drift problem for me. Instead of per-frame velocity deltas, run a second pass that compares each bone's position against a low-pass filtered version of its own curve across a window of around 15–20 frames. Slow drift shows up as growing deviation from the smooth baseline even when no individual frame-to-frame delta looks suspicious on its own.

The tricky part is choosing the filter frequency. Too aggressive and you start eating legitimate secondary motion — subtle finger curl, chest breathing. Too loose and the baseline just tracks the drift itself and you catch nothing. I ended up exposing it as a per-bone parameter rather than a global setting. High-frequency bones like fingers need a tight window; hips and spine tolerate a wider one without false positives.
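For anyone who wants to try this: a pure-Python sketch of the idea, using a trailing moving average as the low-pass filter (window and deviation values are placeholders to tune per bone). A trailing baseline lags behind slow drift, so accumulated drift shows up as a steady deviation even when every per-frame delta looks clean:

```python
from collections import deque

def drift_frames(values, window=15, max_dev=0.15):
    """Flag frame indices where the sample deviates from a trailing
    moving-average baseline by more than max_dev. `values` is one
    channel sampled per frame."""
    buf = deque(maxlen=window)  # holds the previous `window` samples
    flagged = []
    for i, v in enumerate(values):
        if len(buf) == window:
            baseline = sum(buf) / window
            if abs(v - baseline) > max_dev:
                flagged.append(i)
        buf.append(v)
    return flagged
```

A constant drift of 0.03 units/frame never trips a 0.3 spike threshold, but with a 15-frame window it sits about 0.24 units off its own baseline, which this catches.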

one thing that helped me after running detection like this: pipe the flagged frames into a CSV sorted by velocity delta descending. legit occlusion artifacts almost always cluster at the top. they tend to be way more violent than intentional snaps in optical data, at least in my experience. makes triage a lot faster than eyeballing a raw list of frame numbers.

also curious: have you tested this on retargeted data? wondering if the thresholds need adjusting once skeleton proportions change, or if velocity magnitude stays consistent enough to reuse the same values.
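if anyone wants the CSV step, a minimal sketch (column names are mine; the tuple layout is the `(frame, bone, axis, degrees, raw_delta)` shape from the original script):

```python
import csv

def write_triage_csv(violations, path):
    """Write flagged frames to CSV, most violent raw delta first, so
    likely occlusion artifacts cluster at the top of the review list."""
    rows = sorted(violations, key=lambda v: v[4], reverse=True)
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "bone", "axis", "degrees", "raw_delta"])
        writer.writerows(rows)
```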

One thing I added to a similar script: a secondary pass that tries to distinguish artifacts from intentional snaps by checking the velocity delta pattern around each flagged frame. A genuine occlusion artifact almost always snaps away and then snaps back toward the previous trajectory: two large velocity spikes with roughly opposite sign close together. An intentional hit reaction or anticipation usually sustains the new position, so you get one large spike followed by relatively low velocity.

Flagging on consecutive high-delta frames rather than single spikes catches artifacts more selectively. Still flags some intentional snaps occasionally, but it cuts the manual review queue down significantly.
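The spike-pattern check can be sketched as a small classifier over per-frame velocities (a hypothetical helper, not part of the original script — window and spike values need tuning like everything else here):

```python
def classify_flag(velocities, frame, window=3, spike=0.3):
    """Heuristic: an occlusion artifact snaps away and back, so a large
    opposite-sign velocity spike follows within a few frames; an
    intentional snap sustains the new pose. velocities[i] is the signed
    per-frame delta value[i] - value[i-1]."""
    v0 = velocities[frame]
    for j in range(frame + 1, min(frame + 1 + window, len(velocities))):
        vj = velocities[j]
        if abs(vj) > spike and (vj > 0) != (v0 > 0):
            return "artifact"  # snapped away, then snapped back
    return "intentional"
```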

The threshold approach makes sense for catching occlusion artifacts, but how do you handle intentional discontinuities like a hit reaction or anticipation snap? Do those get flagged, or do they look different enough from a blown track that the threshold doesn't catch them? That's the edge case I'd worry about when tuning detection sensitivity.

Replying to PulseSpark: The kinematic chain check is a good signal but it has a specific blind spot: ful...

yeah the full-body occlusion case is the one that bites you at the worst moment, always the take you actually needed. the workaround i ended up with: keep a separate "reference bone" list of bones that are almost never occluded in your specific stage layout (top of skull and mid-spine work well for most setups) and run the chain check relative to those. if those are also flagging, you know it's a full-body dropout rather than a single-bone artifact, and you can route it to a different handler, or at minimum log it separately so it doesn't get buried with regular per-bone flags.
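the routing logic is basically this (bone names are placeholders — swap in whatever is almost never occluded on your stage):

```python
def classify_frame(flagged_bones, reference_bones=("head", "spine_02")):
    """Route a flagged frame: if the rarely-occluded reference bones are
    flagging too, it's likely a full-body dropout rather than a
    per-bone artifact. `flagged_bones` is the set of bone names flagged
    on that frame."""
    if any(ref in flagged_bones for ref in reference_bones):
        return "full_body_dropout"
    return "per_bone_artifact"
```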

Replying to EchoMesh: One thing I added to a similar script: a secondary pass that tries to distinguis...

the velocity delta pattern is solid for hard snaps but slow drift is the case that always slips through. no big spike, just a bone creeping 2–3 units over 20 frames from accumulated tracking noise, and the per-frame delta never looks alarming on its own. i ended up adding a rolling average check alongside the point-to-point one. flag anything where the 5-frame mean delta stays consistently above a softer threshold. catches the gradual drift cases that look clean frame-by-frame but are obviously wrong the moment you zoom out on the curve.
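rough sketch of the rolling-mean check, if it helps (window and soft threshold are just the numbers from my setup):

```python
def sustained_delta_frames(values, window=5, soft_threshold=0.1):
    """Flag frames where the mean absolute per-frame delta over the
    last `window` frames stays above a softer threshold -- catches
    drift that looks clean frame-by-frame but accumulates."""
    deltas = [abs(b - a) for a, b in zip(values, values[1:])]
    flagged = []
    for i in range(window - 1, len(deltas)):
        mean = sum(deltas[i - window + 1 : i + 1]) / window
        if mean > soft_threshold:
            flagged.append(i + 1)  # frame index of the later sample
    return flagged
```

a bone creeping 0.12 units/frame never trips a 0.3 spike threshold but its 5-frame mean sits at 0.12, which this catches.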

Replying to EchoMesh: One thing I added to a similar script: a secondary pass that tries to distinguis...

The velocity delta pattern is a solid first pass, but the signal that really helped me reduce false positives was checking neighboring bones. A genuine occlusion artifact or ID swap usually only hits one or two joints. The affected bone snaps while everything adjacent keeps moving smoothly. An intentional hit reaction tends to disturb a wider region more consistently. Not foolproof on its own, but combined with your delta pattern approach it cut my false positive rate significantly without having to raise the threshold so high it started missing real artifacts.
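The neighbor check itself is small — a sketch assuming you've precomputed per-frame absolute deltas for each bone (the data layout and thresholds are placeholders):

```python
def isolated_snap(deltas_by_bone, bone, frame, neighbors, spike=0.3, calm=0.05):
    """True when `bone` spiked at `frame` while its chain neighbors
    stayed smooth on the same frame -- the single-joint signature of an
    occlusion artifact or ID swap. `deltas_by_bone` maps bone name to a
    list of per-frame absolute deltas."""
    bone_hit = deltas_by_bone[bone][frame] > spike
    neighbors_calm = all(deltas_by_bone[n][frame] < calm for n in neighbors)
    return bone_hit and neighbors_calm
```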

Replying to DriftFrame: The threshold approach makes sense for catching occlusion artifacts, but how do ...

They do get flagged, and intentional snaps often have higher velocity than the actual occlusion artifacts, so any threshold that catches blown markers will also catch a hard hit reaction or a sharp anticipation pose. The approach that's worked for me: treat the output as a review list rather than a filter. Reviewing 12 flagged frames is dramatically faster than scrubbing a 200-frame take hunting for 2 bad ones, even if a few flagged frames turn out to be intentional.

The practical tell in the data: occlusion artifacts snap and immediately recover. 1 to 3 frames of weirdness, then the bone returns roughly to where it was. Intentional discontinuities hold their new position. Filtering on "did the bone stay in the post-snap position for at least N frames" would probably auto-dismiss most noise while preserving real artifacts. Haven't implemented it yet but it's the obvious next step.
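Since I haven't built it yet, take this as a sketch of the hold check rather than working pipeline code (N and the tolerance are guesses):

```python
def sustains_new_position(values, snap_frame, hold_frames=4, tolerance=0.1):
    """True if the bone holds its post-snap value for `hold_frames`
    frames (reads as intentional); False if it recovers away from the
    post-snap value (reads as an occlusion artifact)."""
    post = values[snap_frame]
    tail = values[snap_frame + 1 : snap_frame + 1 + hold_frames]
    return all(abs(v - post) < tolerance for v in tail)
```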

Replying to CobaltFern: The velocity delta pattern is a solid first pass, but the signal that really hel...

The kinematic chain check is a good signal but it has a specific blind spot: full-body occlusion. If the actor walks behind a large prop and every tracker loses lock simultaneously, the chain correlation reads as real movement because the whole spine-to-extremity chain fires together, which is exactly the opposite of the artifact signal you're looking for.

What helped for that case: weight the chain check against expected root velocity. If the hip moved significantly on a frame, proportionally large movement in the downstream limbs is kinematically plausible. But if the hip barely moved and a hand or foot snapped several centimeters, that's almost certainly an artifact regardless of chain adjacency. The root-to-extremity velocity ratio filters the full-body occlusion false positives without losing sensitivity for isolated marker swaps, which are the cases you actually care about catching.
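The ratio test reduces to a few lines — a sketch with made-up constants (the floor just keeps a perfectly still hip from dividing by zero; tune both values to your data):

```python
def plausible_chain_motion(root_delta, limb_delta, ratio_limit=4.0, root_floor=0.02):
    """True when an extremity's per-frame delta is kinematically
    plausible given how much the root moved; False when a hand or foot
    snapped while the hip barely moved (likely artifact)."""
    effective_root = max(abs(root_delta), root_floor)
    return abs(limb_delta) / effective_root <= ratio_limit
```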
