Every mocap session I work with comes in with 20–60 frames of the actor standing static before they start moving, and another chunk at the end after they've stopped. Not a huge deal per take, but when you're processing 30 takes in one batch export run, trimming each one by hand adds up fast.
The script samples a set of anchor bones (hips, spine, wrists) and walks the frame range looking for the first and last frames where any of them exceed a velocity threshold. Then it snaps action.frame_range to that window with a small padding value.
```python
import bpy
import math


def get_bone_velocity(action, bone_name, frame):
    """Sum of squared per-channel deltas between this frame and the
    previous one, across every F-curve whose data path mentions the
    bone. A cheap motion proxy, not a true world-space velocity."""
    total_delta = 0.0
    for fcurve in action.fcurves:
        if bone_name not in fcurve.data_path:
            continue
        val_curr = fcurve.evaluate(frame)
        val_prev = fcurve.evaluate(frame - 1)
        total_delta += (val_curr - val_prev) ** 2
    return math.sqrt(total_delta)


def find_active_range(action, sample_bones, threshold=0.001, padding=2):
    frame_start = int(action.frame_range[0])
    frame_end = int(action.frame_range[1])
    first_active = frame_end
    last_active = frame_start
    for frame in range(frame_start + 1, frame_end + 1):
        for bone in sample_bones:
            if get_bone_velocity(action, bone, frame) > threshold:
                first_active = min(first_active, frame)
                last_active = max(last_active, frame)
                break
    if first_active > last_active:
        # nothing ever crossed the threshold; leave the take untouched
        return (frame_start, frame_end)
    return (
        max(frame_start, first_active - padding),
        min(frame_end, last_active + padding),
    )


# Run with your armature selected
obj = bpy.context.active_object
if obj and obj.type == 'ARMATURE':
    sample_bones = [
        'mixamorig:Hips',
        'mixamorig:Spine',
        'mixamorig:LeftHand',
        'mixamorig:RightHand',
    ]
    for action in bpy.data.actions:
        if action.users == 0:
            continue
        start, end = find_active_range(action, sample_bones,
                                       threshold=0.002, padding=3)
        # assigning frame_range needs Blender 3.1+; on older builds, set
        # action.use_frame_range = True and frame_start / frame_end instead
        action.frame_range = (start, end)
        print(f"{action.name}: trimmed to [{start}, {end}]")
```
A few things I ran into:
- The threshold needs tuning per capture setup. Rokoko data tends to be noisier at rest than optical capture, so I push it to around 0.005 for those sessions.
- Bone names are Mixamo-specific here, so if your naming convention differs, match by partial string or just pass in a custom list.
- The `action.users == 0` skip is there because Blender loves accumulating orphaned actions that you definitely don't want to process.
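On the threshold-tuning point: since the first chunk of every take is static by definition here, one option is deriving the threshold per take from the rest noise itself instead of hand-tuning per setup. A minimal sketch, pure Python so it works on any list of per-frame velocity samples (`estimate_threshold`, `rest_frames`, and `factor` are my own names, not anything from the Blender API):

```python
# Sketch: derive a per-take threshold from the rest noise at the head of
# the capture. Assumes the first `rest_frames` frames are genuinely static,
# which holds for my takes but is worth verifying for yours.
def estimate_threshold(velocities, rest_frames=15, factor=3.0):
    """velocities: per-frame speed samples for one anchor bone.
    Returns mean + factor * stddev over the assumed-rest window."""
    rest = velocities[:rest_frames]
    mean = sum(rest) / len(rest)
    var = sum((v - mean) ** 2 for v in rest) / len(rest)
    return mean + factor * var ** 0.5
```

You would feed it the output of `get_bone_velocity` for the hips over the opening frames, then pass the result into `find_active_range` as `threshold`.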
One thing I haven't solved yet: takes where the actor holds a static pose intentionally in the middle. The velocity trimmer doesn't touch those since it only trims the ends, but if someone asks for internal pause trimming that's a different problem entirely. Probably needs segment-based gap detection with a minimum gap length so you don't accidentally eat real holds.
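For what it's worth, here's how I'd sketch the segment side of that, operating on per-frame activity flags rather than touching bpy at all (`find_internal_gaps` and `min_gap` are placeholder names I made up; `min_gap=24` assumes roughly a one-second hold at 24 fps):

```python
# Sketch of segment-based gap detection: find internal still runs longer
# than `min_gap` frames. Anything shorter is treated as an intentional
# hold and left alone. Wiring it to get_bone_velocity() is the easy part.
def find_internal_gaps(active_flags, min_gap=24):
    """active_flags: list of bools, one per frame, already end-trimmed.
    Returns (start, end) index pairs of still runs worth cutting."""
    gaps = []
    run_start = None
    for i, active in enumerate(active_flags):
        if not active:
            if run_start is None:
                run_start = i
        elif run_start is not None:
            if i - run_start >= min_gap:
                gaps.append((run_start, i - 1))
            run_start = None
    # a trailing still run is end-trim territory, so it's ignored here
    return gaps
```

The actual cutting (shifting keyframes left to close each gap) is the part I haven't thought through, which is why this stays a sketch.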
Anyone else doing pre-processing passes like this in Blender before batch export? Also genuinely curious whether people handle this in MotionBuilder instead. My pipeline touches both and I'm still not sure which layer is the right place for this kind of cleanup.