Imagine a crowd of pedestrians parting around a street performer, or enemy soldiers spreading out to flank your position. These movement patterns don't come from hand-crafted animations or rigid scripted paths. They emerge from a handful of mathematical rules called steering behaviors.
First formalized by Craig Reynolds in his 1999 GDC paper "Steering Behaviors for Autonomous Characters," this system remains one of the most practical tools in game AI. It operates on a simple premise: instead of dictating exactly where an agent should be, tell it what it wants to do and let physics handle the rest.
The Three-Layer Model
Reynolds organized autonomous movement into three hierarchical layers that cleanly separate concerns:
- Action Selection: Decides what the agent wants — seek a target, flee a threat, idle randomly.
- Steering: Translates that goal into a force vector pointing the right direction with the right urgency.
- Locomotion: Applies that force to velocity and integrates position each frame.
This separation works because the locomotion layer is universal. Whether an agent is fleeing a dragon or ambling toward a tavern, the same physics update runs: add force to velocity, clamp to max speed, add velocity to position. Only the computed steering force changes, making behaviors fully composable.
The Core Steering Equation
Every steering behavior shares one fundamental equation. Compute the desired velocity $\vec{v}_{desired}$ — the velocity needed to accomplish the goal perfectly right now. The steering force is the difference from current velocity:
$$\vec{F}_{steer} = \vec{v}_{desired} - \vec{v}_{current}$$
Clamp this to a maximum force $F_{max}$, then integrate:
$$\vec{v}_{t+1} = \vec{v}_t + \text{clamp}(\vec{F}_{steer},\; F_{max}) \cdot \Delta t$$
$$\vec{p}_{t+1} = \vec{p}_t + \vec{v}_{t+1} \cdot \Delta t$$
When the agent heads in the wrong direction, $|\vec{v}_{desired} - \vec{v}_{current}|$ is large and correction is strong. When already close to the desired heading, the difference is tiny — a gentle nudge. The result is inertial motion that makes characters feel like they have weight and intention.
function applySteering(agent, steeringForce, dt) {
  const f = steeringForce.clone().clampLength(0, agent.maxForce);
  agent.velocity.addScaledVector(f, dt); // F = ma, mass = 1
  agent.velocity.clampLength(0, agent.maxSpeed);
  agent.position.addScaledVector(agent.velocity, dt);
}
Seek: The Foundation Behavior
The simplest behavior: the agent desires to move directly toward a target at maximum speed. The desired velocity is a unit vector toward the target, scaled by $v_{max}$:
$$\vec{v}_{desired} = \hat{d} \cdot v_{max} \qquad \hat{d} = \frac{\vec{target} - \vec{position}}{|\vec{target} - \vec{position}|}$$
A pure Seek agent overshoots and orbits the target perpetually, which works for homing projectiles but not for characters who need to stop. That problem is solved by Arrive.
function seek(agent, targetPos) {
  const desired = targetPos.clone()
    .sub(agent.position)
    .normalize()
    .multiplyScalar(agent.maxSpeed);
  return desired.sub(agent.velocity);
}
Flee: The Mirror of Seek
Flee reverses the desired direction — the agent moves away from the threat at full speed. A flee radius $r_{flee}$ ensures the agent only reacts when the threat is close enough to perceive:
$$\vec{v}_{desired} = \frac{\vec{position} - \vec{threat}}{|\vec{position} - \vec{threat}|} \cdot v_{max} \qquad \text{if } |\vec{position} - \vec{threat}| \leq r_{flee}$$
function flee(agent, threatPos, fleeRadius = 20) {
  const diff = agent.position.clone().sub(threatPos);
  if (diff.length() > fleeRadius) return new THREE.Vector3();
  return diff.normalize().multiplyScalar(agent.maxSpeed).sub(agent.velocity);
}
Arrive: Graceful Deceleration
Arrive adds a slowing radius $r_{slow}$. Outside it, the agent runs at $v_{max}$. Inside, desired speed scales linearly down to zero:
$$v_{desired}(d) = \begin{cases} v_{max} & d \geq r_{slow} \\[4pt] \displaystyle v_{max} \cdot \frac{d}{r_{slow}} & d < r_{slow} \end{cases}$$
At the exact target, desired speed is zero — the agent brakes to a perfect stop. Tune $r_{slow}$ to taste: small values create a sharp stop, large values a sweeping deceleration curve.
function arrive(agent, targetPos, slowRadius = 10) {
  const desired = targetPos.clone().sub(agent.position);
  const dist = desired.length();
  if (dist < 0.01) return new THREE.Vector3();
  const speed = dist < slowRadius
    ? agent.maxSpeed * (dist / slowRadius)
    : agent.maxSpeed;
  return desired.normalize().multiplyScalar(speed).sub(agent.velocity);
}
Wander: Organic Random Motion
Picking a new random direction every frame produces incoherent jitter. Wander fixes this by placing an imaginary circle of radius $r_w$ at distance $d_w$ ahead of the agent, and walking a point around it by a small random angle $j$ each frame.
Circle center projected along current heading:
$$\vec{c} = \vec{position} + \hat{v} \cdot d_w$$
Wander target on the circle:
$$\vec{w} = \vec{c} + r_w \begin{pmatrix} \cos\theta \\ 0 \\ \sin\theta \end{pmatrix}, \qquad \theta_{t+1} = \theta_t + \text{rand}(-j,\; j)$$
Because $\theta$ changes only gradually, the motion traces smooth curves rather than snapping to random headings — the agent drifts with apparent internal motivation, perfect for idle NPCs or ambient wildlife.
function wander(agent) {
  const wanderRadius = 3, wanderDist = 5, jitter = 0.4;
  // Random walk the angle within [-jitter, jitter], matching rand(-j, j).
  // agent.wanderAngle must be initialized once (e.g. to 0).
  agent.wanderAngle += (Math.random() * 2 - 1) * jitter;
  const fwd = agent.velocity.clone().normalize();
  if (fwd.lengthSq() < 0.01) fwd.set(0, 0, 1);
  // Offset from the agent to the wander target: circle center projected
  // ahead along the heading, plus a point on the circle's rim.
  const target = fwd.multiplyScalar(wanderDist);
  target.x += Math.cos(agent.wanderAngle) * wanderRadius;
  target.z += Math.sin(agent.wanderAngle) * wanderRadius;
  return target; // agent-relative offset, used directly as the steering force
}
Obstacle Avoidance
Avoidance projects a detection cylinder ahead of the agent. When it intersects an obstacle, a lateral force steers the agent around it — stronger for imminent collisions, weaker for distant ones:
$$F_{avoid} \propto \frac{1}{d_{ahead}} \cdot \hat{n}_{lateral}$$
where $\hat{n}_{lateral}$ is the unit vector perpendicular to forward, pointing away from the obstacle. In practice, a fan of three to five forward rays gives solid results for most game scenarios without expensive geometry queries.
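A minimal sketch of that ray test, reduced to a single forward ray against circular obstacles for brevity. Plain `{x, z}` vectors replace the article's `THREE.Vector3` so the snippet stands alone; the `lookAhead` and `avoidStrength` values and the `agent.radius` field are illustrative assumptions, not canonical numbers:

```javascript
// Single-ray obstacle avoidance against circle obstacles {center, radius}.
// Returns a lateral force scaled by 1/d_ahead, so near hits dominate.
function avoidObstacles(agent, obstacles, lookAhead = 8, avoidStrength = 10) {
  const speed = Math.hypot(agent.velocity.x, agent.velocity.z);
  if (speed < 1e-6) return { x: 0, z: 0 };
  const fwd = { x: agent.velocity.x / speed, z: agent.velocity.z / speed };

  const force = { x: 0, z: 0 };
  for (const obs of obstacles) {
    const toObs = {
      x: obs.center.x - agent.position.x,
      z: obs.center.z - agent.position.z,
    };
    // Project the obstacle center onto the forward ray.
    const along = toObs.x * fwd.x + toObs.z * fwd.z;
    if (along < 0 || along > lookAhead) continue; // behind us or too far
    // Signed lateral distance from the ray to the obstacle center.
    const lateral = toObs.x * fwd.z - toObs.z * fwd.x;
    if (Math.abs(lateral) > obs.radius + agent.radius) continue; // no hit
    // Push perpendicular to forward, away from the obstacle's side,
    // stronger when the collision is imminent (small `along`).
    const side = lateral > 0 ? 1 : -1;
    const mag = avoidStrength / Math.max(along, 0.1);
    force.x += side * -fwd.z * mag;
    force.z += side * fwd.x * mag;
  }
  return force;
}
```

Swapping the single ray for the three-to-five-ray fan mentioned above is just a loop over slightly rotated copies of `fwd`.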
Combining Behaviors: Weighted Blending
Steering behaviors are most powerful when combined. Each behavior produces a force vector; the final steering force is a weighted sum:
$$\vec{F}_{total} = \sum_i w_i \vec{F}_i$$
function computeSteering(agent, player, obstacles) {
  return new THREE.Vector3()
    .add(seek(agent, player.position).multiplyScalar(0.6))
    .add(wander(agent).multiplyScalar(0.15))
    .add(avoidObstacles(agent, obstacles).multiplyScalar(1.2))
    .clampLength(0, agent.maxForce);
}
Weighted blending has one failure mode: opposing forces can cancel each other, sending an agent straight into a wall it's nominally avoiding. Priority-based truncation fixes this by allocating a "steering budget" to behaviors in order of urgency:
function prioritySteering(agent, behaviors) {
  const result = new THREE.Vector3();
  let budget = agent.maxForce;
  for (const behavior of behaviors) {
    if (budget <= 0.001) break;
    const force = behavior(agent);
    const mag = force.length();
    if (mag <= budget) {
      result.add(force);
      budget -= mag;
    } else {
      result.add(force.normalize().multiplyScalar(budget));
      break;
    }
  }
  return result;
}
Arrange behaviors in urgency order — obstacle avoidance first, separation second, seek/flee third, wander last — and the agent always prioritizes self-preservation over higher-level goals.
Separation and Cohesion for Groups
Two more behaviors handle crowd dynamics:
- Separation: Each agent steers away from nearby neighbors to avoid crowding. Repulsion is stronger for closer agents: $\vec{F}_{sep} = \sum_{i} \hat{d}_i / |d_i|$ where $\hat{d}_i$ points away from neighbor $i$.
- Cohesion: Each agent steers toward the average position of its local group. Combined with separation and velocity alignment, this produces emergent flocking — the foundation of the Boids algorithm covered in a separate article.
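Both behaviors reduce to a loop over nearby agents. A sketch using plain `{x, z}` vectors so it runs without three.js; the `neighborRadius` defaults are illustrative, not tuned values:

```javascript
// Separation: steer away from each neighbor, weighted by 1/distance,
// implementing F_sep = sum_i d_hat_i / |d_i|.
function separation(agent, neighbors, neighborRadius = 5) {
  const force = { x: 0, z: 0 };
  for (const other of neighbors) {
    const dx = agent.position.x - other.position.x;
    const dz = agent.position.z - other.position.z;
    const d = Math.hypot(dx, dz);
    if (d === 0 || d > neighborRadius) continue;
    force.x += (dx / d) / d; // unit direction away, scaled by 1/d
    force.z += (dz / d) / d;
  }
  return force;
}

// Cohesion: steer toward the centroid of the local group.
function cohesion(agent, neighbors, neighborRadius = 15) {
  let cx = 0, cz = 0, count = 0;
  for (const other of neighbors) {
    const d = Math.hypot(agent.position.x - other.position.x,
                         agent.position.z - other.position.z);
    if (d === 0 || d > neighborRadius) continue;
    cx += other.position.x; cz += other.position.z; count++;
  }
  if (count === 0) return { x: 0, z: 0 };
  return { x: cx / count - agent.position.x,
           z: cz / count - agent.position.z };
}
```

In a full implementation the cohesion offset would be fed through seek or arrive rather than used raw, so it respects `maxSpeed`.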
Real-World Game Applications
Steering behaviors ship in virtually every major game with NPC movement:
- GTA series: Pedestrians idle with wander, flee when startled by gunshots (flee radius triggered by sound), and seek destinations on their daily routines. Traffic AI uses arrive to stop at lights.
- Halo (Bungie): Covenant enemies combine seek, flee, and obstacle avoidance. The famous "smart" feel of Elites flanking and taking cover emerges directly from these weighted behaviors interacting with the environment geometry.
- The Sims (Maxis): Sims use arrive to approach furniture and activities, separation to avoid overlapping, and flee-like behaviors to escape fire or overflowing toilets.
- Pac-Man: Each ghost targets a different map tile using seek variants. Blinky targets Pac-Man's current tile (pure seek). Pinky targets four tiles ahead of Pac-Man (predictive seek). Clyde switches between seek and flee based on proximity. All of them are just vector arithmetic evaluated once per frame.
- StarCraft II: Unit movement uses arrive for stop commands, separation to prevent stacking, and obstacle avoidance around terrain. Those units that appear to "think" about approaching an enemy base are running these equations thousands of times per second.
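The "predictive seek" in Pinky's targeting is usually called pursuit: seek where the target will be, not where it is. One common heuristic, sketched here with plain `{x, z}` vectors (the look-ahead formula is one of several reasonable choices):

```javascript
// Pursuit: extrapolate the target's position by its velocity, then
// run plain seek toward that predicted point.
function pursue(agent, target, maxSpeed) {
  const dx = target.position.x - agent.position.x;
  const dz = target.position.z - agent.position.z;
  // Look further ahead the farther away the target is.
  const t = Math.hypot(dx, dz) / maxSpeed;
  const future = {
    x: target.position.x + target.velocity.x * t,
    z: target.position.z + target.velocity.z * t,
  };
  // Standard seek toward the predicted position.
  const fx = future.x - agent.position.x;
  const fz = future.z - agent.position.z;
  const len = Math.hypot(fx, fz) || 1;
  return {
    x: (fx / len) * maxSpeed - agent.velocity.x,
    z: (fz / len) * maxSpeed - agent.velocity.z,
  };
}
```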
Performance Considerations
A single steering force computation costs a handful of vector operations — negligible for up to a few hundred agents. The bottleneck for large crowds is neighbor queries: finding which agents are nearby for separation and cohesion. Brute-force $O(n^2)$ comparison grinds to a halt beyond a few hundred agents. The solutions are the same as for other spatial problems — Spatial Hashing or Quadtrees reduce neighbor lookups to near-$O(1)$. Unity's DOTS system processes millions of steering agents in parallel across CPU cores using SIMD instructions.
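A spatial hash for this purpose fits in a few lines: bucket agents into grid cells, then only scan the cells a query radius can touch. A sketch with plain `{x, z}` positions; the cell size is typically set near the largest neighbor radius in use:

```javascript
// Bucket agents into a Map keyed by integer grid cell.
function buildSpatialHash(agents, cellSize) {
  const grid = new Map();
  const key = (cx, cz) => `${cx},${cz}`;
  for (const a of agents) {
    const k = key(Math.floor(a.position.x / cellSize),
                  Math.floor(a.position.z / cellSize));
    if (!grid.has(k)) grid.set(k, []);
    grid.get(k).push(a);
  }
  return { grid, cellSize, key };
}

// Query: scan only the cells within `radius` of the agent's cell.
function neighborsOf(hash, agent, radius) {
  const { grid, cellSize, key } = hash;
  const result = [];
  const cx = Math.floor(agent.position.x / cellSize);
  const cz = Math.floor(agent.position.z / cellSize);
  const r = Math.ceil(radius / cellSize);
  for (let i = cx - r; i <= cx + r; i++) {
    for (let j = cz - r; j <= cz + r; j++) {
      for (const other of grid.get(key(i, j)) || []) {
        if (other === agent) continue;
        const d = Math.hypot(other.position.x - agent.position.x,
                             other.position.z - agent.position.z);
        if (d <= radius) result.push(other);
      }
    }
  }
  return result;
}
```

Rebuild the hash once per frame, then every separation and cohesion query touches only a handful of cells instead of the whole population.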
For most games — a handful of enemies, a crowd of twenty pedestrians — no optimization is needed. Implement naively, iterate on feel, profile later.
Getting Started
The interactive demo above shows all four core behaviors live. To add steering to your own project:
- Give each agent a `position`, `velocity`, `maxSpeed`, and `maxForce`.
- Each frame, compute a steering force based on current game state.
- Add force to velocity (clamped to `maxForce * dt`), clamp velocity to `maxSpeed`, add velocity to position.
- Set visual rotation from velocity direction: `angle = atan2(vel.x, vel.z)`.
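The steps above can be sketched as one minimal update function, here with a seek-only agent and plain `{x, z}` vectors standing in for three.js:

```javascript
// One full frame of the steering pipeline: force, clamp, integrate, face.
function updateAgent(agent, target, dt) {
  // 1. Steering force: desired velocity minus current velocity (seek).
  const dx = target.x - agent.position.x;
  const dz = target.z - agent.position.z;
  const d = Math.hypot(dx, dz) || 1;
  let fx = (dx / d) * agent.maxSpeed - agent.velocity.x;
  let fz = (dz / d) * agent.maxSpeed - agent.velocity.z;
  // 2. Clamp the force to maxForce.
  const fmag = Math.hypot(fx, fz);
  if (fmag > agent.maxForce) {
    fx *= agent.maxForce / fmag;
    fz *= agent.maxForce / fmag;
  }
  // 3. Integrate velocity, clamp to maxSpeed, integrate position.
  agent.velocity.x += fx * dt;
  agent.velocity.z += fz * dt;
  const v = Math.hypot(agent.velocity.x, agent.velocity.z);
  if (v > agent.maxSpeed) {
    agent.velocity.x *= agent.maxSpeed / v;
    agent.velocity.z *= agent.maxSpeed / v;
  }
  agent.position.x += agent.velocity.x * dt;
  agent.position.z += agent.velocity.z * dt;
  // 4. Facing angle from velocity direction.
  agent.angle = Math.atan2(agent.velocity.x, agent.velocity.z);
}
```

Swapping step 1 for any other behavior (or a weighted blend) leaves steps 2 through 4 untouched, which is exactly the three-layer separation from the start of the article.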
Start with Seek and Arrive. Two behaviors carry you further than you might expect. Add Wander for idle characters, Flee for NPCs reacting to threats, and Separation to prevent agents from stacking. Tune weights and radii until the movement feels right. That tuning process is the craft.