spatial hashing vs quadtrees for 2D broad-phase collision — which actually wins at scale?

129 views 9 replies

I've been going back and forth on this for a new project, a 2D action game with potentially 200–400 active entities at once (mix of projectiles, enemies, environmental hazards). Classic broad-phase problem.

I've used quadtrees before and they work fine, but I keep hearing that spatial hashing is faster in practice for dynamic scenes. The argument makes sense on paper: no tree rebalancing, cache-friendlier memory layout, O(1) insert/remove vs O(log n). But "faster in practice" depends heavily on your specific situation.

My current quadtree rebuild-every-frame is sitting around 0.4ms at moderate entity counts in Chrome. Not a crisis, but not free. Swapped in a fixed-cell spatial hash and got it down to ~0.15ms, but only after a lot of fiddling with cell size. Too small and query time explodes. Too large and you're basically brute-forcing.

That cell sizing problem is what makes me nervous about spatial hashing long-term. Quadtrees adapt to entity density automatically. A spatial hash is tuned to an assumed size range, and if that assumption breaks (say, variable-size entities, or a boss hitbox that's 10x a projectile), you're re-tuning constantly or maintaining multiple grids.
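For reference, here's roughly the shape of the fixed-cell hash I swapped in. This is a simplified sketch, not my actual code; the entity shape (`{id, x, y, r}`) and all the names are mine:

```javascript
// Minimal fixed-cell spatial hash, rebuilt every frame.
// cellSize is the tuning knob I keep complaining about.
class SpatialHash {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map(); // "cx,cy" -> array of entities
  }
  key(cx, cy) { return cx + "," + cy; }
  insert(e) {
    // Insert into every cell the entity's AABB overlaps, so large
    // entities are still found by queries landing in neighbor cells.
    const c = this.cellSize;
    const x0 = Math.floor((e.x - e.r) / c), x1 = Math.floor((e.x + e.r) / c);
    const y0 = Math.floor((e.y - e.r) / c), y1 = Math.floor((e.y + e.r) / c);
    for (let cx = x0; cx <= x1; cx++) {
      for (let cy = y0; cy <= y1; cy++) {
        const k = this.key(cx, cy);
        let bucket = this.cells.get(k);
        if (!bucket) this.cells.set(k, bucket = []);
        bucket.push(e);
      }
    }
  }
  // Broad phase only: returns candidates whose cells overlap the query
  // circle; the narrow phase still has to do the real overlap test.
  query(x, y, r) {
    const c = this.cellSize, out = new Set();
    const x0 = Math.floor((x - r) / c), x1 = Math.floor((x + r) / c);
    const y0 = Math.floor((y - r) / c), y1 = Math.floor((y + r) / c);
    for (let cx = x0; cx <= x1; cx++) {
      for (let cy = y0; cy <= y1; cy++) {
        const bucket = this.cells.get(this.key(cx, cy));
        if (bucket) for (const e of bucket) out.add(e);
      }
    }
    return out;
  }
  clear() { this.cells.clear(); } // rebuild-every-frame style
}
```

Nothing clever in there, which is sort of the point: all the performance lives in picking `cellSize`.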

Things I'm still unsure about:

  • For widely varying entity sizes, do you maintain separate hash grids per size tier, or just use the largest entity size as your cell baseline and eat the waste?
  • Is hierarchical spatial hashing worth the complexity for a single-player game at this entity count, or is that firmly in "overthinking it" territory?
  • Has anyone actually benchmarked these side by side in a real game loop, not a synthetic microbenchmark, and seen a meaningful difference past ~500 entities?

I know the honest answer is "profile your actual game" but I'm curious if anyone's been through this tradeoff and has opinions on which approach ages better as scope grows.

Replying to EchoWolf: I stopped trying to calculate the right cell size upfront and just profiled it a...

Runtime profiling is the right call. One thing I'd add: optimal cell size can shift a lot depending on game state. Sparse early encounters and dense chokepoints are just different workloads. I ended up doing a slow rolling check every few seconds and snapping between three preset cell sizes based on measured entity distribution. Not something I'd recommend for every project, but if your entity density varies a lot across phases it pays off. Got about a 15% improvement in the densest scenario versus a fixed size tuned for average conditions.
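Simplified sketch of the rolling check, assuming the hash keeps a `cells` map from cell key to entity array. The density metric (entities per occupied cell) and the thresholds here are stand-ins; mine are more game-specific:

```javascript
// Run this every few seconds, not every frame. Returns which of the
// three preset cell sizes to snap to, based on measured occupancy.
const PRESETS = [32, 64, 128]; // fine, medium, coarse (my numbers)

function pickPreset(grid) {
  // Average occupancy of non-empty cells in the current hash.
  let occupied = 0, total = 0;
  for (const bucket of grid.cells.values()) {
    if (bucket.length > 0) { occupied++; total += bucket.length; }
  }
  const avg = occupied ? total / occupied : 0;
  // Dense clusters -> smaller cells so a single bucket doesn't absorb
  // the whole fight; sparse scenes -> larger cells, fewer lookups.
  if (avg > 8) return PRESETS[0];
  if (avg > 3) return PRESETS[1];
  return PRESETS[2];
}
```

Snapping between presets (rather than computing a continuous "optimal" size) keeps the rebuilds rare and predictable.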

Replying to IronBloom: Runtime profiling is the right call. One thing I'd add: optimal cell size can sh...

Do you actually switch cell sizes at runtime, or profile during development and pick a fixed one? Genuinely curious, because switching mid-session means rebuilding the entire hash structure, and that rebuild is most expensive exactly when you least want a hitch (dense chokepoint, lots of occupied cells).

I ended up sidestepping the tuning problem differently: two parallel grids at different granularities. Projectiles go in the fine grid, large enemies in the coarse one. Query only the grid appropriate for each entity type. Probably more memory than necessary, but it killed the "what's the right cell size" question entirely and both grids stay fast for their specific workloads.
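Sketch of the routing, with plain Maps standing in for the real grids; the cell sizes and the cutoff are my numbers. Cross-tier pairs (projectile vs large enemy) still need a lookup in the other grid, which I've left out for brevity:

```javascript
// Two parallel grids at different granularities. Each grid is just a
// Map from "cx,cy" to an entity array; entities are {x, y, r}.
const FINE = { cell: 16, buckets: new Map() };
const COARSE = { cell: 128, buckets: new Map() };
const SIZE_CUTOFF = 24; // radius below this -> fine grid

function gridFor(e) { return e.r < SIZE_CUTOFF ? FINE : COARSE; }

function insert(e) {
  const g = gridFor(e);
  const k = Math.floor(e.x / g.cell) + "," + Math.floor(e.y / g.cell);
  let bucket = g.buckets.get(k);
  if (!bucket) g.buckets.set(k, bucket = []);
  bucket.push(e);
}

// Query only the grid matching the asking entity's own size tier.
function query(e, range) {
  const g = gridFor(e), c = g.cell, out = [];
  const x0 = Math.floor((e.x - range) / c), x1 = Math.floor((e.x + range) / c);
  const y0 = Math.floor((e.y - range) / c), y1 = Math.floor((e.y + range) / c);
  for (let cx = x0; cx <= x1; cx++)
    for (let cy = y0; cy <= y1; cy++)
      for (const other of g.buckets.get(cx + "," + cy) || [])
        if (other !== e) out.push(other);
  return out;
}
```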

One thing that pure benchmark comparisons often miss: cache behavior under non-uniform spatial distributions. If your entities cluster (enemies grouping around the player, projectiles coming from one direction, crowds bottlenecking through a chokepoint), a fixed-cell-size spatial hash can end up with a huge number of collision checks concentrated in a small handful of cells while most of the grid is empty. The average-case numbers look great, but the worst-case spikes are what you actually feel.

Quadtrees handle skewed distributions more gracefully because they subdivide based on actual density. For a game where entity distribution is relatively uniform and predictable, spatial hashing probably wins on raw throughput. For anything with meaningful clustering patterns, I'd benchmark against a distribution that actually matches your typical scenario before committing.

Replying to StealthLattice: One thing that pure benchmark comparisons often miss: cache behavior under non-u...

the clustering mitigation i use: don't iterate the whole grid. keep a separate set of which cells are actually occupied this frame and only touch those. when entities cluster you skip empty cells entirely and cache pressure drops fast. the overhead of maintaining the occupied-cell set is basically nothing compared to what you save when half your entities are stacked in one corner of the map.

for really tight clustering you can also bias your cell size slightly larger than optimal for the average-case and accept a few extra narrow-phase checks. the improved cache locality on the populated cells tends to more than pay for it.
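rough sketch of what i mean, assuming a dense array-backed grid (with a Map-backed hash you get empty-cell skipping for free, so this only matters for flat arrays). same-cell pairs only; neighbor-cell checks left out:

```javascript
// Dense array grid with an occupied-cell set maintained alongside it.
const GRID_W = 64, GRID_H = 64;
const cells = Array.from({ length: GRID_W * GRID_H }, () => []);
const occupied = new Set(); // indices of cells touched this frame

function insert(e, cellSize) {
  const idx = Math.floor(e.y / cellSize) * GRID_W + Math.floor(e.x / cellSize);
  cells[idx].push(e);
  occupied.add(idx);
}

// Broad phase: visit only occupied cells instead of all W*H of them.
function broadPhasePairs() {
  const pairs = [];
  for (const idx of occupied) {
    const bucket = cells[idx];
    for (let i = 0; i < bucket.length; i++)
      for (let j = i + 1; j < bucket.length; j++)
        pairs.push([bucket[i], bucket[j]]);
  }
  return pairs;
}

// Frame reset: clear only the touched cells, then the set itself,
// instead of re-allocating or sweeping the whole grid.
function clearFrame() {
  for (const idx of occupied) cells[idx].length = 0;
  occupied.clear();
}
```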

Most of the benchmarks in this thread describe bounded arenas with reasonably uniform entity density, and spatial hashing wins there pretty reliably. That changes in open-world or large-map scenarios where density varies a lot by region. A quadtree's adaptive subdivision means you're not paying for empty space; spatial hashing with a fixed cell size pays the same cost whether a cell has 40 entities or zero.

I tested this on a 2D survival prototype across a 4096×4096 map. Active entity density near the player's base was roughly 15–20× the density in outer zones. Spatial hash won in populated areas, but the quadtree won overall because it wasn't touching thousands of empty cells during the broad phase. Rough crossover point: once the high-density play area covers less than about 10–15% of the total map, I'd reach for the quadtree first.

Benchmarked both for a similar project, roughly 300 mixed entities in a 2D arena. Spatial hashing won pretty clearly once I found the right cell size, but that tuning step is the real gotcha nobody mentions. Too small and you're iterating over too many empty buckets; too large and you get degenerate cases where everything in a dense cluster ends up in the same few cells. The sweet spot in my case was about 1.5–2x the average entity collision radius, but I arrived at that purely by profiling. I don't have a principled formula for it.

Quadtrees felt more theoretically correct because they adapt to distribution automatically, but the pointer chasing and rebalancing overhead showed up clearly in the profiler even with a reasonable node budget. For a fixed arena with roughly uniform entity sizes, spatial hashing is almost certainly the better call. If your world is huge with wildly uneven entity clustering, quadtrees might reclaim that advantage, but that wasn't my situation.

Replying to CyberPike: how did you actually land on the right cell size? that's the part that always tr...

I stopped trying to calculate the right cell size upfront and just profiled it at runtime. Same stress scenario, three cell sizes (1x, 2x, and 3x median entity radius), timed the broad phase each time. The sweet spot was obvious in about ten minutes. 2x median held in my case too, but when I had a real mix of tiny projectiles and large enemies I ended up running two separate grids rather than hunting for a single size that worked for both. Trying to fit wildly different entity scales into one grid is where the math stops cooperating and you're just guessing anyway.
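Roughly what the profiling loop looked like (simplified; `runBroadPhase` is a stand-in you'd pass your own broad phase into, and the frame count is arbitrary):

```javascript
// Time one stress scenario at 1x/2x/3x the median entity radius and
// report the winner by average broad-phase cost per frame.
function medianRadius(entities) {
  const rs = entities.map(e => e.r).sort((a, b) => a - b);
  return rs[rs.length >> 1];
}

function profileCellSizes(entities, runBroadPhase) {
  const base = medianRadius(entities);
  const results = [];
  for (const mult of [1, 2, 3]) {
    const cell = base * mult;
    const t0 = performance.now();
    for (let frame = 0; frame < 200; frame++) runBroadPhase(entities, cell);
    results.push({ cell, ms: (performance.now() - t0) / 200 });
  }
  results.sort((a, b) => a.ms - b.ms);
  return results[0]; // lowest average frame cost wins
}
```

Run it against your worst-case encounter, not your average one; the dense scenario is the one that actually hitches.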

Replying to RiftFox: Do you actually switch cell sizes at runtime, or profile during development and ...

I profile during development and ship with a fixed size. Rebuilding the entire hash structure mid-session, especially during a dense combat scenario when you actually need the broad phase working well, is almost always a worse tradeoff than whatever suboptimality you were trying to correct. My process: pick two or three representative scenarios (sparse exploration, peak entity density, worst-case encounter), try a handful of cell sizes, and commit to whatever performs best under the worst case. In practice the gap between a good cell size and a theoretically perfect one is maybe 5–10% at scale, and that doesn't justify the rebuild overhead or the extra code path to manage.

Replying to EchoStone: Benchmarked both for a similar project, roughly 300 mixed entities in a 2D arena...

how did you actually land on the right cell size? that's the part that always trips me up. in theory it should be somewhere around 2x your largest entity's bounding radius, but when entity sizes vary a lot (tiny projectiles vs large enemies) any single value feels like a compromise. did you just benchmark a range and pick the winner, or is there a more principled way to approach it?
