I've been going back and forth on this for a new project, a 2D action game with potentially 200–400 active entities at once (mix of projectiles, enemies, environmental hazards). Classic broad-phase problem.
I've used quadtrees before and they work fine, but I keep hearing that spatial hashing is faster in practice for dynamic scenes. The argument makes sense on paper: no tree rebalancing, cache-friendlier memory layout, O(1) insert/remove vs O(log n). But "faster in practice" depends heavily on your specific situation.
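For anyone who hasn't seen one, here's the kind of fixed-cell hash I'm talking about. This is a minimal sketch with my own naming (`SpatialHash`, string cell keys), not any particular library's API:

```javascript
// Minimal fixed-cell spatial hash. Entities are assumed to be points here;
// cell size and naming are my own choices, purely illustrative.
class SpatialHash {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map(); // "cx,cy" -> Set of entities in that cell
  }

  key(x, y) {
    return `${Math.floor(x / this.cellSize)},${Math.floor(y / this.cellSize)}`;
  }

  insert(entity) {
    const k = this.key(entity.x, entity.y);
    if (!this.cells.has(k)) this.cells.set(k, new Set());
    this.cells.get(k).add(entity); // O(1): no tree walk, no rebalancing
  }

  remove(entity) {
    const cell = this.cells.get(this.key(entity.x, entity.y));
    if (cell) cell.delete(entity); // O(1), provided the entity hasn't moved cells
  }

  // Gather everything in the cells overlapping a square of half-extent
  // `radius` around (x, y). Caller still does exact narrow-phase checks.
  queryNearby(x, y, radius) {
    const results = [];
    const minCx = Math.floor((x - radius) / this.cellSize);
    const maxCx = Math.floor((x + radius) / this.cellSize);
    const minCy = Math.floor((y - radius) / this.cellSize);
    const maxCy = Math.floor((y + radius) / this.cellSize);
    for (let cx = minCx; cx <= maxCx; cx++) {
      for (let cy = minCy; cy <= maxCy; cy++) {
        const cell = this.cells.get(`${cx},${cy}`);
        if (cell) results.push(...cell);
      }
    }
    return results;
  }
}
```

The O(1) claims above are the whole sales pitch: insert and remove are a key computation plus a `Map`/`Set` operation, with no structural maintenance.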
My current rebuild-the-quadtree-every-frame setup sits around 0.4ms at moderate entity counts in Chrome. Not a crisis, but not free. I swapped in a fixed-cell spatial hash and got it down to ~0.15ms, but only after a lot of fiddling with cell size. Too small and query time explodes; too large and you're basically brute-forcing.
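The fiddling has a predictable shape, at least for the query side: shrink the cells and the number of cells a query has to visit grows quadratically. A back-of-envelope helper (worst case over alignments; the numbers are illustrative, not from my profiler):

```javascript
// Worst-case number of cells touched by a square query of half-extent r
// with cell size s: an interval of length 2r can straddle at most
// ceil(2r/s) + 1 cells per axis, squared for 2D. Illustrative only.
function cellsTouched(queryHalfExtent, cellSize) {
  const perAxis = Math.ceil((2 * queryHalfExtent) / cellSize) + 1;
  return perAxis * perAxis;
}

cellsTouched(16, 32); // 4 cells: cell roughly matches the query
cellsTouched(16, 4);  // 81 cells: same query, cells too small
```

The other direction (cells too large) doesn't show up in this count at all; it shows up as more entities per cell, which is why tuning feels like squeezing a balloon.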
That cell-sizing problem is what makes me nervous about spatial hashing long-term. Quadtrees adapt to entity density automatically. A spatial hash is tuned to an assumed size range, and if that assumption breaks (variable-size entities, a boss hitbox 10x the size of a projectile), you're re-tuning constantly or maintaining multiple grids.
Things I'm still unsure about:
- For widely varying entity sizes, do you maintain separate hash grids per size tier, or just use the largest entity size as your cell baseline and eat the waste?
- Is hierarchical spatial hashing worth the complexity for a single-player game at this entity count, or is that firmly in "overthinking it" territory?
- Has anyone actually benchmarked these side by side in a real game loop, not a synthetic microbenchmark, and seen a meaningful difference past ~500 entities?
I know the honest answer is "profile your actual game" but I'm curious if anyone's been through this tradeoff and has opinions on which approach ages better as scope grows.