Been banging my head against this for a couple weeks and figured I'd throw it out here. I'm building a top-down action game where the ground can be cratered by explosions, think soft-body craters that persist through the level. Not full voxel, just a heightmap-based terrain mesh that I'm deforming at runtime.
The naive approach was exactly what you'd expect: explosion hits, I iterate over nearby vertices, displace them along a smoothstep falloff curve, then call mesh.RecalculateNormals(). Works fine for one or two hits, but at around 15-20 deformations in a scene the frame time tanks hard. The profiler showed RecalculateNormals eating 4-6ms on its own every time it ran.
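For reference, the deformation step itself is nothing fancy. Here's a rough language-agnostic sketch in Python of what the per-vertex loop does (the actual code is C# against Unity's Mesh API; `apply_crater` and friends are made-up names for illustration):

```python
import math

def smoothstep(edge0, edge1, x):
    """Hermite interpolation clamped to [0, 1]."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def apply_crater(heights, positions, center, radius, depth):
    """Push height samples down inside `radius`, fading smoothly to zero at the rim.

    heights:   flat list of y-values, one per vertex
    positions: flat list of (x, z) tuples matching `heights`
    """
    cx, cz = center
    for i, (x, z) in enumerate(positions):
        d = math.hypot(x - cx, z - cz)
        if d < radius:
            # falloff is 1.0 at the crater center, 0.0 at the rim
            falloff = 1.0 - smoothstep(0.0, radius, d)
            heights[i] -= depth * falloff
    return heights
```

So cost scales with vertices-in-radius per hit; it's the RecalculateNormals call afterwards that dominated, not this loop.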
So I moved normal recalculation to a compute shader: huge improvement, down to sub-millisecond. But now I'm seeing a different spike: the vertex buffer readback. I'm writing deformation data on the GPU, but I still need CPU-side vertex positions for gameplay queries ("is this tile walkable?", raycasts against the deformed surface), and that round-trip is brutal.
Current thinking is to keep two representations: a low-res CPU heightmap that I deform cheaply on the main thread (just a 2D float array, no mesh involved), and a high-res GPU mesh that purely drives visuals. Gameplay queries go against the heightmap, rendering goes against the GPU mesh. Sync them lazily: the heightmap updates immediately on hit, and the mesh update gets queued for the next frame or two.
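To make the scheme concrete, here's a minimal sketch of what I have in mind, again as illustrative Python rather than the real C# (`TerrainSync`, `flush_to_gpu`, etc. are names I just made up). The key property is that the GPU copy is write-only from the CPU's point of view, so no readback is ever needed:

```python
class TerrainSync:
    def __init__(self, size, cell):
        self.size = size        # heightmap samples per side
        self.cell = cell        # world units per sample
        self.heights = [[0.0] * size for _ in range(size)]
        self.dirty = []         # crater params queued for the GPU mesh

    def on_explosion(self, cx, cz, radius, depth):
        # 1) Update the gameplay heightmap immediately (cheap, low-res).
        r_cells = int(radius / self.cell) + 1
        ix, iz = int(cx / self.cell), int(cz / self.cell)
        for z in range(max(0, iz - r_cells), min(self.size, iz + r_cells + 1)):
            for x in range(max(0, ix - r_cells), min(self.size, ix + r_cells + 1)):
                d = ((x * self.cell - cx) ** 2 + (z * self.cell - cz) ** 2) ** 0.5
                if d < radius:
                    t = d / radius
                    falloff = 1.0 - t * t * (3.0 - 2.0 * t)  # smoothstep rim
                    self.heights[z][x] -= depth * falloff
        # 2) Queue the same crater params for the high-res GPU pass; gameplay
        #    never reads the GPU copy, so no readback, no stall.
        self.dirty.append((cx, cz, radius, depth))

    def flush_to_gpu(self, dispatch):
        # Called once per frame: replay queued craters into the compute dispatch.
        for params in self.dirty:
            dispatch(params)
        self.dirty.clear()
```

The nice part is that both sides consume the same crater parameters, so they can only drift by resolution and by the one-or-two-frame queue latency, not by accumulating different math.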
Has anyone actually shipped something like this? I'm worried about edge cases where the visual and gameplay representations drift in noticeable ways, like a player walking into a crater that visually exists but the heightmap says is still flat. Is the answer just to make the heightmap resolution high enough that the discrepancy isn't perceptible, or is there a smarter sync strategy?
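For what it's worth, the queries I'd run against the heightmap are bilinear samples rather than nearest-cell lookups, which should soften the resolution discrepancy somewhat. Rough sketch (again illustrative Python; `is_walkable` and the slope rule are placeholders, not my real logic):

```python
def sample_height(heights, cell, x, z):
    """Bilinearly interpolate the heightmap at world position (x, z)."""
    gx, gz = x / cell, z / cell
    x0, z0 = int(gx), int(gz)
    x1 = min(x0 + 1, len(heights[0]) - 1)
    z1 = min(z0 + 1, len(heights) - 1)
    fx, fz = gx - x0, gz - z0
    top = heights[z0][x0] * (1 - fx) + heights[z0][x1] * fx
    bot = heights[z1][x0] * (1 - fx) + heights[z1][x1] * fx
    return top * (1 - fz) + bot * fz

def is_walkable(heights, cell, x, z, max_slope=1.0):
    # Approximate local slope by sampling half a cell apart (placeholder rule).
    h0 = sample_height(heights, cell, x, z)
    h1 = sample_height(heights, cell, x + cell * 0.5, z)
    return abs(h1 - h0) / (cell * 0.5) <= max_slope
```

But interpolation only hides so much: a crater rim that's one cell wide on the heightmap will still read very differently from the high-res visual rim, which is the drift case I'm worried about.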
Also curious whether anyone's gone the full voxel route for something like this in an indie context and whether the complexity overhead was worth it. My gut says heightmap is fine for a top-down game but I could be wrong.
