wrote a generic object pool for Godot 4, using process_mode as the free flag. is this weird?

188 views 8 replies

Working on a bullet hell-adjacent game and at around 150+ projectiles onscreen, the instantiate/free overhead started showing up in the profiler even in Godot 4. Wrote a pool class to deal with it, but my approach to tracking which instances are "in use" feels slightly off and I want a gut check.

Instead of a separate is_active boolean or a custom interface, I'm using process_mode as the free indicator:

class_name ObjectPool
extends RefCounted

var _pool: Array[Node] = []
var _scene: PackedScene
var _parent: Node

func _init(scene: PackedScene, parent: Node, prewarm: int = 10) -> void:
    _scene = scene
    _parent = parent
    for i in prewarm:
        var obj := _scene.instantiate()
        # disabled + hidden = "free"; assumes pooled scenes have a
        # `visible` property (CanvasItem / Node3D)
        obj.process_mode = Node.PROCESS_MODE_DISABLED
        obj.visible = false
        _parent.add_child(obj)
        _pool.append(obj)

func acquire() -> Node:
    for obj in _pool:
        if obj.process_mode == Node.PROCESS_MODE_DISABLED:
            obj.process_mode = Node.PROCESS_MODE_INHERIT
            obj.visible = true
            return obj
    # pool exhausted, grow; a fresh instance defaults to
    # PROCESS_MODE_INHERIT and visible, so it's already "in use"
    var obj := _scene.instantiate()
    _parent.add_child(obj)
    _pool.append(obj)
    return obj

func release(obj: Node) -> void:
    obj.process_mode = Node.PROCESS_MODE_DISABLED
    obj.visible = false
The reason I avoided visible alone is that some pooled objects might be legitimately invisible while active: trigger zones, audio emitters, that kind of thing. PROCESS_MODE_DISABLED also has the nice side effect of stopping _process and _physics_process on inactive instances automatically, so they don't eat update budget while sitting in the pool.

The obvious weakness is the linear scan through _pool to find a free slot. For pools under ~50 objects it's negligible, but I haven't benchmarked it at scale yet. The clean fix is a separate free-list queue so acquire is O(1), but I'm trying to figure out if the scan is even a real problem before adding complexity.
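For comparison, the free-list version is only a few lines more. Untested sketch against the same fields as the class above, with a hypothetical _free stack holding available instances:

var _free: Array[Node] = []  # stack of available instances

func acquire() -> Node:
    if _free.is_empty():
        # exhausted: grow, same as before
        var grown := _scene.instantiate()
        _parent.add_child(grown)
        _pool.append(grown)
        return grown
    var obj: Node = _free.pop_back()  # O(1), no scan
    obj.process_mode = Node.PROCESS_MODE_INHERIT
    obj.visible = true
    return obj

func release(obj: Node) -> void:
    obj.process_mode = Node.PROCESS_MODE_DISABLED
    obj.visible = false
    _free.push_back(obj)

The nice part is that the free list doesn't replace the process_mode bookkeeping, it just stops acquire from depending on it.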

The other thing I'm not sure about: the add_child call on the grow path. If the pool exhausts mid-frame and has to grow, that's a scene tree modification during gameplay. Haven't seen it spike yet but it's the kind of thing that would be annoying to debug later.
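If that ever does spike, one way to soften it (a sketch, not benchmarked) is to grow in small batches, so one exhaustion event pays a single batched cost instead of an add_child on every subsequent acquire:

func _grow(count: int) -> void:
    for i in count:
        var obj := _scene.instantiate()
        obj.process_mode = Node.PROCESS_MODE_DISABLED
        obj.visible = false
        _parent.add_child(obj)
        _pool.append(obj)

# on the exhausted path in acquire():
#     _grow(8)  # batch size is arbitrary; tune against the profiler
# then hand out one of the freshly added instances as usual.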

Anyone doing pooling in Godot 4? What's your approach? Specifically curious if the process_mode trick is going to bite me somewhere I'm not thinking of, and whether a free-list queue is worth it at the scale I'm describing.

Replying to ApexMesh: yeah this is the right call. the process_mode approach is clever but it's implic...

Agreed, and there's a debugging benefit that doesn't get mentioned enough. An explicit _in_pool flag is easy to log, inspect in Godot's remote debugger, or set a watch expression on. When something's misbehaving with a pooled object, you grep for _in_pool and see exactly where it gets written.

With process_mode as your availability signal, you're chasing every line in your entire project that sets PROCESS_MODE_DISABLED, which is a much bigger surface area, especially once pause menus, cutscene systems, and anything else that touches the scene tree are involved.

One thing to watch if you're planning to use this pool across scene transitions: make sure the pool and its instances are parented under an autoload (or some other node that survives scene changes), not under the level scene itself. I had a pool that worked perfectly within a level, then silently freed all its "inactive" instances on scene swap because they were children of the outgoing scene. Projectiles would still spawn after the first transition, but with stale or default state, because the pool was quietly re-initializing from scratch. The symptoms were subtle enough that it took me way too long to connect the behavior to the pool lifecycle.

If you do put it in an autoload, also worth explicitly calling some kind of clear() on level unload so you're not accumulating pooled nodes from the previous scene that no longer make sense in the new context.
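Something like this is enough (sketch; assumes the pool keeps its nodes in a _pool array as in the original class):

func clear() -> void:
    for obj in _pool:
        # guard against nodes something else already freed
        if is_instance_valid(obj):
            obj.queue_free()
    _pool.clear()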

The approach makes sense and I've seen similar things work fine, but one thing to watch: if anything else in your codebase sets PROCESS_MODE_DISABLED on a parent node (a pause menu, a cutscene system, a loading screen), every active projectile in that subtree can suddenly look free to the pool. It depends on how you check availability: reading the node's own process_mode property only sees what was set on that node, while can_process() reflects the inherited, effective state. Check the wrong one and a live bullet gets handed out mid-flight.

Might be worth using a dedicated is_pooled boolean property instead, or at least guarding the acquire logic with a position/velocity sanity check. The process_mode trick is clever but it's borrowing semantics from a flag that has other legitimate users.
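If you don't want every pooled scene to carry a script just for that property, node metadata is one way to hang the flag on arbitrary nodes. Untested sketch, with a hypothetical is_pooled meta key:

# availability via metadata, so any Node works without a shared script
func acquire() -> Node:
    for obj in _pool:
        if obj.get_meta("is_pooled", false):
            obj.set_meta("is_pooled", false)
            obj.process_mode = Node.PROCESS_MODE_INHERIT
            obj.visible = true
            return obj
    return null  # or grow, as in the OP's version

func release(obj: Node) -> void:
    obj.set_meta("is_pooled", true)
    obj.process_mode = Node.PROCESS_MODE_DISABLED
    obj.visible = false

The flag is now owned by the pool alone, so pause menus and cutscenes touching process_mode can't corrupt it.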

Replying to ByteMoth: The approach makes sense and I've seen similar things work fine, but one thing t...

The parent process mode issue hit me in a slightly different way. I had a cutscene system that temporarily disabled a gameplay subtree, and pooled instances parented under that subtree got read as inactive by my pool's availability check — so they were re-acquired and double-used while still mid-animation from their previous activation. Only triggered during specific cutscene transitions, took an embarrassingly long time to isolate.

What actually fixed it was moving the pool to an autoload and not parenting pooled instances under any gameplay node at all. The scene tree is slightly less organized but pool state becomes completely independent of whatever the rest of the tree is doing. Probably the right architecture for pools that need to survive scene transitions anyway.

Replying to ByteMoth: The approach makes sense and I've seen similar things work fine, but one thing t...

The parent process mode issue is real. One more thing to watch: make sure your reset logic runs before you re-enable the instance, not after. I had a subtle bug where reclaim() set process_mode = INHERIT first and then called _reset(), leaving a window where the node was live but not yet reset; its _process could fire once with stale data from the previous activation before the reset completed.

Swapping the order to reset-then-enable fixed it immediately. Obvious in hindsight but it took an embarrassingly long time to spot.
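In pool terms the fixed ordering looks like this (sketch; assumes pooled scenes implement a _reset() method):

func reclaim(obj: Node) -> void:
    # reset while the node is still disabled, so no callback
    # can observe pre-reset state...
    obj._reset()
    # ...and only then re-enable processing
    obj.process_mode = Node.PROCESS_MODE_INHERIT
    obj.visible = true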

Replying to AetherSage: yeah this caught me too. ended up dropping process_mode as the pool-membership s...

yeah this is the right call. the process_mode approach is clever but it's implicit — you're reading a side effect as if it were owned state, which will eventually surprise you in ways that are hard to reproduce. an explicit _in_pool flag is like 3 lines and eliminates a whole category of confusing bugs.

i also added a debug assert that fires if you try to acquire an instance that still has _in_pool == false. caught a double-acquire bug in about 30 seconds that would have been genuinely miserable to track down any other way.
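for anyone copying this: assert() in Godot only runs in debug builds, so the check costs nothing in release. rough sketch, assuming the _in_pool flag lives on the pooled instance and a hypothetical activate() that acquires a specific instance:

func activate(obj: Node) -> void:
    # fires on double-acquire: the instance is already out of the pool
    assert(obj._in_pool, "double-acquire on %s" % obj.name)
    obj._in_pool = false
    obj.process_mode = Node.PROCESS_MODE_INHERIT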

Replying to ByteMoth: The approach makes sense and I've seen similar things work fine, but one thing t...

yeah this caught me too. ended up dropping process_mode as the pool-membership signal entirely and just keeping an explicit _in_pool := false flag on each instance. pool checks that before handing anything out. still sets PROCESS_MODE_DISABLED for the actual pause behavior, but the availability check doesn't depend on it — so a paused parent subtree can't silently make an active instance look free.

more explicit, less magic, fewer surprises.

Replying to VoidLeap: Agreed, and there's a debugging benefit that doesn't get mentioned enough. An ex...

yeah and once you have _in_pool explicitly, it costs almost nothing to add a debug method that prints active vs inactive counts. i have a pool.debug_stats() that logs total size, active, available, and peak active ever seen. spotted a leak immediately when "active" kept climbing and never dropped back after waves cleared.

also caught a double-reclaim bug that way. _in_pool was already true when reclaim got called. would've been completely silent without the flag. with it, instant assertion fail on the second call, stack trace pointing right at the problem.
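rough shape of that, for anyone curious (sketch; assumes the explicit _in_pool flag from upthread and a _peak_active field on the pool):

var _peak_active: int = 0

func debug_stats() -> void:
    var active := 0
    for obj in _pool:
        if not obj._in_pool:
            active += 1
    _peak_active = maxi(_peak_active, active)
    print("pool: total=%d active=%d available=%d peak=%d" % [
        _pool.size(), active, _pool.size() - active, _peak_active,
    ])

if "active" keeps climbing across waves, something is acquiring without releasing. that's the leak signature.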
