The pattern that finally untangled my perception system: make the data flow strictly unidirectional. Stimuli get emitted into the environment (sound events, visual alerts, footstep markers, whatever makes sense for your game) and the perception layer subscribes and accumulates. No enemy ever actively queries whether it can see or hear the player. The environment pushes relevant data inward.
This inverts the coupling in a useful way: the AI consumes a perception feed instead of doing its own spatial lookups. Adding a new stimulus type means adding a new emitter, not auditing 12 different enemy perception components to see which ones need updating.
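A minimal sketch of the push model. All names here (`StimulusBus`, `PerceptionComponent`, the stimulus kinds) are illustrative assumptions, not the actual implementation:

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical names throughout -- a sketch of the unidirectional push
# model described above, not the author's actual code.

@dataclass
class Stimulus:
    kind: str               # e.g. "noise", "footstep", "visual_alert"
    position: tuple         # world-space origin of the stimulus
    intensity: float        # emitter-defined strength

class StimulusBus:
    """Emitters push stimuli in; perception components subscribe by kind."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, kind, handler):
        self._subscribers[kind].append(handler)

    def emit(self, stimulus):
        # Data flows one way: the environment pushes, agents never query.
        for handler in self._subscribers[stimulus.kind]:
            handler(stimulus)

class PerceptionComponent:
    """Accumulates stimuli for one agent; the AI reads self.feed later."""
    def __init__(self, bus, kinds):
        self.feed = []
        for kind in kinds:
            bus.subscribe(kind, self.feed.append)

bus = StimulusBus()
guard = PerceptionComponent(bus, ["noise", "footstep"])
bus.emit(Stimulus("footstep", (3.0, 0.0), 0.8))
print(guard.feed[0].kind)  # the guard accumulated the footstep passively
```

Adding a new stimulus type here really is just a new `emit` call site (plus whichever agents choose to subscribe); no existing perception code changes.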
The friction point is stimulus lifetime. A noise event from frame 40 shouldn't still be influencing decisions at frame 400, so you need a decay mechanism. I use exponential decay with per-stimulus half-lives, which works well, but it does mean maintaining a priority queue of decaying signals per agent, which adds overhead at scale. Worth it for the structural cleanliness, but worth knowing about upfront.
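The decay scheme above can be sketched as follows. This is my own reconstruction under stated assumptions (a cutoff of 0.05 below which a stimulus is considered dead, lazy pruning via a min-heap keyed on expiry time); the names and constants are hypothetical:

```python
import heapq
import math

# Sketch of exponential decay with per-stimulus half-lives, pruned via a
# per-agent priority queue. Names and the CUTOFF value are assumptions.

def decayed_intensity(initial, age, half_life):
    """Intensity halves every `half_life` time units."""
    return initial * 0.5 ** (age / half_life)

class DecayingFeed:
    CUTOFF = 0.05  # below this, a stimulus no longer influences decisions

    def __init__(self):
        # Min-heap ordered by expiry time, so the next stimulus to die
        # is always at the front -- this is the per-agent queue overhead.
        self._heap = []  # (expiry, birth, initial, half_life, payload)

    def add(self, now, initial, half_life, payload):
        # Solve initial * 0.5**(age/half_life) = CUTOFF for age:
        # age = half_life * log2(initial / CUTOFF)
        expiry = now + half_life * math.log2(initial / self.CUTOFF)
        heapq.heappush(self._heap, (expiry, now, initial, half_life, payload))

    def active(self, now):
        # Lazily drop fully-decayed stimuli, then report live intensities.
        while self._heap and self._heap[0][0] <= now:
            heapq.heappop(self._heap)
        return [(payload, decayed_intensity(initial, now - birth, half_life))
                for _, birth, initial, half_life, payload in self._heap]

feed = DecayingFeed()
feed.add(now=0.0, initial=1.0, half_life=2.0, payload="noise@door")
print(feed.active(4.0))   # two half-lives elapsed: intensity 0.25
print(feed.active(20.0))  # well past expiry: empty list
```

The heap keeps pruning cheap (pop until the front is still alive), but every agent pays for its own queue, which is the scaling overhead mentioned above.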

