Defining the limits of perceptibility is an important part of stealth game design. Different stealth games tend to focus on some varieties of perception more than others: light and dark, line of sight, sound levels, speed of movement, etc.
A lot of gameplay in intelAgent involves using distractions to manipulate guards, and the majority of those distractions are sound-based, so it seemed important to have good predictability for the extents of sound perceptibility and high fidelity for the player to manipulate those extents (by opening and closing doors, mainly).
This could be achieved in a variety of ways, but the approach that drew me in was in the direction of simulation. I wanted to try and model the basics of sound propagation, which seemed intuitively to be reflection off, and transmission through, surfaces (walls, doors and windows for our purposes, since our levels will be mechanically 2D). The most obvious way to implement this was simple path-tracing: fire many rays out from the sound source at regular intervals and bounce them off and through walls up to a maximum number of times. My initial implementation just did specular reflection and used a fairly massive number of rays, taking almost 2 seconds to do a full update. The results of this were somewhat disheartening: it was far too slow and the coverage of the sound didn’t even feel very realistic.
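For the curious, a minimal sketch of what that naive version might look like, assuming walls are 2D line segments (all names and numbers here are illustrative, not the actual game code):

```python
import math

# Naive approach: fire rays at regular angular intervals from the sound
# source and specularly reflect them off wall segments, up to a bounce cap.

def reflect(dx, dy, nx, ny):
    """Reflect direction (dx, dy) about unit normal (nx, ny)."""
    dot = dx * nx + dy * ny
    return dx - 2 * dot * nx, dy - 2 * dot * ny

def ray_segment_hit(ox, oy, dx, dy, ax, ay, bx, by):
    """Return (t, nx, ny) where the ray hits segment AB, or None."""
    ex, ey = bx - ax, by - ay
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:
        return None  # ray parallel to segment
    t = ((ax - ox) * ey - (ay - oy) * ex) / denom
    u = ((ax - ox) * dy - (ay - oy) * dx) / denom
    if t > 1e-6 and 0.0 <= u <= 1.0:  # t epsilon avoids re-hitting the same wall
        length = math.hypot(ex, ey)
        return t, ey / length, -ex / length  # hit distance plus a segment normal
    return None

def trace(source, walls, n_rays=360, max_bounces=4):
    """Trace rays from the source, returning one polyline path per ray."""
    paths = []
    for i in range(n_rays):
        ang = 2 * math.pi * i / n_rays
        ox, oy = source
        dx, dy = math.cos(ang), math.sin(ang)
        path = [(ox, oy)]
        for _ in range(max_bounces):
            hits = [h for h in (ray_segment_hit(ox, oy, dx, dy, *w) for w in walls) if h]
            if not hits:
                break
            t, nx, ny = min(hits, key=lambda h: h[0])  # nearest wall wins
            ox, oy = ox + dx * t, oy + dy * t
            path.append((ox, oy))
            dx, dy = reflect(dx, dy, nx, ny)
        paths.append(path)
    return paths
```

Even this simple version makes the performance problem obvious: the cost scales with rays × bounces × walls, and dense coverage needs a lot of rays.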
I started looking at actual academic and industry work on sound propagation for virtual environments. It turns out that diffraction is a fairly prominent part of perceptible sound propagation: human hearing can perceive a wide range of wavelengths, and most noises generate a range of frequencies, so the lower ends of perceptible noises will easily propagate around corners by diffraction of the wave (resulting in “muffled”, bass-y audio). My next attempt was to simply add diffraction to the existing solution, which of course made it many times slower, but did significantly increase the coverage, so that felt nicer. All of this is still without modelling transmission directly through walls, which I started to realise was probably something to avoid anyway, since I suspected it would make sound propagation less predictable when players were trying to manipulate it.
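One cheap way to approximate that bending, sketched below: when a traced ray passes close enough to a wall corner, scatter it in a random direction instead of reflecting it specularly. The distance threshold and the uniform scatter are my own illustrative assumptions, not a proper diffraction model:

```python
import math
import random

# Assumed approximation: rays that pass within `threshold` of a corner are
# treated as diffracting around it, standing in for low frequencies bending
# around edges. Real edge diffraction is frequency-dependent; this is not.

def maybe_diffract(hit_point, corner, direction, threshold=0.5, rng=random):
    """If the hit is within `threshold` of a corner, return a new random
    direction within +/- 90 degrees of the incoming one (the wave "bends"
    around the edge); otherwise return None to signal a normal reflection."""
    dist = math.hypot(hit_point[0] - corner[0], hit_point[1] - corner[1])
    if dist > threshold:
        return None
    base = math.atan2(direction[1], direction[0])
    ang = base + rng.uniform(-math.pi / 2, math.pi / 2)
    return math.cos(ang), math.sin(ang)
```

Because each corner hit now branches randomly, getting good coverage of the shadow regions behind corners takes even more rays, which is where the performance pressure in the next section comes from.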
I wanted to try and get the performance up to a level where players could manipulate the model in at least ersatz real-time, with the update period hidden by animation. I experimented with partial updates (re-tracing just the areas of the world where a wall had changed), but the complexity of the code and data required got out of control and I couldn’t get it to work reliably.
A couple of the more recent academic papers I had looked at mentioned using methods similar to the Metropolis Light Transport algorithm, which I had dismissed offhand as “too much maths” for me (I have basically no background in mathematics and algorithms, so if they aren’t explained in terms of actual code or detailed in layman’s terms I shy away). But since my naive solution seemed to be a no-go, I decided to take a stab at understanding what this intriguingly named algorithm was all about. Really, that name is what drew me in, I think.
I have very little idea how close I got to implementing a Metropolis-Hastings Monte Carlo path tracing algorithm, but what I have now performs great, gets great coverage, and makes sense to me much more than I thought it ever could when I started diving into papers full of strange curvy symbols and inexplicably limited vocabularies when it comes to naming variables (so many dX, dY, dGreeks).
I’ll attempt to summarise the algorithm. It starts out by taking a few stabs at propagating ray paths (each representing a path that a sound wave could take) through the environment, picking directions at random where possible (in our case, at points where the wave would be close enough to a corner to diffract around it), and seeing how far these attempts get. Then, by making usually small, and sometimes larger, changes to the paths (making different choices, or throwing out the entire path and starting a new one), the algorithm tries to find paths that get better coverage. If a path with better coverage is found, it becomes the new path to base changes on. There is also a chance that a path that doesn’t get better coverage may be selected as the new base path: roll a number between 0 and 1, and see if it is smaller than the worseness ratio of the new path vs. the old path. This allows the algorithm to find new corners of the area that sound would reach.
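The accept/reject loop above might look something like this in Python. The path encoding (a list of random decisions), the mutation sizes, and the restart chance are all my guesses for illustration, and `coverage` stands in for whatever function scores how much of the level a path reaches:

```python
import random

# Hedged sketch of a Metropolis-style search over sound paths. A "path" is
# abstracted as a list of decisions in [0, 1]; `coverage` must return a
# non-negative score for the acceptance ratio to make sense.

def mutate(path, rng):
    """Small change: tweak one decision. Large change (10% of the time):
    throw the path away and start a fresh one."""
    if rng.random() < 0.1 or not path:
        return [rng.random() for _ in range(len(path) or 8)]
    new = list(path)
    i = rng.randrange(len(new))
    new[i] = min(1.0, max(0.0, new[i] + rng.uniform(-0.1, 0.1)))
    return new

def metropolis_search(coverage, iterations=1000, seed=0):
    rng = random.Random(seed)
    best = current = [rng.random() for _ in range(8)]
    for _ in range(iterations):
        candidate = mutate(current, rng)
        old, new = coverage(current), coverage(candidate)
        # Accept improvements outright; accept a worse candidate with
        # probability new/old (the "worseness ratio"), so the search can
        # escape local maxima and reach other corners of the audible area.
        if new >= old or rng.random() < new / max(old, 1e-9):
            current = candidate
            if coverage(current) > coverage(best):
                best = current
    return best
```

The key property is that the occasionally-accepted worse paths keep the search moving, while tracking `best` separately means those detours never cost us the best coverage found so far.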
That’s All For Now
There’s still room for improvement: I’d like to get it updating even faster, which I think could be simply achieved by caching a few datasets, since often it seems the updates will be a single door opening and closing as a guard walks through it. I think good results could also be achieved by outputting the results of the algorithm progressively, so if the environment is changing a lot in quick succession time isn’t wasted generating data that will just be thrown out, plus visible results will appear quicker (the initial low detail can hopefully be smoothed over with graphical flair).
This is all for a set of game mechanics that perhaps don’t entirely justify all this effort, but I think it was worth exploring on its own merits.