Listening to a galaxy.
Sgr A* does not make a sound. There is no medium between us. The sonification is a translation, with rules. The rules are the point.
A FITS file is a sequence of numbers. So is a sound. Translating between them is not a metaphor. It is a mapping, and a mapping has a definition. 1snob AstroRoom's mapping is published with the audio. So is the source.
— 01 Why sonification at all
Two reasons. The first is access: a person who is blind cannot read the picture but can hear the spectrogram. The second is recognition: the human ear notices repeating patterns in temporal data faster than the human eye notices the same patterns in a static image. A periodic signal at the millihertz scale, the kind you'd find in an X-ray binary, jumps out the moment you hear it and is hard to find by squinting at a light curve.
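The arithmetic behind that claim is plain time compression: speed the playback up and the periodicity moves into hearing range. A minimal sketch, with illustrative function names that are not from any real pipeline:

```python
# Back-of-envelope for time-compressed playback. Nothing here is from the
# AstroRoom codebase; the function names are illustrative only.

def audible_frequency(signal_hz: float, speedup: float) -> float:
    """Pitch a periodic signal lands at after time-compressed playback."""
    return signal_hz * speedup

def required_speedup(signal_hz: float, target_hz: float) -> float:
    """Playback speedup needed to move a signal to a target audible pitch."""
    return target_hz / signal_hz

# A 5 mHz modulation (one cycle every ~3.3 minutes), played back a million
# times faster, sits at 5 kHz -- comfortably inside hearing range.
pitch = audible_frequency(5e-3, 1e6)  # 5000.0 Hz
```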
— 02 How 1snob AstroRoom maps light to sound
1snob AstroRoom uses STRAUSS — a multi-band sonification engine published in JOSS in 2025 — wired to its FITS pipeline. The mapping is fixed for a given preset and published with the audio. The default preset for Sgr A* and similar multi-wavelength targets is:
X-ray → high-frequency tones, 1,200 to 2,400 Hz, spatialised at the centre of the stereo image.
Radio → low bass drones, 80 to 300 Hz, wide stereo field.
Infrared → mid-range tones, 400 to 900 Hz, moderate stereo spread.
Flares → percussive transients triggered by wavelet-detected peaks in the X-ray light curve.
The mapping table is the docstring at the top of backend/app/services/sonification_strauss.py.
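What such a preset could look like as data, plus the linear pitch mapping it implies. The band names and frequency ranges come from the list above; the dict layout, function names, and sine synthesis are assumptions for illustration, not the contents of sonification_strauss.py:

```python
import numpy as np

# Illustrative preset table. Band names and ranges are the published ones;
# the structure and everything below it are assumed, not AstroRoom's code.
PRESET_SGR_A = {
    "xray":     {"freq_hz": (1200.0, 2400.0), "pan": "centre"},
    "radio":    {"freq_hz": (80.0, 300.0),    "pan": "wide"},
    "infrared": {"freq_hz": (400.0, 900.0),   "pan": "moderate"},
}

def band_freq(band: str, norm_flux: float) -> float:
    """Map a normalised flux (0..1) linearly into the band's pitch range."""
    lo, hi = PRESET_SGR_A[band]["freq_hz"]
    return lo + norm_flux * (hi - lo)

def band_tone(band: str, norm_flux: float,
              seconds: float = 1.0, sr: int = 44100) -> np.ndarray:
    """Render a plain sine tone at the mapped pitch."""
    t = np.arange(int(seconds * sr)) / sr
    return np.sin(2.0 * np.pi * band_freq(band, norm_flux) * t)

tone = band_tone("xray", 0.5)  # mid-range X-ray flux -> a 1800 Hz tone
```

The point of keeping the preset as plain data is the same as publishing it: anyone can read the table and recompute the pitch for a given flux value by hand.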
— 03 What you actually hear
A typical Sgr A* clip is a low radio drone underneath a quieter mid-range infrared layer, with bright X-ray pings overhead that flare into clusters during the high-X-ray epochs of 2013 and 2019. When the radio band drops out because the relevant observation didn't include it, the bass disappears mid-clip — and the silence is a piece of information: it says "no radio data here", which is something you knew already but now you feel.
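The flare pings come from peak detection on the X-ray light curve. A simplified stand-in for the idea, using a single Ricker ("Mexican hat") wavelet scale and a sigma threshold; this is not the pipeline's actual detector, only a sketch of how wavelet-detected peaks can trigger percussive transients:

```python
import numpy as np

# Single-scale Ricker-wavelet peak detection: a simplified stand-in for
# the flare detector described in the text, not the real implementation.
def ricker(width: float, n: int) -> np.ndarray:
    t = np.arange(n) - (n - 1) / 2.0
    x = t / width
    return (1.0 - x**2) * np.exp(-(x**2) / 2.0)

def flare_indices(lightcurve: np.ndarray,
                  width: float = 5.0, k: float = 4.0) -> np.ndarray:
    """Indices where the wavelet response is a local maximum above k sigma;
    each one would fire a percussive transient in the audio."""
    kernel = ricker(width, 8 * int(width) + 1)
    resp = np.convolve(lightcurve - lightcurve.mean(), kernel, mode="same")
    thresh = k * resp.std()
    interior = resp[1:-1]
    peaks = (interior > resp[:-2]) & (interior > resp[2:]) & (interior > thresh)
    return np.where(peaks)[0] + 1

# Synthetic light curve: flat noise with two injected flares.
rng = np.random.default_rng(0)
lc = rng.normal(1.0, 0.05, 2000)
lc[700:710] += 3.0
lc[1400:1410] += 2.5
hits = flare_indices(lc)  # indices near each injected flare
```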
— 04 What the rules don't allow
We do not pitch-shift to make things "sound better". We do not add reverb that the data doesn't justify. We do not synthesise a melody to keep the listener engaged. The mapping is a mapping. If the result is monotonous, that is information about the data, not a flaw to be fixed. If the result is dramatic, the drama is in the photons. The sonification preset name is attached to every clip we publish, alongside the FITS source IDs. The listener can rebuild the same audio from the same data.
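"The listener can rebuild the same audio from the same data" implies a provenance record travelling with each clip. A sketch of what such a record might hold; the schema, field names, and OBSID placeholders are invented for illustration, not AstroRoom's actual manifest:

```python
import hashlib
import json

# Hypothetical provenance manifest: preset name + source IDs + a stable
# checksum over both, so a listener can verify they are rebuilding from
# the same inputs. Field names are illustrative.
def clip_manifest(preset: str, fits_ids: list[str]) -> dict:
    payload = {"preset": preset, "fits_sources": sorted(fits_ids)}
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return {**payload, "checksum": digest}

# Sorting means the order the IDs arrive in doesn't change the checksum.
m = clip_manifest("sgr_a_default", ["OBSID-0001", "OBSID-0002"])
```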
— 05 Why this matters for the picture
A sonification is a different reading of the same source. A different sense catches different patterns. A working astronomer can use a sonification to search a long light curve for transients faster than a visual scan. A blind researcher can read a spectrogram. A documentary can score itself with the universe's own translation rather than a synthesiser pretending to be one. The picture is no longer just a picture; it has a parallel.