Methods

These are some techniques for making a sonification. While the most common approach is parameter mapping, it's certainly not the only method, and techniques may often overlap.

These methods do not require a computer-based tool (although there are many options available). "Punk" sonification, a manual approach, is also 100% valid.

1. Parameter Mapping

Parameter mapping is the process of translating data values to various audio characteristics, such as pitch, loudness, duration, tempo, timbre, decay, etc. With this method, variables in a data set (whether numeric or categorical) get “mapped” to sound parameters to convey the nature of the data’s trends in an audible format.

Some ideas to get you thinking...

  • Pitch mapping: assigning numeric values to pitch, larger numbers = higher pitch (or vice versa).

  • Timbre mapping: matching categorical values to different musical instruments, each with a distinct tonal quality and texture.

  • Volume mapping: making a sound louder when a data value is present or more prevalent (or vice versa).
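As a minimal sketch, pitch mapping like the first bullet above can be a simple linear rescale from data range to frequency range. The function name, frequency bounds, and readings below are illustrative assumptions, not part of any particular tool:

```python
# Pitch-mapping sketch: linearly rescale numeric data into a frequency range.
# All names and numbers here are illustrative.

def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
    """Map each data value to a frequency between low_hz and high_hz,
    so larger numbers sound as higher pitches."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

# Example: rising CO2-like readings mapped to rising pitches
readings = [315.0, 330.0, 355.0, 380.0, 415.0]
print([round(p, 1) for p in map_to_pitch(readings)])
# [220.0, 319.0, 484.0, 649.0, 880.0]
```

Reversing `low_hz` and `high_hz` gives the "or vice versa" case, and the resulting frequencies can then be rendered by any synthesizer or tone generator.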

Here is an example of a parameter mapping sonification from KQED Science: Hear Climate Data Turned into Music. CO2 concentration values are mapped to the pitch of a continuous tone, and temperature average values are mapped to the pitch and intensity of plucked string sounds.

Here is another example of parameter mapping called Listen to Wikipedia (Note: click “enable sound?” in the top right of the screen to listen). This sonification takes Wikipedia’s recent changes feed and maps edit size to pitch — the larger the edit, the deeper the note. Additions are represented by bells, subtractions are represented by string plucks, and new user arrivals are represented by string swells.

2. Auditory Icons

Auditory icons are sounds derived from the real world that provide the listener with an intuitive association between the sound and some information. Imagine sounds like paper crumpling and being tossed, a heart beating, a broom sweeping up leaves... These can all be used as sonic metaphors in sonification, marking certain events or occurrences within the data. Auditory icons create an analogy between the real world and its digital counterpart.

In this episode of the Loud Numbers podcast, Duncan Geere and Miriam Quick share a sonification of wildfire occurrences in Canada during 2023, one of North America’s largest fire seasons to date. There are several layers to the sonification, including these auditory icons: fires started by humans are represented by the sound of a Zippo lighter, and fires that were started naturally are represented by the sound of wood crackling.

Here’s an episode about auditory icons from the Twenty Thousand Hertz podcast. It delves into the design behind many of the sounds we associate with information on a daily basis.

3. Earcons

Earcons, unlike auditory icons, are sounds which are abstract and thus unassociated with the activities of daily life. A simple “ping” or “bop” might not mean anything on its own, but can be attached to an event which a listener learns to associate with a piece of information. Take for example the ubiquitous iPhone “ding,” which indicates that a text message has just arrived.

In “Nine Rounds a Second: How the Las Vegas Gunman Outfitted a Rifle to Fire Faster,” New York Times journalists used a single earcon — a brief, muted, subtle tone — to compare the firing rates of the weapons used during the Las Vegas shooting (2017) and Orlando nightclub shooting (2016) to that of an automatic weapon.

In an older archived piece from The New York Times, "Fractions of a Second: An Olympic Musical," a single earcon is used to compare the finishing times of athletes during the 2010 Winter Olympics across four different sports for men and women. Since the focus here is timing, it’s helpful for the listener to hear a single tone and focus on how frequently it repeats.
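Under the hood, a timing-focused earcon piece like the Olympics example reduces to a list of onset times for one repeated tone. Here is a small sketch with made-up finishing times; the function name and the stretch factor are assumptions for illustration, not the Times' actual method:

```python
# Earcon sketch: one short tone per athlete, placed in time by their
# finishing margin. The times below are invented, not real Olympic results.

def earcon_onsets(finish_times, stretch=10.0):
    """Map finishing times (in seconds) to tone onsets relative to the
    winner, stretched so fractions of a second become an audible rhythm."""
    winner = min(finish_times)
    return [round((t - winner) * stretch, 2) for t in sorted(finish_times)]

times = [98.32, 98.41, 98.47, 98.90]  # hypothetical downhill results
print(earcon_onsets(times))           # [0.0, 0.9, 1.5, 5.8]
```

Playing the same abstract tone at each onset lets the listener attend purely to rhythm — exactly the quality that makes earcons suited to comparing timings.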

4. Manual / "Punk" Sonification

Contrary to popular belief, you don't need an algorithmic tool or computer program to practice data sonification. You can use manual methods, such as recording your own voice! Check out this example from Jordan Wirfs-Brock called "Voicing a heartbeat (duet)" — two recorded voices speak a "ba-dum" along with their heartbeat data on a particular day.

Note that manual sonification can combine any of the methods above; the only difference is that the sounds are produced or recorded "punk" style — with a phone, your voice, household objects, and so on — rather than generated by software.

A go-to resource to learn about nontraditional sonification is the Open Sonifications Manifesto (Wirfs-Brock, Geere, & Perera), which champions sonification in all its multitudinous forms. It proposes a sonification community that excludes no one, welcoming approaches from all minds and skill levels:

"Open Sonifications is not about high-end production but raw creative energy. This can mean using everyday objects, instead of high technology, to achieve what you want to achieve. For example, logging your data using paper and pencil rather than a spreadsheet, or creating sounds using your body and voice rather than with code."

Check out the supplementary materials which accompany the manifesto (a downloadable ZIP file). Item 6 is a "data and sound recipe" from Jordan Wirfs-Brock which utilizes personal data and vocal performance as a sonification approach!

5. Audification

Audification is the process of taking an original data signal from a natural source and changing its timescale so that it becomes audible to humans. Common applications include the audification of seismic data to hear the impact of earthquakes, or of astronomical data to notice patterns in solar wind and beyond. Consider, for example, this audification of the 2011 Tohoku earthquake in Japan. The ground motion signal over the course of two days is accelerated and condensed into an audio segment of 120 seconds. Thus the magnitude of the earthquake becomes comprehensible to the human ear.
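The timescale change can be sketched as simple decimation: keep every Nth sample, and playback at the original rate multiplies every frequency in the signal by N. The numbers below simulate infrasonic "ground motion" and are illustrative, not real seismic data (and a real pipeline would also filter to avoid aliasing):

```python
# Audification sketch: speed up a signal so its frequencies rise into the
# audible range. Illustrative only — not the actual Tohoku processing.
import math

def audify(samples, speedup):
    """Keep every `speedup`-th sample; played back at the original sample
    rate, every frequency in the signal is multiplied by `speedup`."""
    return samples[::speedup]

# Simulate 60 s of an infrasonic 0.5 Hz oscillation, sampled at 400 Hz
sr = 400
ground = [math.sin(2 * math.pi * 0.5 * n / sr) for n in range(sr * 60)]

# A 600x speedup shifts 0.5 Hz up to 300 Hz, well inside the audible range
audible = audify(ground, 600)
print(len(ground), len(audible))  # 24000 40
```

The Tohoku example works the same way at a larger scale: two days compressed into 120 seconds is roughly a 1440x speedup.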

Here is another useful example and explanation of audification from Robert Alexander at NASA. He explains how converting solar wind signal data directly to audio and compressing it into a short time frame reveals certain activity like solar storms, which wouldn’t be noticeable otherwise.

6. Model-based Sonification
