Sound and data parameters
Understanding the variety of ways data can be represented as sound.
When creating a data sonification, there are many choices to make about which aspects of the data are converted to sound and which characteristics of sound are used in that representation. Matt Russo describes some data-related choices here, and Jordan Wirfs-Brock shares some dimensions of audio here.
Data-related parameters:
Mapping function / data selection = which variables of the data set are converted to sound?
Polarity = the direction of the relationship between your data values and audio parameters (e.g. mapping bigger numbers to higher pitch/volume versus mapping bigger numbers to lower pitch/volume)
Range = the span of audio values to which the data is transferred (e.g. what range of musical notes, what volume range, etc.)
Scaling = mathematical relationship between data and audio parameters (e.g. linear, logarithmic, etc.)
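The data-related parameters above (polarity, range, and scaling) can be sketched as a single mapping function. This is a minimal illustration, not from the original text; the function name, the temperature example, and the frequency range are assumptions chosen for the sketch.

```python
def map_value(value, data_min, data_max, out_min, out_max,
              scaling="linear", polarity="positive"):
    """Map one data value into an audio-parameter range.

    scaling:  "linear" or "log" relationship between data and audio.
    polarity: "positive" pairs bigger data with higher output values;
              "negative" inverts that direction.
    """
    # Normalize the data value to the 0..1 range.
    t = (value - data_min) / (data_max - data_min)
    if polarity == "negative":
        t = 1.0 - t
    if scaling == "log":
        # Interpolate in log space; this suits pitch, which listeners
        # perceive logarithmically (each octave doubles the frequency).
        return out_min * (out_max / out_min) ** t
    return out_min + t * (out_max - out_min)

# Example: map temperatures of 0-40 degrees C onto 220-880 Hz (two octaves).
print(map_value(20, 0, 40, 220.0, 880.0, scaling="log"))  # 440.0
```

With log scaling, the midpoint of the data range lands exactly one octave above the bottom of the audio range, which tends to sound more evenly spaced than a linear mapping.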
Dimensions of sound to represent the data:
Pitch = note frequency, i.e. "highness" vs. "lowness"
Timbre / texture = the quality of a sound or tone, the distinct "color" of a sound
Loudness / volume = perceived level of sound that is heard by the listener
Tempo = rhythm and cadence with which sound is played
Duration = the length of time the sound lasts
Panning / stereo image = the position of audio from left to right speaker / headphones
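Several of the sound dimensions above can be combined in one sonification. The sketch below, an illustrative assumption rather than anything from the original text, maps a data series to note events: values drive pitch (as MIDI note numbers), every event shares a fixed duration, and position in the series drives panning from left to right.

```python
def to_events(series, note_low=48, note_high=72, beat=0.25):
    """Sketch: turn a data series into simple note events.

    Each value maps to a MIDI note number (pitch), every event gets the
    same duration (beat, in seconds), and the index in the series drives
    the stereo position from left (-1.0) to right (+1.0).
    """
    lo, hi = min(series), max(series)
    events = []
    for i, v in enumerate(series):
        # Normalize the value; fall back to the middle for a flat series.
        t = (v - lo) / (hi - lo) if hi != lo else 0.5
        note = round(note_low + t * (note_high - note_low))   # pitch
        pan = -1.0 + 2.0 * i / max(len(series) - 1, 1)        # stereo image
        events.append({"note": note, "duration": beat, "pan": pan})
    return events

print(to_events([1, 3, 2]))
```

The event dictionaries could then be handed to any synthesis or MIDI library; which dimension carries which variable is exactly the mapping-function choice described earlier.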