# Tools

There is a wide range of data sonification tools available, each with its own time investment, knowledge requirements, and output capabilities. Some tools are easy to use and require no coding experience; others have a steeper learning curve and take time to set up and learn from their documentation. There are options for everyone. 🔧

While more advanced tools may offer additional customization, they are not necessary for creating excellent sonification pieces. It’s best to start at your comfort level and go from there. Find the methods that work best for you and stay open to experimentation. This is a creative process! 👂

**Hop to a section 🐸 👇**

<table data-column-title-hidden data-view="cards" data-full-width="false"><thead><tr><th></th></tr></thead><tbody><tr><td><a href="#web-applications-and-softwares-level-easy">Easy Tools</a></td></tr><tr><td><a href="#dev-environments-and-softwares-level-intermediate">Intermediate/Advanced Tools</a></td></tr></tbody></table>

<table data-column-title-hidden data-view="cards" data-full-width="false"><thead><tr><th></th></tr></thead><tbody><tr><td><a href="#audio-editing-tools">Audio Editing Tools</a></td></tr><tr><td><a href="#audio-sample-resources">Audio Sample Resources</a></td></tr></tbody></table>

***

## Sonification Tools

### Web Applications and Softwares (Level: <mark style="color:green;">Easy</mark>)

<table data-full-width="true"><thead><tr><th width="148.16796875">Tool</th><th>Description</th><th>Specs</th><th width="100" data-type="content-ref">URL</th></tr></thead><tbody><tr><td>TwoTone</td><td><em>Web-based, no-code sonification tool. Users can adjust instrument, key, octave range, and tempo to customize their sonification output. Additional layers of music can be added, as well as narration audio.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><br><strong>Platform</strong>: Web browser<br><strong>Output Formats</strong>: MP3, WAV, PCM<br><strong>Documentation</strong>: <a href="https://twotone.io/how-it-works/">Getting Started</a>, <a href="https://twotone.io/tutorials/">Tutorials</a>, <a href="https://twotone.io/examples/">Examples</a>, <a href="https://twotone.io/about/">About</a>.</td><td><a href="https://twotone-midiout-beta.netlify.app/">https://twotone-midiout-beta.netlify.app/</a></td></tr><tr><td>Highcharts Sonification Studio</td><td><em>Web-based, no-code sonification tool that generates dynamic audio-visual charts. It allows the user to customize both the visual settings of the chart, as well as a wide range of audio specifications (global and series-specific settings). 
Adjustable audio parameters include duration, precision, range, instrument, pitch, volume, panning, and more.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><br><strong>Platform</strong>: Web browser<br><strong>Output Formats</strong>: Video, Audio Only, Audio as MIDI, Image, Vector Image, CSV Data, Text Description, Highcharts JS Config, HTML file<br><strong>Documentation</strong>: <a href="https://sonification.highcharts.com/#/tutorial">Tutorial</a>, <a href="https://hss-tutorials.github.io/">Community page video tutorials</a>, <a href="https://www.youtube.com/@HSS-Tutorials">YouTube channel</a>, <a href="https://www.highcharts.com/forum/viewtopic.php?f=9&#x26;t=46101">feedback thread</a>.</td><td><a href="https://sonification.highcharts.com/">https://sonification.highcharts.com/</a></td></tr><tr><td>Data Sonifyer</td><td><em>Web-based, no-code sonification app developed by Christian Basl (of</em> <a href="https://www.instagram.com/sonifriday/"><em>SoniFriday</em></a><em>, a sonification duo with Berit Kruse). It has an intuitive and simple user interface, with helpful documentation in the pop-out side panel. It allows the user to upload CSV data, and adjust audio parameters such as instrument, tempo, frequency, amplitude, filter, envelope, rhythm, and effect (see “add sound module” button). 
The user can export the sonification output by using the “record” feature in the app and downloading the resulting WAV file.</em></td><td><p><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark></p><p><strong>Platform</strong>: Web browser<br><strong>Output Formats</strong>: WAV<br><strong>Documentation</strong>: <a href="https://datasonifyer.de/en/beispiele/">Examples</a>, <a href="https://datasonifyer.de/en/learn/">Tutorials</a>, <a href="https://datasonifyer.de/en/discover/">Discover</a>, <a href="https://datasonifyer.de/en/about/">About</a><br><strong>Note</strong>: <em>To find help within the app, click on the musical notes 🎵 in the upper left corner for the how-to side panel to appear.</em></p></td><td><a href="https://studio.datasonifyer.de/en">https://studio.datasonifyer.de/en</a></td></tr><tr><td>csv-to-midi</td><td><em>Web application created by</em> <a href="https://evanking.io/posts/csv-to-midi/"><em>Evan King</em></a> <em>that allows a user to upload a CSV file, adjust audio parameters, and export a MIDI file. Adjustable audio parameters include duration, musical key, musical scale, and note range. For inspiration, check out Evan King’s sonification of sea level data in a project called “Bait/Switch.” This ambient composition transforms sea level data into a digital underwater soundscape.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><br><strong>Platform</strong>: Web browser<br><strong>Output Formats</strong>: MIDI<br><strong>Documentation</strong>: <a href="https://github.com/evmaki/csv-to-midi">GitHub Documentation</a><br><strong>Note</strong>: <em>Click the question mark symbol ❓ in the upper right corner of the webpage for a brief explanation of the tool.</em></td><td><a href="https://csv-to-midi.evanking.io/">https://csv-to-midi.evanking.io/</a></td></tr><tr><td>Data Mapper</td><td><em>Tool that allows users to linearly map a CSV file to a range of values that you supply. 
The user defines a list of musical notes in ISO (International Organization for Standardization) format (e.g., B3, C4, D4, etc.), and the CSV data gets mapped to the appropriate note.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><br><strong>Platform</strong>: Web browser (Observable)<br><strong>Output Format</strong>: Information (list of musical notes)</td><td><a href="https://observablehq.com/@duncangeere/data-mapper">https://observablehq.com/@duncangeere/data-mapper</a></td></tr><tr><td>StarSound</td><td><em>Downloadable sonification application compatible with Mac OS X. It is designed as a standalone tool for sonifying multidimensional datasets. The interface includes a visualization of the uploaded data and selected variables, and an array of modules for customizing the audio output. Within a given module, users can adjust frequency, loudness, duration, instrument, and more. The application offers a range of play modes, allowing users to play, loop, or record their sonification. Developed by</em> <a href="https://www.jeffreyhannam.com/"><em>Jeffrey Hannam</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><br><strong>Platform</strong>: Downloadable software<br><strong>Output Formats</strong>: WAV<br><strong>Documentation</strong>: <a href="https://youtu.be/VOuTKQs9j7I?si=CxCY66BjHUBCRJC6">StarSound Tutorial</a></td><td><a href="https://www.jeffreyhannam.com/starsound">https://www.jeffreyhannam.com/starsound</a></td></tr></tbody></table>
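The Data Mapper entry above describes the core move shared by many of these tools: linearly scaling data values onto a user-supplied list of notes. As a rough illustration, here is a minimal pure-Python sketch of that idea; the `map_to_notes` helper is hypothetical and not part of Data Mapper or any tool listed here.

```python
# Hypothetical sketch of linear data-to-note mapping, as described for
# tools like Data Mapper: scale each value into an index over a note list.
def map_to_notes(values, notes):
    """Linearly map numeric values onto a list of note names."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1  # avoid division by zero for constant data
    mapped = []
    for v in values:
        # Position of v in [0, 1], scaled to an index into the note list.
        idx = round((v - lo) / span * (len(notes) - 1))
        mapped.append(notes[idx])
    return mapped

print(map_to_notes([2.0, 3.5, 5.0], ["B3", "C4", "D4", "E4", "G4"]))  # ['B3', 'D4', 'G4']
```

Values at the minimum and maximum of the data land on the first and last notes; everything in between is rounded to the nearest note in the list.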

### Dev Environments and Softwares (Level: <mark style="color:yellow;">Intermediate</mark>)

<table data-full-width="true"><thead><tr><th width="178.36328125">Tool</th><th>Description</th><th>Specs</th><th width="100" data-type="content-ref">URL</th></tr></thead><tbody><tr><td>Sonic Pi</td><td><em>Downloadable software, designed as a tool for code-based music creation and performance. It is highly flexible, allowing for audio in/out, MIDI in/out, Open Sound Control (OSC) in/out, and interaction with Ableton Live. Sounds produced from Sonic Pi are extremely customizable, offering a wide range of parameter customization, sample manipulation, instrument selection, programming structures, and effects. While the tool is geared towards “live coding,” it can be used to design and export a data-driven composition. Developed by</em> <a href="https://github.com/samaaron"><em>Sam Aaron</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><strong>/</strong><mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Downloadable software<br><strong>Output Formats</strong>: WAV, live playback<br><strong>Documentation</strong>: <a href="https://sonic-pi.net/tutorial.html">Sonic Pi Tutorial</a>, <a href="https://sonic-pi-studio.teachable.com/p/sonic-pi-introduction">Sonic Pi Course</a> ($25), <a href="https://in-thread.sonic-pi.net/">Sonic Pi Community</a>, <a href="https://in-thread.sonic-pi.net/t/sonic-pi-online-resources/17">List of Resources for Learning Sonic Pi</a>, <a href="https://www.youtube.com/@SamAaron">Sam Aaron’s YouTube Channel</a>, and many more tutorials on YouTube.</td><td><a href="https://sonic-pi.net/">https://sonic-pi.net/</a></td></tr><tr><td>MIDITime (Python)</td><td><em>Python library that converts any kind of time series data into pitch, velocity and duration values based on musical options adjusted by the user. Developed by</em> <a href="https://mappingprejudice.umn.edu/about-us/team/michael-corey"><em>Michael Corey</em></a><em>. 
MIDITime was used to produce the data sonification in</em> <a href="https://revealnews.org/podcast/power-struggle-the-perilous-price-of-americas-energy-boom/#segment-oklahomas-man-made-earthquakes"><em>this episode of Reveal</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Python package / dev environment<br><strong>Output Formats</strong>: MIDI file<br><strong>Documentation</strong>: <a href="https://pypi.org/project/miditime/">https://pypi.org/project/miditime/ </a></td><td><a href="https://github.com/mikejcorey/miditime">https://github.com/mikejcorey/miditime</a></td></tr><tr><td>audiolazy (Python)</td><td><em>Python package for digital audio signal processing (DSP). Helpful for converting MIDI and frequency values. Used in Matt Russo's</em> <a href="https://youtu.be/DUdLRy8i9qI?si=86U2XBw5sB3bIKQf"><em>sonification tutorial</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Python package / dev environment<br><strong>Output Formats</strong>: MIDI file<br><strong>Documentation</strong>: <a href="https://medium.com/@astromattrusso/sonification-101-how-to-convert-data-into-music-with-python-71a6dd67751c">Convert music to data with Python</a>, <a href="https://pypi.org/project/audiolazy/">audiolazy docs</a></td><td><a href="https://pypi.org/project/audiolazy/">https://pypi.org/project/audiolazy/</a></td></tr><tr><td>MIDIUtil (Python)</td><td><em>Python library for writing MIDI files. 
Used in Matt Russo's</em> <a href="https://youtu.be/DUdLRy8i9qI?si=86U2XBw5sB3bIKQf"><em>sonification tutorial</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Python package / dev environment<br><strong>Output Formats</strong>: MIDI file<br><strong>Documentation</strong>: <a href="http://midiutil.readthedocs.io/">Read the Docs</a>, <a href="https://pypi.org/project/MIDIUtil/">MIDIUtil Python page</a></td><td><a href="https://pypi.org/project/MIDIUtil/">https://pypi.org/project/MIDIUtil/</a></td></tr><tr><td>Astronify (Python)</td><td><em>Python package for sonifying astronomical data. It sonifies light curve data by representing changes in brightness as changes of pitch. Users can supply a data table containing two columns representing time and flux. Various parameters can be adjusted, and the program uses a default algorithm that converts data (an array of float values) into pitch (values in Hz).</em></td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><strong>/</strong><mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Python package / dev environment<br><strong>Output Formats</strong>: Audio files (WAV, etc.)<br><strong>Documentation</strong>: <a href="https://astronify.readthedocs.io/en/latest/astronify/install.html">Installation</a>, <a href="https://astronify.readthedocs.io/en/latest/astronify/index.html">Documentation</a>, <a href="https://astronify.readthedocs.io/en/latest/astronify/api.html">API</a>, <a href="https://astronify.readthedocs.io/en/latest/astronify/tutorials.html">Tutorials</a>, <a href="https://github.com/spacetelescope/astronify">GitHub</a>, <a href="https://stsci.app.box.com/s/39y5185udfvcxof89a242p7g0qowfww8">sonification examples</a>, <a href="https://www.youtube.com/playlist?list=PLCPZgcYzVpj9dARO11HCUD_YxXw4fr_ef">explanatory videos</a> </td><td><a 
href="https://astronify.readthedocs.io/">https://astronify.readthedocs.io/</a></td></tr><tr><td>STRAUSS (Python)</td><td>A flexible Python package for sonification, aimed at data analysis and/or accessible communications. It allows for highly customizable parameter mapping and even spectral audification. Users can synthesize audio or manipulate sound samples. Built by astrophysicists at <a href="https://www.audiouniverse.org/">Audio Universe</a>. </td><td><strong>Level</strong>: <mark style="color:green;"><strong>Easy</strong></mark><strong>/</strong><mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Python package / dev environment<br><strong>Output Formats</strong>: Audio files (WAV, etc.)<br><strong>Documentation</strong>: <a href="https://strauss.readthedocs.io/en/latest/index.html">Read the Docs</a>, <a href="https://www.audiouniverse.org/research/strauss">Audio Universe page</a>, <a href="https://www.youtube.com/@audiouniverse8137">YouTube channel</a>, <a href="https://strauss.readthedocs.io/en/latest/examples.html">Google Colab tutorials</a>. </td><td><a href="https://strauss.readthedocs.io/">https://strauss.readthedocs.io/</a></td></tr><tr><td>p5 Sound / p5.js</td><td></td><td></td><td></td></tr><tr><td>Tone.js (JavaScript)</td><td></td><td></td><td></td></tr><tr><td>Erie</td><td><em>A "declarative grammar for data sonification," consisting of</em> <a href="https://see-mike-out.github.io/erie-editor/"><em>Erie for Web</em></a> <em>and</em> <a href="https://see-mike-out.github.io/erie-documentation/"><em>Erie.js</em></a><em>. 
Erie for Web allows users to generate sonification design specifications that can be played back in an online editor.</em></td><td><strong>Level</strong>: <mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Web browser, dev environment<br><strong>Output Formats</strong>: Web-embedded audio with extension, Player API<br><strong>Documentation</strong>: <a href="https://see-mike-out.github.io/erie-editor/">Erie Editor</a>, <a href="https://see-mike-out.github.io/erie-documentation/">Erie Documentation</a>, <a href="https://see-mike-out.github.io/erie-editor/paper/">Research Paper</a>, <a href="https://github.com/see-mike-out/erie-web">GitHub</a></td><td><a href="https://see-mike-out.github.io/erie-editor/">https://see-mike-out.github.io/erie-editor/</a></td></tr><tr><td>FoxDot</td><td></td><td></td><td></td></tr><tr><td>SuperCollider</td><td></td><td></td><td></td></tr><tr><td>Pure Data</td><td></td><td></td><td></td></tr><tr><td>Csound</td><td></td><td></td><td></td></tr><tr><td>ChucK</td><td><em>A programming language for real-time sound synthesis and music creation. Developed by a</em> <a href="https://chuck.stanford.edu/doc/authors.html"><em>team</em></a> <em>at Stanford's CCRMA. 
The web-based version is</em> <a href="https://chuck.stanford.edu/webchuck/"><em>WebChucK</em></a><em>.</em></td><td><strong>Level</strong>: <mark style="color:yellow;"><strong>Intermediate</strong></mark><br><strong>Platform</strong>: Dev environment<br><strong>Output Formats</strong>: Real-time audio (DAC output), WAV, or data types<br><strong>Documentation</strong>: <a href="https://chuck.stanford.edu/doc/">ChucK documentation</a>, <a href="https://github.com/ccrma/chuck">GitHub</a>, <a href="https://youtube.com/playlist?list=PL-9SSIBe1phI_r3JsylOZXZyAXuEKRJOS&#x26;si=Wq8jlpGHf0i8KDax">"Creating Electronic Music with ChucK"</a> tutorial series on YouTube, <a href="https://chuck.stanford.edu/doc/examples/">Examples</a>, <a href="https://ccrma.stanford.edu/courses/220a-fall-2018/homework/1/">sonification homework</a> from Computer-Generated Sound course at Stanford, ChucK <a href="https://chuck.stanford.edu/community/">community</a>.</td><td><a href="https://chuck.stanford.edu/">https://chuck.stanford.edu/</a></td></tr><tr><td>Max / MSP ($)</td><td></td><td></td><td></td></tr><tr><td>Manifest Audio Sonification Bundle (Ableton $)</td><td></td><td></td><td></td></tr></tbody></table>
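A recurring task across the Python tools above (and a reason audiolazy appears in Matt Russo's tutorial) is converting between frequencies in Hz and MIDI note numbers. The underlying math is a small, standard formula; the sketch below is self-contained, does not depend on any of the packages listed, and uses illustrative function names.

```python
import math

# MIDI note 69 is A4 (440 Hz by convention); each semitone is a factor of 2**(1/12).
def freq_to_midi(freq_hz, a4=440.0):
    """Convert a frequency in Hz to a (possibly fractional) MIDI note number."""
    return 69 + 12 * math.log2(freq_hz / a4)

def midi_to_freq(midi_note, a4=440.0):
    """Convert a MIDI note number back to a frequency in Hz."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(round(freq_to_midi(261.63)))  # middle C (C4) -> MIDI note 60
```

Keeping the fractional part of `freq_to_midi` preserves microtonal detail; rounding it quantizes the data to the nearest equal-tempered semitone, which is what MIDI-file tools like MIDIUtil expect.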

***

## Audio Editing Tools

### Audio Editing Software

<table><thead><tr><th width="175.9765625">Audio Editor</th><th width="334.15234375">Description</th><th>URL</th></tr></thead><tbody><tr><td>GarageBand</td><td>GarageBand is a digital audio workstation (DAW) that is included with macOS.</td><td><a href="https://www.apple.com/mac/garageband/">https://www.apple.com/mac/garageband/</a></td></tr><tr><td>Logic Pro X</td><td>Logic Pro X is a more advanced digital audio workstation (DAW) for macOS. ($)</td><td><a href="https://www.apple.com/logic-pro/">https://www.apple.com/logic-pro/</a></td></tr><tr><td>Signal</td><td>Signal is an open-source online MIDI editor.<br><br>It allows users to apply an instrument and effects to MIDI values, and export as MP3 or WAV.</td><td><a href="https://signal.vercel.app/edit">https://signal.vercel.app/edit</a></td></tr><tr><td>Online Sequencer</td><td><p>Online Sequencer is an open-source online MIDI and audio editor.</p><p><br>It allows users to import MIDI or audio, edit/customize, and export as MP3, WAV, or MIDI.</p></td><td><a href="https://onlinesequencer.net/">https://onlinesequencer.net/</a></td></tr><tr><td>Audacity</td><td>Audacity is free, downloadable software for recording and editing audio. Compatible with Windows, macOS, and Linux.</td><td><a href="https://www.audacityteam.org/">https://www.audacityteam.org/</a></td></tr><tr><td>REAPER</td><td></td><td><a href="https://www.reaper.fm/">https://www.reaper.fm/</a></td></tr><tr><td>Ableton Live</td><td></td><td><a href="https://www.ableton.com/en/live/">https://www.ableton.com/en/live/</a></td></tr><tr><td>Adobe Audition</td><td></td><td><a href="https://www.adobe.com/products/audition.html">https://www.adobe.com/products/audition.html</a></td></tr><tr><td>DaVinci Resolve (Fairlight)</td><td></td><td><a href="https://www.blackmagicdesign.com/products/davinciresolve">https://www.blackmagicdesign.com/products/davinciresolve</a></td></tr><tr><td>iZotope</td><td>iZotope offers a suite of audio editing products.</td><td><a href="https://www.izotope.com/en/products">https://www.izotope.com/en/products</a></td></tr><tr><td>Studio One</td><td></td><td><a href="https://www.presonus.com/pages/studio-one-pro">https://www.presonus.com/pages/studio-one-pro</a></td></tr><tr><td>Ocenaudio</td><td></td><td><a href="https://www.ocenaudio.com/">https://www.ocenaudio.com/</a></td></tr><tr><td>Auphonic</td><td></td><td><a href="https://auphonic.com/">https://auphonic.com/</a></td></tr><tr><td>Soundtrap</td><td></td><td></td></tr><tr><td>Soundation</td><td></td><td></td></tr></tbody></table>

***

## Audio Sample Resources

<details>

<summary>FreeSound.org</summary>

</details>

<details>

<summary>99Sounds</summary>

</details>

<details>

<summary>freeSFX</summary>

</details>

<details>

<summary>Partners In Rhyme</summary>

</details>

***

