In the World of Grains – Part 3
(contains embedded video)
The first time I came upon Sonic Pi was when I took my first steps into the world of the Raspberry Pi single-board computer. Sonic Pi ships with the little computer's operating system, but it is available for Windows and macOS as well.
Sonic Pi is meant to be an instrument for live coding, but thanks to its easy syntax it is also well suited for anyone taking their first steps into coding sound in general.
In matters of granular sound processing, the functionality of the language includes:
generating grains (down to a few milliseconds) from a sampled sound
giving these grains an ADSR envelope
changing the playback speed as well as the playback direction (forward – backward)
changing a grain's pitch – but only across a range of plus/minus 2 octaves, and only in semitone steps
randomising the playback start
and even a (quite basic and limited) way to do some granular processing in real time.
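To give an impression of how these capabilities look in practice, here is a minimal sketch (my own, not from any tutorial) to be run inside Sonic Pi; :loop_amen is one of Sonic Pi's built-in samples:

```ruby
# Sonic Pi sketch: chop a built-in sample into short grains.
live_loop :grains do
  pos = rrand(0.0, 0.95)          # randomised playback start
  sample :loop_amen,
    start: pos,
    finish: pos + 0.02,           # a grain of a few dozen milliseconds
    rate: choose([1, -1]),        # forward or backward playback
    rpitch: rrand_i(-24, 24),     # pitch shift in semitone steps, +/- 2 octaves
    attack: 0.005, release: 0.01  # a simple envelope on each grain
  sleep 0.05
end
```

Every pass through the loop fires one grain, so the sleep time sets the grain density.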
There is a bunch of really useful and well-made documentation, including the Sonic Pi tutorial by Sam Aaron, the e-book “Essentials - Code Music with Sonic Pi” by the same author, and some informative forums.
SuperCollider is a whole programming environment, not just a computer language. Its first version was released in 1996 by James McCartney. SuperCollider consists of three main components:
the real-time audio server (scsynth)
the actual computer language (sclang)
a text editor (scide) including a help system.
It is open-source (released under the GPLv2 in 2002) and multi-platform. Its most up-to-date version (as of May 2020) is 3.11.0.
A lot of influential composers have used it to build their own applications, not only for granular sound processing. Indeed, I don't know of any composer who was or is involved in granular sound processing and didn't or doesn't use SuperCollider more or less often – and I know “a few” of these great musicians.
So let me dive a bit deeper into the SuperCollider system here.
Two of the three components form a client-server structure: you write the code and tell SuperCollider what it shall do. This happens in sclang, which acts as the client. These coded demands of the user/programmer are then sent to scsynth, which acts as the server. Here the sounds are generated and the music is made audible.
The communication between the client/user/programmer (sclang) and the server (scsynth) follows the OSC (Open Sound Control) protocol (developed by Adrian Freed and Matt Wright), a “protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology” (from: The OpenSound Control OSC, see resources). This means that you can use any device to make use of the sclang component: diverse controllers, other synthesizers, another (a second, a third …) computer, even other instances of SuperCollider.
You see the networking abilities, don´t you?
But don´t worry: this communication between client and server is going on more or less automatically “behind the scenes”. The user can focus on setting up their code, their music, their demands concerning the sound (see linked video for more).
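To illustrate how invisible this client-server traffic is (a minimal sketch, evaluated line by line in the SuperCollider IDE):

```supercollider
// sclang (client) side: boot the scsynth server, then ask it to play a tone.
s.boot;                               // start the audio server (scsynth)

{ SinOsc.ar(440, 0, 0.1) ! 2 }.play;  // the client compiles this function into
                                      // a synth definition and sends it to the
                                      // server via OSC, behind the scenes

s.freeAll;                            // tell the server to free all running synths
```

Not a single OSC message has to be written by hand; sclang takes care of the whole exchange.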
At the heart of SuperCollider are the so-called “unit generators” (UGens), functions you can call to generate, modulate, and process sound. There are more than 250 of them in the current version 3.11.0 (as of May 2020), divided into the following groups:
periodic and aperiodic sources
delays and buffer manipulations
control envelopes, triggers, counters, gates, lags, decays
You see, there is a whole group (group 7) of functions concerned with granular synthesis.
SuperCollider is equipped with sophisticated technology for processing sound stored in a buffer, but it can deal with real-time granular synthesis as well.
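As an illustration of buffer-based granulation, here is a small sketch (my own) using the GrainBuf unit generator and one of the sound files bundled with SuperCollider; evaluate it block by block after booting the server:

```supercollider
// load a bundled sound file into a buffer
b = Buffer.read(s, Platform.resourceDir +/+ "sounds/a11wlk01.wav");

// granulate the buffer: 20 grains per second, 100 ms each,
// with the read position slowly wandering through the file
(
{
    var trig = Impulse.kr(20);                  // grain trigger
    var pos  = LFNoise1.kr(0.2).range(0.0, 1.0); // wandering read position
    GrainBuf.ar(2, trig, 0.1, b, 1, pos) * 0.5
}.play;
)
```

Swapping the trigger rate, grain duration, or the position signal already yields very different granular textures.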
For those of you who want to start with SuperCollider, there is a great series of 24 tutorials by Eli Fieldsteel on YouTube.
In the next part I am going to introduce the most important computer languages with graphical user interfaces – most important for specifically coding sound processing. Let me repeat: I am not going to talk about “unspecific”, universal computer languages like C, C++, C#, Java etc. in this series. We can see how mighty Java can get even for coding DSP and sound processing in general when we look at Cherry Audio's Voltage Module Designer, which is based on Java. There's an interview I did with Andrew Macaulay, one of the Voltage Modular module producers. Learn more about this here: https://www.dev.rofilm-media.net/node/347 and here: https://www.dev.rofilm-media.net/node/360.
To be continued.
to part 1: ("A Short History of Granular Synthesis - Part 1"):https://www.dev.rofilm-media.net/node/340
to part 2: ("A Short History of Granular Synthesis - Part 2"): https://www.dev.rofilm-media.net/node/342
to part 3: ("A Short History of Granular Synthesis - Part 3"): https://www.dev.rofilm-media.net/node/346
to part 4: ("A Short History of Granular Synthesis - Part 4"): https://www.dev.rofilm-media.net/node/356
to part 5 ("In the World of Grains - Part 1"): https://www.dev.rofilm-media.net/node/364
to part 6 ("In the World of Grains - Part 2"): https://www.dev.rofilm-media.net/node/373
to "In the World of Grains" part 4: https://www.dev.rofilm-media.net/node/385
to "In the World of Grains" part 5: https://www.dev.rofilm-media.net/node/390
to "In the World of Grains" part 6: https://www.dev.rofilm-media.net/node/398
to "In the World of Grains" part 7: https://www.dev.rofilm-media.net/node/407
to "In the World of Grains" part 8: https://www.dev.rofilm-media.net/node/414
to "In the World of Grains" part 9: https://www.dev.rofilm-media.net/node/421