In the background I have worked on Baken (beacon), which will be a permanent light installation mounted on the new building of the artist incubator space Bajesdorp. This is still a work in progress as I am gathering the financial support to realise this project. However, I have worked on prototypes and first explorations with this new kind of light instrument.
The biggest achievement of 2023 is the completion of the new Bajesdorp: a housing cooperative and artist incubator that over the course of 2023 has grown from its foundations into a finished and delivered building 20 metres high. At the end of 2023 and the beginning of 2024, I have been busy finalising the inside of my new home and atelier.
In June 2023 we founded a new association, Begane Grond Bajesdorp, which will run the public spaces on the ground floor. A crowdfunding campaign has just started in February to kickstart GROND.
In 2024, I will be working on one or two new works with kites - so stay tuned for that!
photo by Felicity van Oort
Also in 2022, I had the opportunity to return to my kite projects and developed a new version of V.L.I.G., adapting the work into a video installation with footage recorded at the beach of Tainan. That work is still on show at Siao-Long Cultural Park. In addition, I developed a workshop on creating musical instruments from kites, and presented the performance Wind Instrument again.
In 2023, I will focus on the work Baken (beacon), which will be a permanent light installation mounted on the new building of the artist incubator space Bajesdorp. The light installation will be driven by locally measured environmental data and will over time make visible the change in the local climate. This work brings many new challenges with it: making a weatherproof, durable installation, composing for multicolour lights, and working with a combination of realtime and prerecorded data from a database to find relations between the now, yesterday, and the further past. I am currently working on assembling a team, writing funding applications to be able to realise the work, and building prototypes to try things out.
virtual model of the light art work in Blender (Luuk Meints)
At the same time, I am involved in the new project of Frouke Wiarda, The Turbine Plays. This project deals with the energy transition, mainly focusing on wind turbines. Some further concepts for new works in a series of “Beacons of Transition” are taking shape in my head, and through the research for this work I will see how these ideas develop.
Some time this year, a new edition of “The SuperCollider Book” will also come out with MIT Press, featuring updated versions of two chapters: “Ins and Outs: SuperCollider and External Devices” and “Spatialization with SuperCollider” that I (co-)authored for the book.
I have also taken the opportunity to clean up my website a little: at the top you now only see a few highlighted projects, and if you click on the header or the menu item at the top, you get to the full list. This will hopefully make it easier to see what I am currently working on, while still preserving the overview of all the projects I have done over the past 10 years or so.
In this post, I will describe how the sound and light composition of N-Polytope is structured. For this I will start with the physical components, describe the sound synthesis and light synthesis algorithms, and then go to the bigger picture of the composition with these instruments and how the machine learning algorithms are used.
N-Polytope is a re-imagination of Xenakis’s Polytope works. For this work we were inspired to use similar steel cable structures that look like curved planes, folded into each other. For each location where N-Polytope is presented, the steel cable structure is adapted to the space where the work is exhibited, and we try to create a connection with the architecture of the space.
On the steel cable structures, modules are mounted that generate light and sound and also measure light and sound. The measurements are sent to a central computer which runs various machine learning algorithms over the course of the 14-minute composition. The timeline of the composition is determined by an additional ‘fixed media’ soundtrack composed by Chris Salter and Adam Basanta. This soundtrack is played over four broad-range speakers situated around the space and two subwoofers.
The modules on the steel cable structures consist of an Atmega328p microcontroller, an XBee wireless module, three LDRs (light sensors), a microphone, an ATTiny841 that handles the sound synthesis with a DAC, a small amplifier and a speaker, and LEDs.
The Atmega328p is the core of the module and handles wireless communication (via the XBee) with the main computer, measurements of the LDRs, amplitude tracking of the microphone, communication with the ATTiny841 to control the sound synthesis, and pulse width modulation patterns for the LEDs.
The ATTiny841 is programmed with a fixed sound synthesis patch of three wavetable oscillators, where one oscillator controls the amplitude of the third one, and another the phase of the third one. After the third oscillator there is a 2nd-order filter, and the result is sent to a DAC, connected to a small amplifier and the speaker of the module.
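To illustrate the topology of this patch: the actual firmware runs in C on the ATTiny841, so this Python sketch only shows the structure, and the sample rate, table size, modulation depths and filter coefficients are assumptions.

```python
import math

# Sketch of the fixed synthesis patch (not the actual ATTiny841 firmware):
# osc1 modulates the amplitude of osc3, osc2 modulates its phase,
# and the result passes through a simple 2nd-order (biquad) filter.

SAMPLE_RATE = 8000   # assumed sample rate for the sketch
TABLE_SIZE = 256
SINE_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def wavetable(phase):
    """Look up a value in the sine wavetable for a phase in [0, 1)."""
    return SINE_TABLE[int(phase * TABLE_SIZE) % TABLE_SIZE]

def render(freq1, freq2, freq3, coeffs, n_samples):
    """Render n_samples of the three-oscillator patch followed by a biquad filter."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    p1 = p2 = p3 = 0.0
    out = []
    for _ in range(n_samples):
        amp_mod = 0.5 * (1.0 + wavetable(p1))      # osc1 -> amplitude of osc3
        phase_mod = 0.25 * wavetable(p2)           # osc2 -> phase of osc3
        sample = amp_mod * wavetable((p3 + phase_mod) % 1.0)
        # 2nd-order filter (direct form I biquad)
        y = b0 * sample + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, sample
        y2, y1 = y1, y
        out.append(y)
        p1 = (p1 + freq1 / SAMPLE_RATE) % 1.0
        p2 = (p2 + freq2 / SAMPLE_RATE) % 1.0
        p3 = (p3 + freq3 / SAMPLE_RATE) % 1.0
    return out
```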
Each oscillator has parameters for its waveform, its frequency, and its duration.
For the waveform, only the third oscillator can be a noise generator (not a wavetable then), and only the first two oscillators can be DC (so not a waveform, but a fixed value).
The synthesizer can be triggered with a wireless message from the computer.
For the frequency and duration parameters you can set a base value and a range within which a random value is chosen upon triggering the synthesis; this value is added to the base value. There are three modes for using the random range: no randomness (just use the base value for the parameter), choosing the random value once when the setting is sent to the synthesizer, and choosing a new random value each time the synthesizer is triggered.
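A minimal sketch of how such a parameter could be resolved; the mode names and the RandomisedParameter helper are hypothetical, not the names used in the firmware.

```python
import random
from enum import Enum

class RandomMode(Enum):
    NONE = 0        # just use the base value
    ON_SET = 1      # choose the random offset once, when the preset is sent
    ON_TRIGGER = 2  # choose a new random offset at every trigger

class RandomisedParameter:
    """Base value plus a random offset within a range (hypothetical helper)."""
    def __init__(self, base, rand_range, mode):
        self.base = base
        self.rand_range = rand_range
        self.mode = mode
        # for ON_SET, the offset is fixed at the moment the preset is received
        self.fixed_offset = random.uniform(0, rand_range) if mode == RandomMode.ON_SET else 0.0

    def value(self):
        """Resolve the parameter value for one trigger of the synthesizer."""
        if self.mode == RandomMode.NONE:
            return self.base
        if self.mode == RandomMode.ON_SET:
            return self.base + self.fixed_offset
        return self.base + random.uniform(0, self.rand_range)  # ON_TRIGGER

# e.g. a base frequency of 400 plus up to 200, re-randomised at every trigger
freq = RandomisedParameter(400, 200, RandomMode.ON_TRIGGER)
print(freq.value())
```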
For controlling the lights, a synthesis approach is also used: the computer sends parameters for the light pattern and then sends triggers to start the sequence.
For each LED there is an oscillator with its own set of parameters.
Also here the frequency and duration can be set with a base value and a range for a random value to be added, with the same random modes as for the sound synthesis.
The microcontroller is also reading out the microphone data at audio rate and doing a very simple envelope following on the measured value.
And the microcontroller reads out the three LDRs at a lower rate (1/6th of the audio rate): it switches between reading the microphone signal and one of the LDRs, giving the sequence microphone - LDR 1 - microphone - LDR 2 - microphone - LDR 3, and so on. This sequence ensures that the microphone is read out at a constant sample interval. For the LDRs the speed is less important, as the light levels change on a much slower time scale.
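A schematic sketch of this interleaved read-out order; read_adc and the channel numbers are stand-ins for the actual ADC code on the microcontroller.

```python
def read_adc(channel):
    """Placeholder for the actual ADC read on the microcontroller."""
    return 0

MIC = 0
LDRS = [1, 2, 3]

def sample_loop(n_steps):
    """Alternate between the microphone and one of the three LDRs.

    The loop runs at the ADC ('audio') rate: the microphone is read every
    other step, at a constant interval, while each LDR is read once every
    six steps (1/6th of the rate).
    """
    ldr_index = 0
    mic_samples, ldr_samples = [], {ch: [] for ch in LDRS}
    for step in range(n_steps):
        if step % 2 == 0:
            mic_samples.append(read_adc(MIC))
        else:
            ch = LDRS[ldr_index]
            ldr_samples[ch].append(read_adc(ch))
            ldr_index = (ldr_index + 1) % len(LDRS)
    return mic_samples, ldr_samples
```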
Finally, the microcontroller handles the communication with the main computer: it sends out the sensor data at a regular interval, and continuously listens for incoming messages that set the parameters for the light and sound synthesis or trigger the LEDs or sound.
The sound synthesis microcontroller also reports its calculated amplitude, which is sent along with the sensor data. Reading this communication also serves as a check that the ATTiny841 is still up and running: if there is no communication for a while, the Atmega328p will reset the ATTiny841.
In making these modules, there were a number of considerations.
For the microcontroller I had to negotiate tradeoffs between its processing power and memory size, the bandwidth of the wireless communication, and having a flexible system for experimentation and composition. Changing and updating the firmware is a tedious process that involves opening all the housings, taking out the microcontroller board, uploading the firmware, and putting the board back. Not a quick task with around 50 units, mounted on steel cables and often only reachable with a ladder or something similar.
We wanted to have a system with many modules - creating actual sound sources that are spread out over the space, rather than using virtual sound sources in multi-speaker sound spatialisation setups. So we chose to do the sound synthesis locally and send over parameters: hence no need for complex cabling and sound cards with many channels. Of course, having a fixed sound synthesis algorithm limits the range of possible sounds, but we found that with the designed algorithm we could reach a large number of different types of sounds: from very resonant ‘tinkling’ sounds to noisy, snoring-like sounds. And even though each individual module only has a 1W amplifier with a small speaker, all together the system can reach a good level of loudness.
Adding randomness to the frequency and duration of the sounds is a way of implementing variation: it allows us to use the same parameter preset for all modules, while still getting a variety of sounds. The base value plus a random value within a certain range is also akin to Xenakis’ concept of tendency masks - ranges within which parameters of sound vary over the course of time.
Using a synthesis approach for the light is also reminiscent of Xenakis’ concept of viewing light as something that happens over time: the light synthesis parameters describe how the light behaves within a short time window from its triggering.
The modules implement the possible micro-behaviours for the light and sound. The presets for these are then sent from the central computer. This leaves the freedom to try out new presets when the system is set up in a space, without having to reprogram the modules, and at the same time it limits the amount of communication needed to the modules (since wireless bandwidth is limited and wired communication would require a more complex setup and a lot more cables).
As mentioned above, the compositional structure consists of a fixed media soundtrack and a score linked to it, which starts and stops the machine learning algorithms, sets the presets for the micro-behaviour of light and sound, and determines which modules are active.
While composing the work, we (Sofian Audry, Chris Salter and myself) discussed the type of algorithms we would use: Sofian would program the algorithms using his machine learning library Qualia, and I would choose what type of presets for the micro-behaviour of the light and sound would fit with the algorithm and with the soundtrack. Then we would look at the behaviour of the algorithm and tune the parameters for the machine learning algorithm and the presets. The score also determines which modules are active at a given time: we make interventions on where the machine learning algorithms are active and where not, to create a spatial dramaturgy for the work.
The machine learning algorithms take different types of inputs: sensor data from the structure (light and sound levels) is available, and for some of the algorithms we use other metrics as well.
Some of the algorithms (booster and chaser) are based on reinforcement learning: the algorithm makes an observation (e.g. of the light and sound levels), determines an action to take, and then gets a reward which is calculated from a subsequent observation. The reward is simply a function of the values given in the observation: a mathematical formula. The algorithm then attempts to get the largest reward possible over time. It also has a parameter for how ‘curious’ or ‘exploratory’ it is: how likely it is to try a completely different action in the hope that it yields an even higher reward than previous actions did.
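As a generic illustration of such a loop - this is not the Qualia code, and the bandit-style value update, function names and parameters are assumptions - a minimal sketch:

```python
import random

def choose_action(q_values, epsilon):
    """Pick a random action with probability epsilon ('curiosity'),
    otherwise the action with the highest estimated reward."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

def run(env_observe, env_act, reward_fn, n_actions, steps, epsilon=0.1, alpha=0.1):
    """Minimal observe-act-reward loop: observe, act, receive a reward computed
    from the next observation, and update the estimated value of the chosen action."""
    q_values = [0.0] * n_actions
    for _ in range(steps):
        observation = env_observe()        # e.g. light and sound levels
        action = choose_action(q_values, epsilon)
        env_act(action)                    # e.g. trigger light or sound
        reward = reward_fn(env_observe())  # a mathematical formula over the next observation
        q_values[action] += alpha * (reward - q_values[action])
    return q_values
```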
For all of the algorithms we send a trigger signal to calculate the next step of the calculation, which means that we can vary the speed of the algorithms.
Firefly
In the firefly algorithm each LED has three states: it can be flashing for a certain amount of time, after which it will be ‘blind’ for a while, ignoring its environment. If it is not flashing or blind, the LED measures the amount of incoming light and compares that with the average amount of light over the past time (calculated from a moving average). When the measurement is above the threshold, a power variable is increased and the firefly becomes blind for a while.
The power variable is compared to a second threshold, and if it exceeds that threshold the LED starts flashing. Once it has flashed for the set flash time, it resets the blind time and the power variable and goes back to the idle state.
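A sketch of this state machine for a single LED; the thresholds, timing values and the moving-average window are assumptions, not the values used in the piece.

```python
from collections import deque

class Firefly:
    """Illustrative sketch of one LED's firefly behaviour (not the production code)."""
    def __init__(self, flash_time, blind_time, light_threshold, power_threshold, avg_window=50):
        self.flash_time = flash_time             # how long a flash lasts (in steps)
        self.blind_time = blind_time             # how long the LED ignores its environment
        self.light_threshold = light_threshold   # excess over the moving average that counts as 'light seen'
        self.power_threshold = power_threshold   # power level at which flashing starts
        self.history = deque(maxlen=avg_window)  # window for the moving average of incoming light
        self.power = 0.0
        self.flash_timer = 0
        self.blind_timer = 0

    def step(self, light_level):
        """Advance one step; returns True while the LED is flashing."""
        if self.flash_timer > 0:                 # flashing state
            self.flash_timer -= 1
            if self.flash_timer == 0:            # flash done: reset power, go blind, then back to idle
                self.power = 0.0
                self.blind_timer = self.blind_time
            return True
        if self.blind_timer > 0:                 # blind state: ignore the environment
            self.blind_timer -= 1
            return False
        # idle state: compare incoming light with its moving average
        self.history.append(light_level)
        average = sum(self.history) / len(self.history)
        if light_level > average + self.light_threshold:
            self.power += 1.0
            self.blind_timer = self.blind_time
        if self.power > self.power_threshold:
            self.flash_timer = self.flash_time
        return False
```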
Drunk
The drunk algorithm is a kind of random walk: the value for the LED intensity is a weighted sum of random walks of a parameter for the overall grid, for each line, for each node, and for the individual LED.
For each parameter, at each update a small random amount is added or subtracted. Then the total value is calculated by adding the values up with a certain weight for how much influence each of the parameters has. If the weight for the overall grid were 1 and the other weights 0, all LEDs would do the same thing. So the weights determine the amount of variation between the individual LEDs: from all the same to all completely different.
This algorithm uses no other inputs than the weights for the four parameters. These weights are changed over the course of the section.
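A sketch of one update step of this scheme; the step size and the mapping of LEDs to lines and nodes are placeholders, and the actual implementation may organise the walks differently.

```python
import random

def drunk_step(value, step_size=0.02):
    """Add or subtract a small random amount, keeping the value in [0, 1]."""
    return min(1.0, max(0.0, value + random.uniform(-step_size, step_size)))

def update_leds(grid, lines, nodes, leds, weights):
    """One update of the 'drunk' sketch: every LED intensity is a weighted sum of
    a random walk for the whole grid, its line, its node, and the LED itself.

    grid: one float; lines/nodes/leds: lists of floats (one walk per line/node/LED);
    weights: (w_grid, w_line, w_node, w_led), expected to sum to 1.
    """
    w_grid, w_line, w_node, w_led = weights
    grid = drunk_step(grid)
    lines = [drunk_step(v) for v in lines]
    nodes = [drunk_step(v) for v in nodes]
    leds = [drunk_step(v) for v in leds]
    intensities = []
    for i, led in enumerate(leds):
        line = lines[i % len(lines)]   # hypothetical mapping of LED index to line
        node = nodes[i % len(nodes)]   # hypothetical mapping of LED index to node
        intensities.append(w_grid * grid + w_line * line + w_node * node + w_led * led)
    return grid, lines, nodes, leds, intensities
```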
Booster
The booster algorithm has as many agents as there are modules. The agents have a neural network with 5 inputs, 8 hidden layers and 1 output. It uses an epsilon-decreasing policy for learning, which means that at the start the agent is more exploratory and later on the agent is more greedy.
The inputs to the network are the measured light value, the moving average of the light value, an energy parameter, and a timer. The energy slowly increases with time, and a certain amount of time after emitting light it also increases based on the light input. The reward is calculated from the emitted LED values, with some linearity built in that boosts the ‘right’ decisions and punishes ‘bad bursts’.
When the agent decides to emit light, the energy and the timer are reset to 0.
Chasers
In the chasers algorithm the spatial layout of the structure is taken into account. Each line is regarded as a one-dimensional space in which a chaser can move around. While moving, the chaser is rewarded for touching another chaser (being at the same position), for moving, or for staying in the same position. The position on the line is defined as the position of an LED, so if there are three modules with three LEDs each mounted on a steel cable, that line has 9 possible positions for a chaser to be. The position of the chaser is visualised by triggering the LED and auralised by triggering the sound of the module the LED belongs to. If the reward for the chaser is larger than 0.5 (the range is between 0 and 1), the parameter repeat is set for the light preset.
In the score, more and more chasers are added to each line. Initially the light preset is a short flash. Later on in the chaser section, the light preset is changed to a longer flickering light, creating more overall brightness in the space.
Note that the only input to this algorithm is the position of the other chasers - no sensor data is used.
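A rough sketch of a single chaser on one line: the reward values are illustrative, and the movement choice here is random, whereas in the actual work it is learned by the reinforcement learning agent.

```python
import random

class Chaser:
    """Sketch of one chaser on a line with a fixed number of LED positions."""
    def __init__(self, n_positions):
        self.n_positions = n_positions
        self.position = random.randrange(n_positions)

    def step(self, other_positions):
        """Move left, right, or stay; return the new position, an illustrative reward,
        and whether the 'repeat' flag of the light preset would be set."""
        move = random.choice([-1, 0, 1])   # the actual work learns this choice
        new_position = min(self.n_positions - 1, max(0, self.position + move))
        reward = 0.0
        if new_position in other_positions:   # touching another chaser
            reward += 0.6
        if new_position != self.position:     # moving
            reward += 0.3
        else:                                 # staying in the same position
            reward += 0.1
        self.position = new_position
        repeat = reward > 0.5                 # reward above 0.5 sets the repeat parameter
        return self.position, reward, repeat
```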
Genetic algorithm
The genetic algorithm was added in 2017 for the exhibition at the MAC in Montreal. The idea was that instead of setting the parameters for the presets of the light and sound synthesis directly, a genetic algorithm would try to mutate from its current state to the new target state of the parameters. In this way the algorithm attempts to replicate the ‘normal’ score, but takes a bit of time before it has found the right parameter set.
The parameter set is interpreted as a string of bits (1s and 0s), a binary chromosome. At each calculation step, the algorithm selects two members of the population, ‘mates’ them to create two new ‘children’, and selects the fitter of the two offspring, i.e. the one closest to the new target preset.
We created two instances of this algorithm: one with a population of the size of the number of nodes to approximate the sound presets, and one with a population the size of the number of LEDs to approximate the light presets.
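A minimal sketch of one calculation step, with an illustrative crossover, mutation and replacement policy; the original implementation may differ in all of these details.

```python
import random

def mate(parent_a, parent_b, mutation_rate=0.01):
    """Single-point crossover of two bit strings, plus a small chance of flipping each bit."""
    cut = random.randrange(1, len(parent_a))
    child1 = parent_a[:cut] + parent_b[cut:]
    child2 = parent_b[:cut] + parent_a[cut:]
    def mutate(bits):
        return [b ^ 1 if random.random() < mutation_rate else b for b in bits]
    return mutate(child1), mutate(child2)

def fitness(bits, target):
    """Closeness to the target preset: the number of matching bits."""
    return sum(1 for b, t in zip(bits, target) if b == t)

def step(population, target):
    """One step: pick two members, mate them, and keep the fitter of the two children,
    here replacing the least fit member of the population (illustrative policy)."""
    a, b = random.sample(population, 2)
    child1, child2 = mate(a, b)
    best_child = max(child1, child2, key=lambda c: fitness(c, target))
    worst = min(range(len(population)), key=lambda i: fitness(population[i], target))
    population[worst] = best_child
    return population
```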
The ‘genetic’ mode we thus created was then added as an alternative ‘ambient’ mode to play along with the ambient soundtrack.
Read the interview here: https://wonomute.no/interviews/marije-baalman/
This work is a sequel to Wezen - Gewording. The sound instruments that I use are similar to the ones I developed for that piece. The code base and the processing algorithms have been redeveloped for this new piece.
In this work I draw inspiration from several artists whose work I studied as part of writing the book Just a question of mapping:
In Wezen - Handeling there are two different layers of sound:
The controllers that I use are open gloves with five buttons each (one for each finger) and a nine-degrees-of-freedom sensor. The data from these gloves is transmitted via a wireless connection to the computer, where it is translated to Open Sound Control data and received by SuperCollider. The core hardware for the gloves is the Sense/Stage MiniBee.
The gloves were originally designed and created for a different performance, Wezen - Gewording, where the openness of the glove was a necessity, since I also needed to be able to type to livecode. The nine-degrees-of-freedom sensor was added to this glove later: in Wezen - Gewording I only made use of the built-in 3-axis accelerometer of the Sense/Stage MiniBee.
The gyroscope of the right controller is not working, so effectively that is a six-degrees-of-freedom sensor. I discovered this defect during the development of the work, and it motivated my choice to use flicks only with the left hand, and the right hand for continuous control.
To preprocess the data from the gyroscope, accelerometer and magnetometer, I first split the data into these three different streams (each having 3 axes), and do a range mapping with a bipolar spec: that is, I use a non-linear mapping from the integer input data range to a range between -1 and 1, in such a way that the middle point is at 0 and the resulting range feels linear with regard to the movements I make.
Then, for each of the three, I apply an exponential smoothing filter. In certain cases I use the difference output of this filter: the raw input value minus the smoothed filter output. This difference output is then processed with another exponential smoothing filter; in some cases this second filter has a different parameter for the signal becoming larger versus the signal becoming smaller.
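A sketch of this preprocessing for one axis: the coefficients are placeholders, and the piecewise-linear stand-in for the bipolar mapping does not capture the non-linear shape used in the actual piece, which runs in SuperCollider.

```python
def bipolar_map(raw, in_min, in_max, center):
    """Map an integer sensor value to [-1, 1] so that 'center' maps to 0
    (a simple piecewise-linear stand-in for the non-linear mapping in the piece)."""
    if raw >= center:
        return (raw - center) / (in_max - center)
    return (raw - center) / (center - in_min)

class SmoothAndDiff:
    """Exponential smoothing plus the difference between raw input and smoothed output.
    The difference is smoothed again, optionally with separate coefficients for
    rising and falling values (all coefficients here are assumptions)."""
    def __init__(self, alpha=0.1, diff_alpha_up=0.3, diff_alpha_down=0.1):
        self.alpha = alpha
        self.diff_alpha_up = diff_alpha_up
        self.diff_alpha_down = diff_alpha_down
        self.smoothed = 0.0
        self.smoothed_diff = 0.0

    def process(self, x):
        """Return the smoothed value and the smoothed difference for one input sample."""
        self.smoothed += self.alpha * (x - self.smoothed)
        diff = x - self.smoothed
        alpha = self.diff_alpha_up if diff > self.smoothed_diff else self.diff_alpha_down
        self.smoothed_diff += alpha * (diff - self.smoothed_diff)
        return self.smoothed, self.smoothed_diff
```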
To start and stop sounds, I am using a combination of flick detection and pose detection.
The pose selects the instrument that will be triggered. The pose detection algorithm makes use of prerecorded data of three different poses. The continuous smoothed data from the accelerometer and magnetometer is compared with the mean of the prerecorded data to determine whether the current value is within a factor of the measured variation of the prerecorded samples. If that is the case, the pose is matched.
The flick detection makes use of the difference of the gyroscope data with its smoothed version. If this difference crosses a certain threshold, an action is performed. Different thresholds can be set for different axes and different directions of the data. The detected pose is used as additional data to determine the action to trigger.
To avoid the flicking movement affecting the pose detection, pose detection is only enabled when the summed smoothed difference signal of the gyroscope data is below a certain value.
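A sketch of the pose matching and flick detection described above; the use of the standard deviation as the measure of variation, the threshold layout and all names are assumptions.

```python
import statistics

class PoseDetector:
    """Sketch: match the current smoothed sensor values against prerecorded poses.
    A pose matches when every axis lies within 'factor' times the measured
    standard deviation around the mean of the recorded samples."""
    def __init__(self, recorded_poses, factor=2.0):
        # recorded_poses: {name: list of samples, each sample a list of axis values}
        self.poses = {}
        for name, samples in recorded_poses.items():
            axes = list(zip(*samples))
            self.poses[name] = [(statistics.mean(a), statistics.stdev(a)) for a in axes]
        self.factor = factor

    def match(self, current):
        for name, stats in self.poses.items():
            if all(abs(x - mean) <= self.factor * std for x, (mean, std) in zip(current, stats)):
                return name
        return None

def detect_flick(gyro_diff, thresholds):
    """Sketch: a flick is detected when the (raw minus smoothed) gyroscope value
    crosses a per-axis, per-direction threshold."""
    for axis, value in enumerate(gyro_diff):
        up, down = thresholds[axis]
        if value > up:
            return (axis, +1)
        if value < -down:
            return (axis, -1)
    return None

def pose_enabled(gyro_smoothed_diff, quiet_threshold):
    """Pose detection is only enabled while the summed smoothed gyro difference is small."""
    return sum(abs(v) for v in gyro_smoothed_diff) < quiet_threshold
```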
Instruments are started, or launched, with flicks of the left hand when the palm of the hand is moved downwards (with the arm held horizontally forwards and the back of the hand pointing up). Other flicks (upward, and those rolling the wrist to the left or right) stop the instrument again.
The right hand data is used to control the different parameters of the sound:
The inputs for control are both the smoothed data and the difference with the smoothed data, from the accelerometer and the magnetometer.
For each instrument there are differences in how exactly the different parameters affect the sound. Some concepts that prevail are:
The buttons on both gloves are used to control the Lick Machine.
While the thumb button on the right hand glove is pressed, the data that is used for instrument control, along with the output data from the flick detector and the pose detector (at the moment of flicking), is recorded into the current lick buffer. At a button press, the buffer is first cleared and then recorded into for as long as the button is held.
In total there are 16 lick buffers. With the ring and pinky buttons of the right glove, the current lick buffer can be changed.
When the thumb button on the left hand glove is pressed, flicks with the left hand trigger playback of licks, rather than instruments. The pose and the flick axis and direction determine which lick will be played back. When the index button of the left glove is also pressed, the lick will play in looped mode; otherwise it will just play once and stop.
The playback of the licks can then be modulated:
When these three left glove buttons are pressed simultaneously, all three parameters are modulated at the same time.
The modulation only takes effect on the currently selected lick. If both the ring and pinky buttons of the right glove are pressed, the modulation affects all licks simultaneously.
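A rough sketch of the lick buffer logic described above; the class and method names are assumptions, and the actual implementation in the performance software will differ.

```python
class LickMachine:
    """Illustrative sketch of the lick buffer handling (names and structure assumed)."""
    def __init__(self, n_buffers=16):
        self.buffers = [[] for _ in range(n_buffers)]
        self.current = 0
        self.recording = False

    def start_recording(self):
        """Right-hand thumb pressed: clear the current buffer and start recording into it."""
        self.buffers[self.current] = []
        self.recording = True

    def stop_recording(self):
        """Right-hand thumb released: stop recording."""
        self.recording = False

    def record(self, frame):
        """Store one frame of control/flick/pose data while the button is held."""
        if self.recording:
            self.buffers[self.current].append(frame)

    def select(self, index):
        """Ring/pinky buttons of the right glove change the current buffer."""
        self.current = index % len(self.buffers)

    def play(self, index, loop=False):
        """Left-hand thumb plus flick triggers playback; the index button selects looped mode."""
        data = self.buffers[index % len(self.buffers)]
        return {"data": data, "loop": loop}
```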
Flowchart images created with the Mermaid live editor
In the interview I reflect on my history with livecoding, discuss code as an interface, and talk about the various projects that relate to livecoding.
And it seems that out of the work on the book, at least one new performance will evolve: a lecture performance on mapping. At the Faculty of Senses presentation in Copenhagen I performed a first version of this, and it is likely that it will evolve further.
Tinnitus: A work by Marije Baalman and Mariagiovanna Nuzzi
When it comes to hearing disabilities, Tinnitus might be one of the most common. In the Netherlands alone, an estimated 2 million people suffer from Tinnitus. It is a form of hearing damage, in which you continuously experience sound, even when there is no external sound. Tinnitus has many different causes, but the most common cause is (over)exposure to loud noise.
The tone people hear can vary from beeping sounds and flute tones to whizzing and crackling. Every form of Tinnitus is unique. People who have Tinnitus experience the sound they hear non-stop. In most cases, the sound is not only non-stop, but also predominant, even in the presence of external sound. As a side effect, people with Tinnitus often suffer from psychological issues as well.
In 2011, Marije Baalman and Mariagiovanna Nuzzi created a project called Tinnitus, or My little acufene. Baalman describes the work as a half-hour composition exploring the sound world of people with hearing disorders. The sound design in the piece is based on an interview with someone who suffers from tinnitus, describing the sound to Baalman and comparing it to the synthesized sound that Baalman has created. The person also discusses the social and psychological consequences of suffering from hearing loss and hearing a ringing sound all the time.
The public can hear the sound, but the interviewed person can only partly hear it. This is because tinnitus is a sound perception produced by the brain itself to compensate for the damaged part of the hearing range: the sound frequencies the sufferer misses.
Tinnitus is often a lonely disease, because one cannot see from the outside that someone suffers from it. What makes it even more difficult for sufferers is that Tinnitus is incurable: people with Tinnitus have to deal with hearing sound all the time. This is why Baalman’s work is an interesting piece: it gives the listener a look into the world of someone who suffers from Tinnitus, a glimpse into what it is like to have this kind of hearing damage.