<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.3.1">Jekyll</generator><link href="http://localhost:4000/feed.xml" rel="self" type="application/atom+xml" /><link href="http://localhost:4000/" rel="alternate" type="text/html" /><updated>2026-03-31T19:49:44+02:00</updated><id>http://localhost:4000/feed.xml</id><title type="html">Marije Baalman</title><subtitle>artist and researcher/developer based in Amsterdam - working with sound, light and interaction</subtitle><entry><title type="html">The SuperCollider Book - 2nd edition</title><link href="http://localhost:4000/2025/04/25/supercolliderbook-v2.html" rel="alternate" type="text/html" title="The SuperCollider Book - 2nd edition" /><published>2025-04-25T00:00:00+02:00</published><updated>2025-04-25T00:00:00+02:00</updated><id>http://localhost:4000/2025/04/25/supercolliderbook-v2</id><content type="html" xml:base="http://localhost:4000/2025/04/25/supercolliderbook-v2.html"><![CDATA[<p>The second edition of the SuperCollider Book is published on April 29th, 2025, by MIT Press.</p>

<p>This edition features updates to two chapters I had also contributed to the first edition: <em>Ins and Outs: SuperCollider and External Devices</em> and <em>Spatialization with SuperCollider</em>.</p>

<p>And of course many more updates and new chapters by all the other contributors to the book!</p>

<p>Thanks to the editors Scott Wilson, David Cottle and Nick Collins.</p>

<!--more-->

<p><strong>The SuperCollider book</strong></p>

<p><em>published by MIT Press</em></p>

<p>Description from MIT:</p>

<p>A comprehensive update of the essential reference to SuperCollider, with new material on machine learning, musical notation and score making, SC Tweets, alternative editors, parasite languages, non-standard synthesis, and the cross-platform GUI library.</p>

<p>SuperCollider is one of the most important domain-specific audio programming languages, with wide-ranging applications across installations, real-time interaction, electroacoustic pieces, generative music, and audiovisuals. Now in a comprehensively updated new edition, The SuperCollider Book remains the essential reference for beginners and advanced users alike, offering students and professionals a user-friendly guide to the language’s design, syntax, and use. Coverage encompasses the basics as well as explorations of advanced and cutting-edge topics including microsound, sonification, spatialization, non-standard synthesis, and machine learning.</p>

<p>Second edition highlights:</p>

<ul>
  <li>New chapters on musical notation and score making, machine learning, SC Tweets, alternative editors, parasite languages, non-standard synthesis, SuperCollider on small computers, and the cross-platform GUI library</li>
  <li>New tutorial on installing, setting up, and running the SuperCollider IDE</li>
  <li>Technical documentation of implementation and information on writing your own unit generators</li>
  <li>Diverse artist statements from international musicians</li>
  <li>Accompanying code examples and extension libraries</li>
</ul>]]></content><author><name></name></author><category term="sound" /><category term="spatial" /><category term="interaction" /><summary type="html"><![CDATA[The second edition of the SuperCollider Book is published on April 29th, 2025, by MIT Press. This edition features updates to two chapters I had also contributed to the first edition: Ins and Outs: SuperCollider and External Devices and Spatialization with SuperCollider. And of course many more updates and new chapters by all the other contributors to the book! Thanks to the editors Scott Wilson, David Cottle and Nick Collins.]]></summary></entry><entry><title type="html">Mention in the Volkskrant in Rewire review</title><link href="http://localhost:4000/2025/04/08/npolytope-in-volkskrant.html" rel="alternate" type="text/html" title="Mention in the Volkskrant in Rewire review" /><published>2025-04-08T00:00:00+02:00</published><updated>2025-04-08T00:00:00+02:00</updated><id>http://localhost:4000/2025/04/08/npolytope-in-volkskrant</id><content type="html" xml:base="http://localhost:4000/2025/04/08/npolytope-in-volkskrant.html"><![CDATA[<p>Dutch national newspaper the <a href="https://www.volkskrant.nl/muziek/rewire-festival-maakt-op-artistiek-gebied-iedere-verwachting-waar-maar-de-organisatie-laat-te-wensen-over~b9d045f7/">Volkskrant reviewed the Rewire festival</a> and has much praise for <em>N-Polytope</em>, while being quite critical of the organisation of the Rewire festival:</p>

<p>Translation by me (original Dutch below):</p>

<p><em>The lucky ones who obtained a ticket [to the festival] get their wristbands in the foyer of the concert hall Amare. Behind the cash counters, the first disappointment awaits. And no, not an artistic one. The show N-Polytope by Marije Baalman and Chris Salter is a multimedia spectacle that buzzes around you from four speakers.</em></p>

<p><em>The artwork seems to be electrified: music tingles in the ears, LED lamps flicker on highly tensioned cables. With N-Polytope, Baalman and Salter want to reflect on early electronic work by the Greek composer Iannis Xenakis. That is no misplaced bluff: their piece is a sparkling answer to the cosmic works that Xenakis composed in the 1970s with UPIC, the computer program of his own invention.</em></p>

<p><em>But why N-Polytope is presented on the staircase, right in front of the entrance, is a mystery. If any work needs a quiet, dark room, it is this one. In broad daylight people walk around, exchanging chats and hugs. It is like playing mikado in a force-12 gale: however powerful N-Polytope is, little of it remains standing.</em></p>

<p>Choosing the location meant weighing pros and cons: yes, for the audience to experience the light and sound in their full depth, a fully dark space would have been preferable. But then the piece would have been set up in a blackbox space, would have been accessible for only the three days of the festival, and would have attracted only people who chose to enter that blackbox. The architecture of the work would have had little interaction with the architecture of the venue it was presented in. Instead, we chose a location where many people encounter the work, including people who would not normally go and visit a work like this, and where the architecture of the work and the architecture around the staircase in Amare interact nicely. As a plus: the work stays on exhibit for a total of two months, making the time, effort and money invested in realising the show more than worthwhile.</p>

<!--more-->

<p><img src="/images/npolytope/npolytope-amare-1.jpg" alt="" /></p>

<p>Original Dutch text:</p>

<p><em>De geluksvogels die een ticket hebben bemachtigd, halen hun bandjes op in de foyer van concertgebouw Amare. Achter de kassa’s vindt de eerste teleurstelling plaats. En nee, niet op artistiek gebied. De voorstelling N-Polytope van Marije Baalman en Chris Salter is een multimediaspektakel dat uit vier speakers om je heen gonst.</em></p>

<p><em>Het kunstwerk lijkt onder stroom te staan: muziek tintelt de oren, aan hooggespannen kabels knipperen ledlampen. Baalman en Salter willen met N-Polytope reflecteren op vroeg elektronisch werk van de Griekse componist Iannis Xenakis. Dat is geen misplaatste bluf: hun stuk is een sprankelend antwoord op de kosmische werken die Xenakis in de jaren zeventig met zijn zelfbedachte computerprogramma UPIC componeerde.</em></p>

<p><em>Maar waarom N-Polytope op de trap, recht voor de ingang, wordt gespeeld, is een raadsel. Als er één werk is dat een stille, donkere ruimte nodig heeft, is dit het wel. In het volle daglicht wandelen mensen rond, er worden praatjes en knuffels uitgewisseld. Het is als mikado spelen bij windkracht 12: hoe krachtig N-Polytope ook is, er blijft weinig van overeind.</em></p>

<ul>
  <li>“Rewire Festival maakt op artistiek gebied iedere verwachting waar, maar: de organisatie laat te wensen over”, Dennis Bajram, Volkskrant, April 7, 2025</li>
</ul>]]></content><author><name></name></author><category term="light" /><category term="sound" /><category term="installation" /><category term="spatial" /><category term="architecture" /><summary type="html"><![CDATA[Dutch national newpaper the Volkskrant reviewed the Rewire festival and has much praise for N-Polytope, while being quite critical of the organisation of the Rewire festival: Translation by me (original Dutch below): The lucky ones to obtain a ticket [to the festival], get their wristbands in the foyer of the concert hall Amare. Behind the cash counters the first disappointment is found. And no, not artistically. The show N-Polytope by Marije Baalman and Chris Salter is a multimedia spectacle that buzzes around you from four speakers. The artwork seems to be electrified: music tantalizes the ears, on highly tensioned wires led lamps are flickering. With N-Polytope Baalman and Salter want to reflect on early electronic work of the Greek composer Iannis Xenakis. That is not a misplaced bluff: their piece is a sparkling answer to the cosmic works that Xenakis composed in the 1970s with his self-invented computer program UPIC. But why N-Polytope is presented on the staircase, right in front of the entrance, is a mystery. If one work needs a quiet, dark room, this one does. In broad daylight people are walking around, and chats and hugs are exchanged. It is like playing mikado in stormy weather: no matter how powerful N-Polytope is, little remains. Choosing the location was weighing the pros and cons: yes, for attention of audience to experience the light and sound in its full depth, a fully dark space would have been preferable. But then the piece would have been set up in a blackbox space, have only been accessible for three days during the festival and have only attracted people who would choose to go to this blackbox. The architecture of the work would have had little interaction with the architecture of the venue it was presented in. 
Instead we chose for a location where a lot of people would encounter the work, also people who would not normally go and visit a work like this. Also the interaction between the architecture around the staircase in Amare and the architecture of the work have a nice interaction now. And as a plus: the work stays exhibited for a total of two months, making the time, effort and money invested in realising the show more than worthwhile.]]></summary></entry><entry><title type="html">Video of Intricate Interplays</title><link href="http://localhost:4000/2025/03/01/intricate-interplays.html" rel="alternate" type="text/html" title="Video of Intricate Interplays" /><published>2025-03-01T00:00:00+01:00</published><updated>2025-03-01T00:00:00+01:00</updated><id>http://localhost:4000/2025/03/01/intricate-interplays</id><content type="html" xml:base="http://localhost:4000/2025/03/01/intricate-interplays.html"><![CDATA[<p>Video of Intricate Interplays by Tanja Busking, created by iii.</p>

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1062388308?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Intricate Interplays (w/ Korzo)"></iframe></div>
<script src="https://player.vimeo.com/api/player.js"></script>

<!--more-->

<p>On November 30th, I took part in a joint performance with various artists from iii (Mariska de Groot, Dieter Vandoren, and Mihalis Shammas) and Ludmila Rodrigues at Korzo theater in Den Haag.</p>

<p>In the performance I participated with two new instruments: <a href="/projects/dynamic-light-pattern-studies">light plates</a> and an instrument made of paper and rattan, controlled by wind. This last instrument is a first attempt at playing with wind indoors.</p>]]></content><author><name></name></author><category term="light" /><category term="wind" /><category term="performance" /><summary type="html"><![CDATA[Video of Intricate Interplays by Tanja Busking, created by iii.]]></summary></entry><entry><title type="html">Documentation of Intricate Interplays</title><link href="http://localhost:4000/2024/12/01/intricate-interplays.html" rel="alternate" type="text/html" title="Documentation of Intricate Interplays" /><published>2024-12-01T00:00:00+01:00</published><updated>2024-12-01T00:00:00+01:00</updated><id>http://localhost:4000/2024/12/01/intricate-interplays</id><content type="html" xml:base="http://localhost:4000/2024/12/01/intricate-interplays.html"><![CDATA[<p><img src="/images/intricate-interplays/wind-intricate-interplays_DSCF4787-320x320.jpg" alt="" /><img src="/images/intricate-interplays/lightplate-intricate-interplays_DSCF4738-320x320.jpg" alt="" /></p>

<p>On November 30th, I took part in a joint performance with various artists from iii (Mariska de Groot, Dieter Vandoren, and Mihalis Shammas) and Ludmila Rodrigues at Korzo theater in Den Haag.</p>

<p>In the performance I participated with two new instruments: <a href="/projects/dynamic-light-pattern-studies">light plates</a> and an instrument made of paper and rattan, controlled by wind. This last instrument is a first attempt at playing with wind indoors.</p>

<p><em>photos by Davide Sartori, Intricate Interplays, iii @ Korzo</em></p>

<!--more-->

<h1 id="light-plates">Light plates</h1>

<p><img src="/images/intricate-interplays/lightplate-intricate-interplays_DSCF4738.jpg" alt="" />
<img src="/images/intricate-interplays/lightplate-intricate-interplays_DSCF4744.jpg" alt="" /></p>

<h1 id="wind">Wind</h1>

<p><img src="/images/intricate-interplays/wind-intricate-interplays_DSCF4783.jpg" alt="" />
<img src="/images/intricate-interplays/wind-intricate-interplays_DSCF4787.jpg" alt="" />
<img src="/images/intricate-interplays/wind-intricate-interplays_DSCF4796.jpg" alt="" />
<img src="/images/intricate-interplays/wind_intricate-interplays_DSCF4782.jpg" alt="" /></p>]]></content><author><name></name></author><category term="light" /><category term="wind" /><category term="performance" /><summary type="html"><![CDATA[November 30th, I took part in a joint performance with various artists from iii (Mariska de Groot, Dieter Vandoren, and Mihalis Shammas) and Ludmila Rodrigues at Korzo theater in Den Haag. In the performance I particated with two new instruments: light plates and an instrument made of paper and rotan controlled by wind. This last instrument is a first attempt to play with wind indoors. photos by Davide Sartori, Intricate Interplays, iii @ Korzo]]></summary></entry><entry><title type="html">Looking back at 2023 and outlook into 2024</title><link href="http://localhost:4000/2024/01/08/start-of-2024.html" rel="alternate" type="text/html" title="Looking back at 2023 and outlook into 2024" /><published>2024-01-08T00:00:00+01:00</published><updated>2024-01-08T00:00:00+01:00</updated><id>http://localhost:4000/2024/01/08/start-of-2024</id><content type="html" xml:base="http://localhost:4000/2024/01/08/start-of-2024.html"><![CDATA[<p>In 2023, I have been doing various presentations at different occasions - a highlight being the <a href="https://www.youtube.com/watch?v=_Z71KQtWpMk&amp;t=9940s">keynote presentation at the International Livecoding Conference</a> in Utrecht.</p>

<p>In the background I have worked on <a href="/projects/baken"><em>Baken</em> (beacon)</a>, which will be a permanent light installation mounted on the new building of the artist incubator space <a href="https://bajesdorp.nl">Bajesdorp</a>. This is still a work in progress as I am gathering the financial support to realise this project. However, I have worked on prototypes and first explorations with this new kind of light instrument.</p>

<p>The biggest achievement of 2023 is the completion of the new Bajesdorp: a housing cooperative and artist incubator that, over the course of 2023, grew from its foundations into a finished and delivered building 20 meters high. So at the end of 2023 and the beginning of 2024, I have been busy finalising the inside of my new home and atelier.</p>

<p>We have founded a new association in June 2023, Begane Grond Bajesdorp, that will run the public spaces on the ground floor. A <a href="https://www.voordekunst.nl/projecten/16406-help-grond-van-de-grondgrond-needs-you">crowdfunding campaign</a> has just started in February to kickstart <a href="https://grond.community">GROND</a>.</p>

<p>In 2024, I will be working on one or two new works with kites - so stay tuned for that!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In 2023, I have been doing various presentations at different occasions - a highlight being the keynote presentation at the International Livecoding Conference in Utrecht. In the background I have worked on Baken (beacon), which will be a permanent light installation mounted on the new building of the artist incubator space Bajesdorp. This is still a work in progress as I am gathering the financial support to realise this project. However, I have worked on prototypes and first explorations with this new kind of light instrument. The biggest achievement of 2023 is the completion of the new Bajesdorp: a housing cooperative and artist incubator, that over 2023 has grown from its foundations into a finished and delivered building of 20 meters high. So at the end of 2023 and beginning of 2024, I have been busy with finalising the inside of my new home and atelier. We have founded a new association in June 2023, Begane Grond Bajesdorp, that will run the public spaces on the ground floor. A crowdfunding campaign has just started in February to kickstart GROND. In 2024, I will be working on one or two new works with kites - so stay tuned for that!]]></summary></entry><entry><title type="html">Looking back at 2022 and outlook into 2023</title><link href="http://localhost:4000/2023/01/20/start-of-2023.html" rel="alternate" type="text/html" title="Looking back at 2022 and outlook into 2023" /><published>2023-01-20T00:00:00+01:00</published><updated>2023-01-20T00:00:00+01:00</updated><id>http://localhost:4000/2023/01/20/start-of-2023</id><content type="html" xml:base="http://localhost:4000/2023/01/20/start-of-2023.html"><![CDATA[<p>Last year, I published my book <a href="/projects/composing-interactions">“Composing Interactions”</a> after many years of researching, writing, creating diagrams and perfecting the layout. 
I’m very happy with all the positive reactions that I am getting on the book now that it is out into the world.</p>

<p><img src="/images/composing-interactions/STUKKDESIGN - Composing Interactions20180830_021.jpg" alt="" /><em>photo by Felicity van Oort</em></p>

<p>Also in 2022 I had the opportunity to return to my kite projects and developed a new version of <a href="/projects/v-l-i-g"><em>V.L.I.G.</em></a> adapting the work to a video installation with footage recorded at the beach of Tainan. That work is still in exhibition at Siao-Long Cultural Park. In addition, I developed a workshop on creating musical instruments from kites, and presented the performance <a href="/projects/wind-instruments"><em>Wind Instrument</em></a> again.</p>

<!--more-->

<p><img src="/images/vlig/baalman-vlig-sails-sugar-silicon-4.jpg" alt="" /></p>

<p>In 2023, I will focus on the work <a href="/projects/baken"><em>Baken</em> (beacon)</a>, which will be a permanent light installation mounted on the new building of the artist incubator space <a href="https://bajesdorp.nl">Bajesdorp</a>. The light installation will be driven by locally measured environmental data and will, over time, make the change in the local climate visible. This work brings many new challenges with it: making a weatherproof, durable installation, composing for multicolor lights, and working with a combination of realtime and prerecorded data from a database to find relations between the now, yesterday, and the further past. I am currently putting together a team, preparing funding applications to be able to realise the work, and building prototypes to try things out.</p>

<p><img src="/images/baken/Baken-AM137_Renders20230118_08-1000.png" alt="" /><em>virtual model of the light art work in Blender (Luuk Meints)</em></p>

<p>At the same time, I am involved in the new project of Frouke Wiarda, <a href="https://theturbineplays.com">The Turbine Plays</a>. This project deals with the energy transition, mainly focusing on wind turbines. I have some further concepts in mind for new works in a series of “Beacons of Transition”, and through the research for this work I will see how these ideas develop.</p>

<p>Some time this year, a new edition of “The SuperCollider Book” will also come out with MIT Press, featuring updated versions of two chapters: “Ins and Outs: SuperCollider and External Devices” and “Spatialization with SuperCollider” that I (co-)authored for the book.</p>

<p>I also have taken up the opportunity, to clean up my website a little bit - at the top, you now only see a few highlighted projects, and if you click on the header or the menu item at the top, you will get to the full list. This will hopefully make it a bit easier to see what I am currently working on, while still preserving the overview of all the projects I have done over the past 10 years or so.</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Last year, I published my book “Composing Interactions” after many years of researching, writing, creating diagrams and perfecting the layout. I’m very happy with all the positive reactions that I am getting on the book now that it is out into the world. photo by Felicity van Oort Also in 2022 I had the opportunity to return to my kite projects and developed a new version of V.L.I.G. adapting the work to a video installation with footage recorded at the beach of Tainan. That work is still in exhibition at Siao-Long Cultural Park. In addition, I developed a workshop on creating musical instruments from kites, and presented the performance Wind Instrument again.]]></summary></entry><entry><title type="html">Presales of Composing Interactions</title><link href="http://localhost:4000/2022/03/20/presales-of-composing-interactions.html" rel="alternate" type="text/html" title="Presales of Composing Interactions" /><published>2022-03-20T00:00:00+01:00</published><updated>2022-03-20T00:00:00+01:00</updated><id>http://localhost:4000/2022/03/20/presales-of-composing-interactions</id><content type="html" xml:base="http://localhost:4000/2022/03/20/presales-of-composing-interactions.html"><![CDATA[<p>After many years of writing, the publication date of my book <a href="https://composinginteractions.art"><strong>Composing Interactions</strong> - <em>An Artist’s Guide to Composing Interactions</em></a> is now finally in sight, and set at June 1st, 2022. 
The presales have started via the publisher <a href="https://v2.nl/publishing/composing-interactions">V2_</a>.</p>]]></content><author><name></name></author><category term="book" /><summary type="html"><![CDATA[After many years of writing, the publication date of my book Composing Interactions - An Artist’s Guide to Composing Interactions is now finally in sight, and set at June 1st, 2022. The presales have started via the publisher V2_.]]></summary></entry><entry><title type="html">Behaviours of light and sound</title><link href="http://localhost:4000/2021/09/25/behaviours-of-light-and-sound.html" rel="alternate" type="text/html" title="Behaviours of light and sound" /><published>2021-09-25T00:00:00+02:00</published><updated>2021-09-25T00:00:00+02:00</updated><id>http://localhost:4000/2021/09/25/behaviours-of-light-and-sound</id><content type="html" xml:base="http://localhost:4000/2021/09/25/behaviours-of-light-and-sound.html"><![CDATA[<p><em>This post was originally written in 2018, but then not published on the website yet. I’m happy to post it now, finally!</em></p>

<p><em>In this post, I will describe how the sound and light composition of N-Polytope is structured. For this I will start with the physical components, describe the sound synthesis and light synthesis algorithms, and then go to the bigger picture of the composition with these instruments and how the machine learning algorithms are used.</em></p>

<!--more-->

<p>N-Polytope is a re-imagination of Xenakis’s Polytope works. For this work we were inspired to use similar steel cable structures that look like curved planes folded into each other. For each location where N-Polytope is presented, the steel cable structure is adapted to the space where the work is exhibited, and we try to create a connection with the architecture of the space.</p>

<p>On the steel cable structures, modules are mounted that generate light and sound and also measure light and sound. The measurements are sent to a central computer, which runs various machine learning algorithms over the course of the 14-minute composition. The timeline of the composition is determined by an additional ‘fixed media’ soundtrack composed by Chris Salter and Adam Basanta. This soundtrack is played over four broad-range speakers situated around the space and two subwoofers.</p>

<h1 id="the-physical-components">The physical components</h1>

<p><img src="/images/npolytope/npolytope_vitra_1.jpg" alt="" /></p>

<p>The modules on the steel cable structures consist of:</p>

<ul>
  <li>a microcontroller (Atmega328p)</li>
  <li>a second microcontroller (ATTiny841) that is programmed for sound synthesis</li>
  <li>an XBee for wireless communication</li>
  <li>3 light-dependent resistors (LDRs)</li>
  <li>1 electret microphone</li>
  <li>connections to three LEDs which are mounted separately on the steel cable</li>
</ul>

<p>The Atmega328p is the core of the module and handles wireless communication (via the XBee) with the main computer, measurements of the LDRs, amplitude tracking of the microphone, communication with the ATTiny841 to control the sound synthesis, and pulse width modulation patterns for the LEDs.</p>

<h1 id="modular-computation">Modular computation</h1>

<h2 id="sound-synthesis">Sound synthesis</h2>

<p>The ATTiny841 is programmed with a fixed sound synthesis patch of three wavetable oscillators, where one oscillator controls the amplitude of the third and another the phase of the third. The output of the third oscillator then passes through a 2nd-order filter, and the result is sent to a DAC connected to a small amplifier and the speaker of the module.</p>

<p><img src="/images/npolytope/npolytope_soundsynthesis.png" alt="" /></p>

<p>Each oscillator has parameters for</p>
<ul>
  <li>the frequency</li>
  <li>the waveform (sine, sawtooth, triangle, pulse, DC, noise)</li>
  <li>the envelope, with attack and decay</li>
  <li>the duration</li>
  <li>the amplitude</li>
  <li>to play once or repeat the envelope</li>
</ul>

<p>For the waveform, only the third oscillator can be a noise generator (in which case it is not a wavetable), and only the first two oscillators can be DC (not a waveform, but a fixed value).</p>
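As an illustration, the oscillator chain could be sketched as below. This is a hypothetical reconstruction, not the actual ATTiny841 firmware: the names, the 8-bit wavetable format, and the filter (two cascaded one-pole low-passes standing in for the 2nd-order filter) are all assumptions.

```c
#include <stdint.h>

#define TABLE_SIZE 256

/* One wavetable oscillator: a 16-bit phase accumulator whose upper
   8 bits index a 256-sample table (all names are illustrative). */
typedef struct {
    const int8_t *table;
    uint16_t phase;
    uint16_t increment;   /* frequency: phase step per sample */
} osc_t;

static int8_t osc_next(osc_t *o, uint8_t phase_mod) {
    o->phase += o->increment;
    return o->table[(uint8_t)((o->phase >> 8) + phase_mod)];
}

/* One output sample of the patch: oscillator 1 modulates the amplitude
   of oscillator 3, oscillator 2 modulates its phase, and the result is
   smoothed by two cascaded one-pole low-passes (a crude 2nd-order filter). */
static int16_t patch_next(osc_t *o1, osc_t *o2, osc_t *o3) {
    static int16_t lp1 = 0, lp2 = 0;            /* filter state */
    int8_t am = osc_next(o1, 0);                /* amplitude modulator */
    int8_t pm = osc_next(o2, 0);                /* phase modulator */
    int8_t s  = osc_next(o3, (uint8_t)pm);      /* modulated carrier */
    int16_t out = (int16_t)((s * (128 + am)) / 128);   /* amplitude modulation */
    lp1 += (out - lp1) / 8;
    lp2 += (lp1 - lp2) / 8;
    return lp2;                                 /* would go to the DAC */
}
```

On the real module the sample loop would run from a timer interrupt and write the result of such a function to the DAC.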

<p>The synthesizer can be triggered with a wireless message from the computer.</p>

<p>For the frequency and duration parameters you can set a base value and a range within which a random value is chosen; this value is added to the base value. There are three modes for using the random range: no randomness (just use the base value for the parameter), choosing the random value once when the setting is sent to the synthesizer, and choosing a new random value each time the synthesizer is triggered.</p>
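The base-plus-random-range scheme with its three modes might look like this in C (a sketch only; the type and function names are invented for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

/* Randomisation modes for the frequency and duration parameters
   (names are illustrative, not from the actual firmware). */
typedef enum { RAND_OFF, RAND_ON_SET, RAND_ON_TRIGGER } rand_mode_t;

typedef struct {
    uint16_t base;          /* base value sent from the computer */
    uint16_t range;         /* width of the random range */
    rand_mode_t mode;
    uint16_t fixed_offset;  /* drawn once in RAND_ON_SET mode */
} rand_param_t;

/* Called when a preset arrives: in RAND_ON_SET mode the offset is
   drawn once and then reused for every trigger. */
void rand_param_set(rand_param_t *p, uint16_t base, uint16_t range, rand_mode_t mode) {
    p->base = base;
    p->range = range;
    p->mode = mode;
    p->fixed_offset = (mode == RAND_ON_SET && range) ? rand() % range : 0;
}

/* Called on each trigger: returns the value to use for this event. */
uint16_t rand_param_value(const rand_param_t *p) {
    switch (p->mode) {
    case RAND_OFF:        return p->base;
    case RAND_ON_SET:     return p->base + p->fixed_offset;
    case RAND_ON_TRIGGER: return p->base + (p->range ? rand() % p->range : 0);
    }
    return p->base;
}
```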

<h2 id="light-synthesis">Light synthesis</h2>

<p>For controlling the lights a synthesis approach is used as well: the computer just sends parameters for the light pattern and then sends triggers to start the sequence.</p>

<p>For each LED there is an oscillator with</p>

<ul>
  <li>a range for the intensity between which the LED oscillates</li>
  <li>duty cycle (how long the waveform is)</li>
  <li>frequency</li>
  <li>waveform (triangle, sawtooth, reversed sawtooth, pulse, noise and DC)</li>
  <li>duration</li>
  <li>once or repeat (waveform cycle)</li>
</ul>

<p>Here too, the frequency and duration can be set with a base value and a range for a random value to be added, with the same random modes as for the sound synthesis.</p>
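A sketch of such a light oscillator, scaling a waveform into an intensity range (again with invented names, and omitting the duty cycle, duration, and once/repeat handling for brevity):

```c
#include <stdint.h>

/* One LED pattern oscillator: the intensity oscillates between lo and hi
   following a chosen waveform (field names are illustrative). */
typedef enum { W_TRI, W_SAW, W_SAW_REV, W_PULSE, W_DC } led_wave_t;

typedef struct {
    uint8_t lo, hi;        /* intensity range to oscillate between */
    led_wave_t wave;
    uint8_t phase;         /* 0..255, advanced by the frequency setting */
    uint8_t increment;
} led_osc_t;

/* One step of the pattern: returns the PWM duty value for the LED. */
uint8_t led_osc_next(led_osc_t *o) {
    o->phase += o->increment;
    uint8_t p = o->phase;
    uint8_t shape;
    switch (o->wave) {
    case W_TRI:     shape = (uint8_t)(p < 128 ? p * 2 : (255 - p) * 2); break;
    case W_SAW:     shape = p; break;
    case W_SAW_REV: shape = (uint8_t)(255 - p); break;
    case W_PULSE:   shape = p < 128 ? 255 : 0; break;
    default:        shape = 255; break;   /* DC: hold near hi */
    }
    /* scale shape (0..255) into the lo..hi intensity range */
    return (uint8_t)(o->lo + (((uint16_t)(o->hi - o->lo) * shape) >> 8));
}
```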

<h2 id="sensing">Sensing</h2>

<p>The microcontroller also reads out the microphone at audio rate and performs a very simple envelope following on the measured values.</p>

<p>The microcontroller reads out the three LDRs at a lower rate (1/6th of the audio rate) by alternating between the microphone and one of the LDRs: a sequence of microphone - LDR 1 - microphone - LDR 2 - microphone - LDR 3, and so on. This sequence ensures that the microphone is read out at a constant sample interval. For the LDRs the rate is less important, as the light levels change on a much slower time scale.</p>
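The interleaved read-out schedule can be sketched as follows; <code>next_channel</code> returns which ADC channel to sample in each slot, and <code>env_follow</code> shows one common shape for a very simple envelope follower (fast attack, slow decay). Both are illustrative assumptions, not the actual firmware:

```c
#include <stdint.h>

/* ADC channels for the module's sensors (illustrative names). */
enum { CH_MIC = 0, CH_LDR1, CH_LDR2, CH_LDR3 };

/* Interleaved schedule: the microphone occupies every other slot
   (constant interval), the three LDRs rotate through the remaining
   slots, so each LDR is read at 1/6th of the slot rate. */
uint8_t next_channel(void) {
    static uint8_t slot = 0;
    uint8_t ch = (slot % 2 == 0) ? CH_MIC : (uint8_t)(CH_LDR1 + (slot / 2) % 3);
    slot = (slot + 1) % 6;
    return ch;
}

/* Very simple envelope follower on the microphone samples:
   jump up to new peaks, leak slowly back down. */
uint16_t env_follow(uint16_t env, uint16_t sample) {
    return sample > env ? sample : env - (env >> 4);
}
```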

<h2 id="communication">Communication</h2>

<p>Finally, the microcontroller handles the communication with the main computer: it sends out the sensor data at a regular interval, and continuously listens for incoming messages that set the parameters for the light and sound synthesis or trigger the LEDs or sound.</p>

<p>The sound synthesis microcontroller also reports its calculated amplitude, which is sent along with the sensor data. Reading out this communication doubles as a check that the ATTiny841 is still up and running: if there is no communication for a while, the Atmega328p resets the ATTiny841.</p>
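The liveness check could be as simple as a timestamp comparison; the names and the timeout value here are made up for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* If no amplitude report arrives within TIMEOUT_MS, the main
   microcontroller resets the synthesis microcontroller.
   The timeout value is an illustrative assumption. */
#define TIMEOUT_MS 2000u

typedef struct {
    uint32_t last_report_ms;   /* time of the last amplitude message */
} synth_link_t;

/* Called whenever an amplitude report comes in over the serial link. */
void synth_link_report(synth_link_t *l, uint32_t now_ms) {
    l->last_report_ms = now_ms;
}

/* Called periodically: returns true when the ATTiny841 should be reset.
   Unsigned subtraction keeps this correct across timer wraparound. */
bool synth_link_timed_out(const synth_link_t *l, uint32_t now_ms) {
    return (now_ms - l->last_report_ms) > TIMEOUT_MS;
}
```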

<h2 id="concepts">Concepts</h2>

<p>In making these modules, there were a number of considerations.</p>

<p>On the microcontroller I had to negotiate tradeoffs between the processing power of the microcontroller and its memory size, the bandwidth of the wireless communication, and having a flexible system for experimentation and composition. Changing and updating the firmware is a tedious process that involves opening all the housings, taking out the microcontroller board, uploading the firmware, and putting the board back - not a quick task with around 50 units mounted on steel cables, many of which can only be reached with a ladder or some other aid.</p>

<p>We wanted a system with many modules - actual sound sources spread out over the space, rather than virtual sound sources in a multi-speaker sound spatialisation setup. So we chose to do the sound synthesis locally and send over parameters: hence no need for complex cabling and sound cards with many channels. Of course, a fixed sound synthesis algorithm limits the range of possible sounds, but we found that with the designed algorithm we could reach a large number of different types of sounds: from very resonant ‘tinkling’ sounds to noisy, snoring-like sounds. And even though each individual module only has a 1 W amplifier with a small speaker, all together the system can reach a good level of loudness.</p>

<p>The addition of the randomness to the frequency and duration of the sounds is a way of implementing variation in the sounds: it allows us to use the same preset of parameters for all modules, while still having a variation of sounds. The base value plus a random value within a certain range is also akin to Xenakis’ concept of <em>tendency masks</em> - ranges within which parameters of sound vary over the course of time.</p>

<p>Using a synthesis approach for the light as well is reminiscent of Xenakis’ concept of viewing light as something that happens over time: the light synthesis parameters describe how the light behaves within a short time window after its triggering.</p>

<p>The modules implement the possible <em>micro-behaviours</em> for the light and sound. The presets for these are then sent from the central computer. This leaves the freedom to try out new presets when the system is set up in a space, without having to reprogram the modules, and at the same time it limits the amount of communication needed to each module (wireless bandwidth is limited, and wired communication would require a more complex setup and a lot more cables).</p>

<h1 id="the-compositional-structure">The compositional structure</h1>

<p>As mentioned above, the compositional structure consists of a fixed media soundtrack and a score linked to it, which starts and stops:</p>

<ul>
  <li>various machine learning algorithms,</li>
  <li>presets and parameter changes for the light and sound synthesis algorithms,</li>
  <li>additional tasks that sequence the triggering of light and sound.</li>
</ul>

<p><img src="/images/npolytope/npolytope_score_diagram.png" alt="" /></p>

<p>While composing the work, we (Sofian Audry, Chris Salter and myself) discussed the type of algorithms we would use: Sofian would program the algorithms using his machine learning library <a href="https://github.com/sofian/qualia">Qualia</a>, and I would choose which presets for the micro-behaviour of the light and sound would fit the algorithm and the soundtrack. Then we would look at the behaviour of the algorithm and tune the parameters for the machine learning algorithm and the presets. The score also determines which modules are active at a given time: we make interventions on where the machine learning algorithms are and are not active, to create a spatial dramaturgy for the work.</p>

<h2 id="the-machine-learning-algorithms">The machine learning algorithms</h2>

<p>The machine learning algorithms take different types of inputs: sensor data from the structure (light and sound levels) is available, and some of the algorithms use other metrics as well.</p>

<p>Some of the algorithms (<em>booster</em> and <em>chaser</em>) are based on reinforcement learning: the algorithm makes an observation (e.g. of the light and sound levels), determines an action to take, and then gets a reward which is calculated from a subsequent observation. The reward is simply a function of the values given in the observation: a mathematical formula. The algorithm attempts to get the largest reward possible over time. It also has a parameter for how ‘curious’ or ‘exploratory’ it is: how likely it is to try out something new, taking a completely different action in the hope that this yields an even higher reward than previous actions did.</p>

<p>For all of the algorithms we send a trigger signal to calculate the next step of the computation. This means that we can vary the speed of the algorithms.</p>

<p><strong>Firefly</strong></p>

<p>In the <em>firefly</em> algorithm each LED has three states: it can be flashing for a certain amount of time, after which it will be ‘blind’ for a while, ignoring its environment. If it is neither flashing nor blind, the LED measures the amount of incoming light and compares it with the average amount of light over the recent past (calculated as a moving average). When the incoming light is above the threshold, a power variable is increased and the firefly becomes blind for a while.</p>

<p>The power variable is then compared to a second threshold; if it exceeds that threshold, the LED starts flashing. Once it has flashed for the set flash time, it resets the blind time and the power variable and goes back to the idle state.</p>
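<p>A minimal sketch of this state machine in Python (the thresholds, times and moving-average coefficient are illustrative, not the values of the actual firmware):</p>

```python
class Firefly:
    """Sketch of the firefly LED state machine: flashing, blind, or idle."""

    def __init__(self, light_threshold=0.1, power_threshold=3.0,
                 flash_time=5, blind_time=10):
        self.light_threshold = light_threshold  # excess light needed to gain power
        self.power_threshold = power_threshold  # power needed to start flashing
        self.flash_time = flash_time            # steps a flash lasts
        self.blind_time = blind_time            # steps to ignore the environment
        self.power = 0.0
        self.flash_timer = 0
        self.blind_timer = 0
        self.average = 0.0                      # moving average of incoming light

    def step(self, light):
        """Advance one step; return True while the LED is flashing."""
        if self.flash_timer > 0:                # currently flashing
            self.flash_timer -= 1
            if self.flash_timer == 0:           # flash over: reset, go idle
                self.power = 0.0
                self.blind_timer = self.blind_time
            return True
        if self.blind_timer > 0:                # blind: ignore the environment
            self.blind_timer -= 1
            return False
        # idle: compare incoming light with its moving average
        self.average = 0.9 * self.average + 0.1 * light
        if light - self.average > self.light_threshold:
            self.power += 1.0
            self.blind_timer = self.blind_time
        if self.power > self.power_threshold:   # second threshold: start flashing
            self.flash_timer = self.flash_time
            return True
        return False
```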

<p><strong>Drunk</strong></p>

<p>The <em>drunk</em> algorithm is a kind of random walk: the value for the LED intensity is a weighted sum of random walks of four parameters: one for the overall grid, one for each line, one for each node, and one for the individual LED.</p>

<p>At each update, a small random amount is added to or subtracted from each parameter. The total value is then calculated by summing the parameter values, each with a weight that determines how much influence it has.
If the weight for the overall grid were 1 and the other weights 0, all LEDs would do the same. The weights thus determine the amount of variation between the individual LEDs: from all the same to all completely different.</p>

<p>This algorithm uses no other inputs than the weights for the four parameters. These weights are changed over the course of the section.</p>
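<p>A Python sketch of this weighted sum of random walks (the step size, starting values and weights are illustrative):</p>

```python
import random

def make_walks(n, start=0.5):
    """One random-walk value per parameter (grid, line, node, LED)."""
    return [start] * n

def drunk_step(walks, step_size=0.02):
    """Add or subtract a small random amount to each walk, clipped to [0, 1]."""
    for i, v in enumerate(walks):
        walks[i] = min(1.0, max(0.0, v + random.uniform(-step_size, step_size)))

def led_intensity(grid, line, node, led, weights):
    """Weighted sum of the four walk values for one LED."""
    wg, wl, wn, wd = weights
    return wg * grid + wl * line + wn * node + wd * led
```

<p>With weights (1, 0, 0, 0) every LED follows the grid walk identically; spreading the weight over the line, node and LED walks increases the variation between LEDs.</p>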

<p><strong>Booster</strong></p>

<p>The <em>booster</em> algorithm has as many agents as there are modules. Each agent has a neural network with 5 inputs, 8 hidden layers and 1 output. It uses an <em>epsilon-decreasing policy</em> for learning, which means that the agent is more exploratory at the start and greedier later on.</p>

<p>The inputs to the network are the measured light value, the moving average of the light value, an energy parameter and a timer.
The energy increases slowly over time, and from a certain amount of time after emitting light, it also increases based on the light input.
The reward is calculated from the emitted LED values, with some linearity built in that boosts the ‘right’ decisions and punishes ‘bad bursts’.</p>

<p>When the agent decides to emit light, the energy and the timer are reset to 0.</p>
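<p>The epsilon-decreasing action selection can be sketched as follows (the names and the decay schedule are assumptions, not Qualia’s actual API):</p>

```python
import random

def choose_action(q_values, step, epsilon0=1.0, decay=0.999):
    """Epsilon-decreasing policy: explore with probability epsilon,
    which shrinks over time; otherwise pick the greedy (highest-value) action."""
    epsilon = epsilon0 * (decay ** step)
    if random.random() < epsilon:
        return random.randrange(len(q_values))                      # exploratory
    return max(range(len(q_values)), key=q_values.__getitem__)      # greedy
```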

<p><strong>Chasers</strong></p>

<p>In the <em>chasers</em> algorithm the spatial layout of the structure is taken into account. Each line is regarded as a one-dimensional space in which a <em>chaser</em> can move around. While moving, the chaser is rewarded for touching another chaser (being in the same position), for moving, or for staying in the same position. A position on the line corresponds to an LED, so if a line consists of three modules with three LEDs each mounted on a steel cable, that line has 9 possible positions for a <em>chaser</em>. The position of the chaser is visualised by triggering the LED and auralised by triggering the sound of the module the LED belongs to. If the reward for the chaser is larger than 0.5 (the range is between 0 and 1), the parameter <em>repeat</em> is set for the light preset.</p>

<p>In the score, more and more chasers are added to each line. Initially the light preset is a short flash. Later on in the chaser section, the light preset is changed to a longer flickering light, creating more overall brightness in the space.</p>

<p>Note that in this algorithm the only input is the position of the other chasers - no sensor data is used.</p>
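<p>One possible reading of the chaser reward in Python (the relative reward values and their ordering are assumptions; the piece’s actual weighting may differ):</p>

```python
def chaser_reward(position, prev_position, other_positions):
    """Reward for a chaser on a one-dimensional line of LED positions:
    touching another chaser scores highest, moving scores some,
    staying in place scores least (illustrative values in [0, 1])."""
    if position in other_positions:
        return 1.0          # touching another chaser (same position)
    if position != prev_position:
        return 0.4          # moving
    return 0.1              # staying in the same position
```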

<p><strong>Genetic algorithm</strong></p>

<p>The genetic algorithm was added in 2017 for the exhibition at the MAC in Montreal. The idea was that instead of setting the parameters for the presets of the light and sound synthesis directly, a genetic algorithm would try to mutate from its current state to the new target state of the parameters. In this way the algorithm attempts to replicate the ‘normal’ score, but takes a bit of time before it has found the right parameter set.</p>

<p>The parameter set is interpreted as a string of bits (1’s and 0’s), a <em>binary chromosome</em>. At each step of calculation of the algorithm, it selects two members of the population, ‘mates’ them to create two new ‘children’ and selects the fittest of the two offspring, i.e. the one closest to the new target preset.</p>

<p>We created two instances of this algorithm: one with a population of the size of the number of nodes to approximate the sound presets, and one with a population the size of the number of LEDs to approximate the light presets.</p>
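<p>The mate-and-select step can be sketched like this (what happens to the selected child is not described above, so the replacement strategy and the mutation rate here are assumptions):</p>

```python
import random

def fitness(chromosome, target):
    """Closeness to the target preset: the fraction of matching bits."""
    return sum(a == b for a, b in zip(chromosome, target)) / len(target)

def mate(a, b):
    """Single-point crossover of two binary chromosomes into two children."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(chromosome, rate=0.02):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in chromosome]

def ga_step(population, target):
    """One step: mate two random members, mutate the children, and let the
    fitter child replace the least fit member if it improves on it."""
    a, b = random.sample(population, 2)
    children = [mutate(c) for c in mate(a, b)]
    best = max(children, key=lambda c: fitness(c, target))
    worst = min(range(len(population)), key=lambda i: fitness(population[i], target))
    if fitness(best, target) > fitness(population[worst], target):
        population[worst] = best
```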

<p>The ‘genetic’ mode we thus created was then added as an alternative ‘ambient’ mode to play along the ambient soundtrack.</p>]]></content><author><name></name></author><category term="sound" /><category term="light" /><category term="machinelearning" /><category term="composition" /><summary type="html"><![CDATA[This post was originally written in 2018, but then not published on the website yet. I’m happy to post it now, finally! In this post, I will describe how the sound and light composition of N-Polytope is structured. For this I will start with the physical components, describe the sound synthesis and light synthesis algorithms, and then go to the bigger picture of the composition with these instruments and how the machine learning algorithms are used.]]></summary></entry><entry><title type="html">Interview on WoNoMute</title><link href="http://localhost:4000/2021/06/01/interview-on-wonomute.html" rel="alternate" type="text/html" title="Interview on WoNoMute" /><published>2021-06-01T00:00:00+02:00</published><updated>2021-06-01T00:00:00+02:00</updated><id>http://localhost:4000/2021/06/01/interview-on-wonomute</id><content type="html" xml:base="http://localhost:4000/2021/06/01/interview-on-wonomute.html"><![CDATA[<p>I was interviewed by WoNoMute (Women Nordic Music Technology), a horizontal network organization that promotes the work of those identifying as women in the interdisciplinary field of music technology. The term music technology is used here to define activities related to the use, development and analysis of technology applied to sound and music. It is an umbrella term that connects researchers and practitioners, as well as engineers and social scientists. WoNoMute previously interviewed (amongst others): Rebecca Fiebrink, Rebekah Wilson, Pamela Z, Natasha Barrett and Alexandra Murray-Leslie.</p>

<p>Read the interview here: <a href="https://wonomute.no/interviews/marije-baalman/">https://wonomute.no/interviews/marije-baalman/</a></p>]]></content><author><name></name></author><category term="interview" /><category term="sound" /><summary type="html"><![CDATA[I was interviewed by WoNoMute (Women Nordic Music Technology), a horizontal network organization that promotes the work of those identifying as women in the interdisciplinary field of music technology. The term music technology is used here to define activities related to the use, development and analysis of technology applied to sound and music. It is an umbrella term that connects researchers and practitioners, as well as engineers and social scientists. WoNoMute previously interviewed (amongst others): Rebecca Fiebrink, Rebekah Wilson, Pamela Z, Natasha Barrett and Alexandra Murray-Leslie. Read the interview here: https://wonomute.no/interviews/marije-baalman/]]></summary></entry><entry><title type="html">Documentation of Wezen-Handeling</title><link href="http://localhost:4000/2021/05/08/documentation-of-wezen-handeling.html" rel="alternate" type="text/html" title="Documentation of Wezen-Handeling" /><published>2021-05-08T00:00:00+02:00</published><updated>2021-05-08T00:00:00+02:00</updated><id>http://localhost:4000/2021/05/08/documentation-of-wezen-handeling</id><content type="html" xml:base="http://localhost:4000/2021/05/08/documentation-of-wezen-handeling.html"><![CDATA[<p><em>In this post, I am documenting the work Wezen-Handeling, as it was performed and livestreamed on May 7, 2021.</em></p>

<!--more-->

<h1 id="documentation-of-wezen---handeling">Documentation of Wezen - Handeling</h1>

<p>This work is a sequel to <em>Wezen - Gewording</em>. The sound instruments that I use are similar to the ones I developed for that piece. The code base and the processing algorithms have been redeveloped for this new piece.</p>

<p>In the work I am using inspiration from the work of different artists, whose work I studied as part of writing the book <a href="https://justaquestionofmapping.info">Just a question of mapping</a>:</p>

<ul>
  <li>pose detection: inspired by Cathy van Eck and Roosna &amp; Flak</li>
  <li>flick detection: inspired by Roosna &amp; Flak</li>
  <li>recording / playback of control data: inspired by STEIM’s The Lick Machine</li>
  <li>button processing: inspired by Erfan Abdi’s approach for chording</li>
  <li>the use of some control data to set starting parameters and others for continuous control: inspired by Jeff Carey</li>
  <li>matrix / weights to connect input parameters to output parameters: inspired by Alberto de Campo’s Influx</li>
</ul>

<p><img src="/images/handeling/baalman-handeling-1.png" alt="" /></p>

<h2 id="sound-layers">Sound layers</h2>

<p>In <em>Wezen - Handeling</em> there are two different layers of sound:</p>

<ul>
  <li>sound instruments triggered directly from movements with the left hand and controlled with the right hand</li>
  <li>sound instruments controlled from recorded triggers and movement data. This recorded data can be looped, sped up and slowed down, the playback length and the starting point in the recorded buffer can be changed, and additional parameters can be modulated.</li>
</ul>

<h2 id="the-controllers">The controllers</h2>

<p>The controllers that I use are open gloves with five buttons each (one for each finger) and a nine-degrees-of-freedom sensor. The data from these gloves is transmitted via a wireless connection to the computer, where it is translated to Open Sound Control data and received by SuperCollider. The core hardware for the gloves is the <a href="https://www.sensestage.eu">Sense/Stage MiniBee</a>.</p>

<p>The gloves were originally designed and created for a different performance, <em>Wezen - Gewording</em>, where the openness of the glove was a necessity, since I also needed to be able to type to livecode. The nine-degrees-of-freedom sensor was added to this glove later: in <em>Wezen - Gewording</em> I only made use of the built-in 3-axis accelerometer of the Sense/Stage MiniBee.</p>

<p>The gyroscope of the right controller is not working, so effectively that is a six-degrees-of-freedom sensor. I discovered this defect during the development of the work, and it motivated my choice to use flicks only with the left hand, and the right hand for continuous control.</p>

<p><img src="/images/handeling/baalman-handeling-2.png" alt="" /></p>

<h2 id="preprocessing-of-data">Preprocessing of data</h2>

<p>To preprocess the data from the gyroscope, accelerometer and magnetometer, I first split the data into these three different streams (each having 3 axes) and do a range mapping with a <em>bipolar spec</em>: that is, I use a non-linear mapping from the integer input data range to a range between -1 and 1, in such a way that the middle point is at 0 and the resulting range feels linear with regard to the movements I make.</p>
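<p>Such a bipolar mapping could look like this in Python (the curve warping, parameter names and example ranges are assumptions about the actual spec):</p>

```python
def bipolar_map(value, in_min, in_mid, in_max, curve=1.0):
    """Map an integer sensor value to [-1, 1] with the midpoint pinned to 0.
    Each half is normalised separately and optionally warped with an exponent,
    so the result can be tuned to feel linear for the actual movements."""
    if value >= in_mid:
        x = (value - in_mid) / (in_max - in_mid)
        return x ** curve
    x = (in_mid - value) / (in_mid - in_min)
    return -(x ** curve)
```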

<p>Then, for each of the three streams, I apply an exponential smoothing filter. In certain cases I use the difference output of this filter: the raw input value minus the smoothed filter output. This difference output is then processed with another exponential smoothing filter. In some cases this second filter has a separate parameter for the signal becoming larger versus the signal becoming smaller.</p>
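<p>A sketch of this filter chain for a single axis (the coefficient values are illustrative):</p>

```python
class SmoothDiff:
    """One-pole exponential smoothing plus the 'difference' output
    (raw minus smoothed), itself smoothed again - here with separate
    coefficients for a growing versus a shrinking difference signal."""

    def __init__(self, alpha=0.1, alpha_up=0.3, alpha_down=0.05):
        self.alpha = alpha
        self.alpha_up = alpha_up      # faster tracking when the difference grows
        self.alpha_down = alpha_down  # slower decay when it shrinks
        self.smooth = 0.0
        self.diff_smooth = 0.0

    def step(self, raw):
        """Feed one raw sample; return (smoothed, smoothed difference)."""
        self.smooth += self.alpha * (raw - self.smooth)
        diff = raw - self.smooth
        a = self.alpha_up if abs(diff) > abs(self.diff_smooth) else self.alpha_down
        self.diff_smooth += a * (diff - self.diff_smooth)
        return self.smooth, self.diff_smooth
```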

<h2 id="poses-and-flicks">Poses and flicks</h2>

<p>To start and stop sounds, I am using a combination of flick detection and pose detection.</p>

<p>The pose selects the instrument that will be triggered. The pose detection algorithm makes use of prerecorded data of three different poses. The continuous smoothed data from the accelerometer and magnetometer is compared with the mean of the prerecorded data, to determine whether the current value is within a factor of the measured variation of the prerecorded samples. If that is the case, the pose is matched.</p>
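<p>The comparison against the recorded mean and variation could be sketched as follows (the names and the spread factor are assumptions):</p>

```python
def match_pose(sample, poses, spread=2.0):
    """Compare the current smoothed sensor sample with prerecorded poses:
    a pose matches when every axis lies within `spread` times the measured
    variation of that pose's recorded mean. Returns the pose name or None."""
    for name, (means, variations) in poses.items():
        if all(abs(s - m) <= spread * v
               for s, m, v in zip(sample, means, variations)):
            return name
    return None
```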

<p>The flick detection makes use of the difference of the gyroscope data with its smoothed version. If this difference crosses a certain threshold, an action is performed. Different thresholds can be set for different axes and different directions of the data. The detected pose is used as additional data to determine the action to trigger.</p>
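<p>The per-axis, per-direction threshold crossing could be sketched like this (the data layout and names are assumptions):</p>

```python
def detect_flick(diff, thresholds):
    """Threshold crossing on the gyroscope-difference signal.
    `diff` is one value per axis; `thresholds` maps (axis, direction)
    to a threshold, with direction being +1 or -1."""
    for (axis, direction), threshold in thresholds.items():
        if diff[axis] * direction > threshold:
            return axis, direction
    return None
```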

<p>To avoid the flicking movement affecting the pose detection, the pose detection is only enabled when the summed smoothed difference signal of the gyroscope data is below a certain value.</p>

<p><img src="/images/handeling/handeling-flick-and-pose.jpeg" alt="" /></p>

<h2 id="instrument-launching-and-control">Instrument launching and control</h2>

<p>Instruments are started, or <em>launched</em>, with flicks of the left hand when the palm of the hand is moved downwards (if you look at the arm held horizontally forwards with the back of the hand pointing up). Other flicks (upward, and those rolling the wrist to left or right) stop the instrument again.</p>

<p>The right hand data is used to control the different parameters of the sound:</p>

<ul>
  <li>The position of the right hand when a sound is started selects a chord (a set of multiplication factors for the frequency of the sound) with which an instrument is started. This chord cannot be changed after launch.</li>
  <li>The movement of the right hand then controls different parameters continuously as long as the sound is playing.</li>
</ul>

<p>The input for control are both the smoothed and the difference with the smoothed data from the accelerometer and the magnetometer.</p>

<p>For each instrument there are differences in how exactly the different parameters affect the sound. Some concepts that prevail are:</p>

<ul>
  <li>smoothed values are used for frequency, filter frequency, pulse width, and panning position.</li>
  <li>difference values are used for amplitude and filter resonance.</li>
</ul>

<p><img src="/images/handeling/handeling-right-hand-control.jpeg" alt="" /></p>

<h2 id="lick-recording-and-playback">Lick recording and playback</h2>

<p>The buttons on both gloves are used to control the Lick Machine.</p>

<p>While the <em>thumb button on the right hand glove</em> is pressed, the data used for instrument control, together with the output of the flick detector and the pose detector (at the moment of flicking), is recorded into a <em>lick</em> buffer. At a button press, the buffer is first cleared and then recorded into. The data is always recorded into the <em>current lick buffer</em>.</p>

<p>In total there are 16 <em>lick buffers</em>. With the <em>ring and pinky buttons of the right glove</em> the current lick buffer can be changed.</p>

<p><img src="/images/handeling/handeling-buttons-right-lick-control.jpeg" alt="" /></p>

<p>When the <em>thumb button on the left hand glove</em> is pressed, flicks with the left hand trigger playback of <em>licks</em> rather than instruments. The pose, the flick axis and the direction determine which lick will be played back. When the <em>index button of the left glove</em> is also pressed, the lick will play in looped mode; otherwise it will play once and stop.</p>

<p><img src="/images/handeling/handeling-buttons-left-lick-control.jpeg" alt="" /></p>

<h2 id="lick-modulation">Lick modulation</h2>

<p>The playback of the licks can then be modulated:</p>

<ul>
  <li><em>the index finger of the right glove</em>: enables replacing the recorded data with the current data from the right hand’s accelerometer and magnetometer.</li>
  <li><em>the middle finger of the right glove</em>: enables controlling additional parameters of the sound with data from the right hand’s accelerometer and magnetometer.</li>
  <li><em>the middle finger of the left glove</em>: enables changing the tempo of playback with an up and down movement of the right arm.</li>
  <li><em>the ring finger of the left glove</em>: enables changing the start position in the lick buffer by rolling the right arm.</li>
  <li><em>the little finger of the left glove</em>: enables changing the playback length of the lick buffer by rolling the right arm.</li>
</ul>

<p>When these three left glove buttons are pressed simultaneously, all three parameters are modulated at the same time.</p>

<p>The modulation only takes effect on the currently selected <em>lick</em>. If both the <em>ring and pinky buttons of the right glove</em> are pressed, the modulation affects all licks simultaneously.</p>

<p><img src="/images/handeling/handeling-lick-modulation.jpeg" alt="" /></p>

<hr />

<p>Flowchart images created with the <a href="https://mermaid-js.github.io/mermaid-live-editor">Mermaid live editor</a></p>]]></content><author><name></name></author><category term="sound" /><category term="composition" /><category term="body" /><category term="instrument" /><category term="gesture" /><summary type="html"><![CDATA[In this post, I am documenting the work Wezen-Handeling, as it was performed and livestreamed on May 7, 2021.]]></summary></entry></feed>