A cocoon lies on stage, barely recognisable in the dim light. An almost static chord is sounding in the space. The light changes in color – the chord is changing slowly in pitch and texture. The cocoon seems to move – the light seems to get brighter and gradually shift color. Suddenly a fast movement – a sound erupts into the space, the light flashes – then again everything quiets down – the movement, the sound, the light.

Chrysalis is a performative environment where the focus is on slow movements – becoming aware of the minimal movements of the body, both conscious and unconscious movements. It is about interaction on a long timescale – involving multiple senses (hearing, sight, haptic).

In the first phase the project is developed as a performance in which the maker herself is the performer of these small movements.

In the second phase of the project, the environment will be expanded to a costume or object, that the visitor can wear or enter to have the experience herself.

Chrysalis - a teaser from Marije Baalman on Vimeo.

Echo State Networks and Conceptors

During the residency in Sussex, I am exploring with Chris Kiefer possible approaches using machine learning algorithms.

Together we look into Echo State Networks (a form of recurrent neural network from the field of reservoir computing) and Conceptors.

The special feature of these algorithms is that they can deal with time series data and have memory – in other words, they are ideal for dealing with real-time sensor data.
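To illustrate why these networks have memory, here is a minimal leaky-integrator ESN state update in Python with NumPy. The sizes, leak rate, and spectral radius are illustrative placeholder values, not the parameters of the actual system; the state x carries a decaying trace of past inputs, which is what gives the network its memory of the sensor stream.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100        # reservoir size (illustrative)
n_in = 3       # e.g. a 3-axis accelerometer
leak = 0.3     # leak rate: how quickly old state fades
rho = 0.9      # target spectral radius of the reservoir weights

# random input and reservoir weight matrices
W_in = rng.uniform(-0.5, 0.5, (N, n_in))
W = rng.uniform(-0.5, 0.5, (N, N))
# rescale the reservoir weights to the desired spectral radius
W *= rho / max(abs(np.linalg.eigvals(W)))

x = np.zeros(N)  # reservoir state

def step(u):
    """One leaky-integrator update; u is one input sample.

    The new state mixes the old state (memory) with a nonlinear
    response to the current input and the recurrent feedback.
    """
    global x
    x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x
```

Feeding successive sensor samples to `step` drives the reservoir; the spectral radius `rho` controls how long perturbations echo through the state before dying out.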

After a few explorations we come up with the following scheme to try out:

  • the system will have three modes - buttons are used to switch between the modes:

    • learn: the network will entrain itself to the incoming accelerometer data. The network should start oscillating along with the incoming data. Once this has happened, the performer can decide to store the pattern as a conceptor.
    • morph: the network is driven by a mix of the learnt conceptors. The mixing or morphing factors are determined by the stretch sensors.
    • spectral radius: the spectral radius – the scaling of the internal weights – of the network is modified by the stretch sensors; this will amplify or diminish the oscillations that happen in the system.
  • we use a grid search to find the best networks: that is, we train a multitude of networks (e.g. 10) and pick the ones with the lowest error.

  • the output data of the system will be used to change the hue and intensity of the lights projected onto the cocoon.

  • the activation vectors will be sonified.
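The learn and morph modes above can be sketched roughly as follows. This is a toy reconstruction, not the code of the piece: the aperture value is a placeholder, and two sine waves stand in for the accelerometer data that drives the real system. A conceptor is computed from the correlation matrix R of the driven reservoir states as C = R(R + α⁻²I)⁻¹; morphing then filters the autonomous dynamics through a weighted mix of stored conceptors.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50  # reservoir size (illustrative)

# random reservoir, rescaled to spectral radius 0.9
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, (N, 1))

def collect_states(drive, washout=50):
    """Drive the reservoir with a signal; return states after washout."""
    x = np.zeros(N)
    states = []
    for t, u in enumerate(drive):
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        if t >= washout:
            states.append(x.copy())
    return np.array(states)

def conceptor(states, aperture=10.0):
    """C = R (R + aperture^-2 I)^-1, with R the state correlation matrix."""
    R = states.T @ states / len(states)
    return R @ np.linalg.inv(R + np.eye(N) / aperture**2)

# "learn" mode: store one conceptor per entrained pattern
t = np.arange(500)
C1 = conceptor(collect_states(np.sin(2 * np.pi * t / 20)))
C2 = conceptor(collect_states(np.sin(2 * np.pi * t / 37)))

def morph_step(x, mix):
    """"morph" mode: run autonomously through a mix of conceptors.

    mix in [0, 1] would come from the stretch sensors.
    """
    C = mix * C1 + (1 - mix) * C2
    return C @ np.tanh(W @ x)
```

The eigenvalues of a conceptor lie between 0 and 1, so each C acts as a soft projection onto the state subspace its pattern occupies; blending C1 and C2 interpolates between the stored dynamics.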

Tue, 12 September, 2017


Between the residency in Nantes in August 2016 and the residency in Sussex in September 2017, several performances have taken place:

  • August 20, 2016, ElectroPixel #6, Nantes, France
  • September 30, 2016, Metabody Toulouse, Nuit des Chercheurs, Quai des Savoirs, Toulouse, France
  • November 6, 2016, Sounds Like Soup, De Groene Gemeenschap, Amsterdam, The Netherlands
  • February 11, 2017, “What if this is gallery dance?”, 4bid Gallery, OT301, Amsterdam, The Netherlands
  • May 16, 2017, NIME 2017, Stengade, Copenhagen, Denmark

Over the course of these performances, I have made only small changes: finetuning the sound and light design.

I have also experimented with using some buttons to control the timing of the performance while I am inside, and with haptic feedback to give myself an indication of how long the performance has been running.

One observation I made is that the system has a more or less predictable time constant: after having performed with the system for a while, I know roughly how fast it reacts, goes into a chaotic mode, and then relaxes back to a stable mode. This observation is what drives me to explore a different approach for the algorithm.

Fri, 01 September, 2017