
One, Two, Many

For two flutes and live electronics

Published on Aug 29, 2023

Abstract

One, Two, Many asks two human players to co-create a performance with an AI-inspired system. The overarching principle driving the AI system is the metaphor of an impatient listener, one who seeks novelty. First, MIR features are extracted in real time from the audio signal of each flute; the computer then estimates stability versus change in those feature signals at different time scales. Based on this estimation, the computer activates and deactivates processing of the flute signals. The computer, therefore, interprets an audio signal and changes musical behaviour based on this interpretation: an agent that listens and responds.

Context

In a recent publication[1] I traced the development of my approach to live, interactive electronics based on mutual listening between computer and performer(s). The ability of the computer to listen to the performers and adapt its response in somewhat unexpected ways creates a degree of creative autonomy[2]. This autonomy is achieved both through the algorithmic design of the components – the ways the computer listens and reacts depending on the input – and through the emergent properties of a complex, interacting system. Applying Rowe’s terminology[3], this falls close to the player-paradigm end of the spectrum (as opposed to the instrument paradigm).

The AI system

The features extracted include estimates of perceptual loudness, chroma vectors and spectral distribution. Estimating stability or similarity happens at different time scales. At the local level (under 1 second), the ability to accurately predict the feature signal (with Kalman filters[4]) is used as a proxy for stability. For longer durations (5-15 seconds), cumulative feature similarity (e.g. a similar chroma profile) is used. Accurate prediction and similarity between time stretches increase a boredom factor, while unpredictable and dissimilar results reduce it. When the boredom factor hits a threshold the computer ‘acts’ – turning electronic processes on or off, thus enacting musical change and resetting the boredom ‘meter’.
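As a rough illustration of this logic, the sketch below (in Python, which is not necessarily the language of the actual system) accumulates a ‘boredom’ score from two proxies: the prediction error of a simple one-dimensional Kalman filter over a loudness-like feature (local time scale), and the cosine similarity of successive chroma vectors (longer time scale). All constants, the threshold value and the specific filter model are assumptions made for the example, not the implementation used in the piece.

```python
import numpy as np

class ScalarKalman:
    """1-D constant-value Kalman filter used as a predictability probe."""
    def __init__(self, process_var=1e-3, meas_var=1e-1):
        self.x, self.p = 0.0, 1.0            # state estimate and its variance
        self.q, self.r = process_var, meas_var

    def step(self, z):
        self.p += self.q                      # predict (constant-value model)
        k = self.p / (self.p + self.r)        # Kalman gain
        err = z - self.x                      # innovation = prediction error
        self.x += k * err
        self.p *= 1.0 - k
        return abs(err)

class BoredomMeter:
    """Accumulates 'boredom' while the features stay predictable/similar."""
    def __init__(self, threshold=10.0, rate=0.1):
        self.kalman = ScalarKalman()
        self.boredom, self.threshold, self.rate = 0.0, threshold, rate
        self.prev_chroma = None

    def update(self, loudness, chroma):
        # Local scale: small prediction error => stable => more boring.
        err = self.kalman.step(loudness)
        self.boredom += self.rate if err < 0.05 else -self.rate

        # Longer scale: similar chroma profile between frames => more boring.
        if self.prev_chroma is not None:
            sim = float(np.dot(chroma, self.prev_chroma) /
                        (np.linalg.norm(chroma) *
                         np.linalg.norm(self.prev_chroma) + 1e-9))
            self.boredom += self.rate if sim > 0.9 else -self.rate
        self.prev_chroma = chroma

        self.boredom = max(self.boredom, 0.0)
        if self.boredom >= self.threshold:
            self.boredom = 0.0                # reset the 'meter' after acting
            return True                       # time for the computer to 'act'
        return False

# Tiny synthetic demo: steady loudness and a repeating chroma profile
# eventually push the meter over its threshold.
meter = BoredomMeter()
chroma = np.eye(12)[0]
for frame in range(200):
    if meter.update(loudness=0.5, chroma=chroma):
        print(f"frame {frame}: boredom threshold reached - change something")
```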

There are four electronic processes available for the computer to turn on/off. These are not fixed, static effects but are dynamic processes using algorithmic composition methods and in some cases mapping MIR data as well. The processing of the audio signal is primarily based on spectral manipulation.
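The fragment below is a purely hypothetical sketch of the switching behaviour: when the boredom meter triggers, one of the four processes is flipped on or off. The process names are invented for illustration; neither the text nor the score names them.

```python
import random

# Hypothetical names standing in for the four switchable processes.
processes = {"spectral_freeze": False, "spectral_shift": False,
             "partial_filter": False, "time_smear": False}

def act(processes, rng=random):
    """Flip one process on or off when the boredom meter triggers."""
    name = rng.choice(sorted(processes))
    processes[name] = not processes[name]
    return name, processes[name]

print(act(processes))   # e.g. ('spectral_shift', True)
```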

The score

Since listening is at the heart of the multi-party interaction envisioned, the score is designed to allow the performers a degree of freedom: freedom to respond to and try to influence the computer, as well as the ability to respond to each other. The parts are loosely coordinated – pages act as coordination units – but within each page players have some optional figures, alternatives and a choice of repetitions. The performers, therefore, are able to control, to a degree, the similarity/change in the music and thus influence the responses from the computer.

The piece would fit best in the AI concert on August 31st. Duration: ca. 7 minutes.

A note about the audiofile included

The file included in the submission is there only to illustrate the electronics. I ran old flute snippets, recorded several years ago (in the process of composing a different flute piece), through the system; I had used these to test the system as I was building it. Listening to the file should give an idea of the conjunction of flute sounds with the spectral processing and the activation/deactivation of those processes. But the flute snippets fed into the system have nothing to do with the score. This also illustrates that the system works in principle and operates as expected. The file is not an edited version stitched together – I ran the system with the simulated input and captured the result. A live situation, as opposed to soundfile input, has its own challenges, but these are standard and familiar and can be addressed in rehearsals and sound checks.

Technical requirements

Simple stereo inputs – a microphone for each flute player – and stereo outputs.

The flute players should be placed apart on the stage – left and right – allowing for as good a separation of their signals as possible.

Placement of the stereo speakers should aim for minimising signal bleed and feedback while supporting blend and integration of the acoustic and electronic sounds.

Name/Affiliation/Bio

Oded Ben-Tal is a composer and researcher working at the intersection of music, computing, and cognition. His compositions include acoustic pieces, interactive live-electronic pieces, and multimedia work. As an undergraduate he combined composition studies at the Jerusalem Academy of Music with a BSc in physics at the Hebrew University. He did his doctoral studies at Stanford University, working at CCRMA with Jonathan Berger and studying composition with Jonathan Harvey and Brian Ferneyhough. Since 2016 he has been working on a multidisciplinary research project applying state-of-the-art machine learning to music composition. He is a senior lecturer in the Performing Arts Department at Kingston University.

Programme Notes

One, Two, Many asks two human players to co-create a performance with an AI-inspired system. The overarching principle driving the AI system is the metaphor of an impatient listener, one who seeks novelty. First, MIR features are extracted in real time from the audio signal of each flute; the computer then estimates stability versus change in those features at different time scales. Based on this, the computer changes its own musical behaviour.
