
AI Improvised Music Duo

Published on Aug 29, 2023

Abstract

AI Music Improvisation by Franziska Schroeder (sax/AI art) and Federico Reuben (ML/live coding).

This is a 20-30 minute duo set of live improvised music, using machine listening and AI-generated music materials. The PRiSM SampleRNN neural network has been trained on original saxophone input by Franziska Schroeder.

In this duo we combine live improvised music with AI-generated source material (including some of the input from the PRiSM neural network training dataset) in a live coding setup.

In terms of AI techniques for audio generation, we will be using two approaches:

  1. We have generated audio material (with different settings, e.g. temperature) using PRiSM SampleRNN (https://github.com/rncm-prism/prism-samplernn, implemented by Christopher Melen), trained on a large dataset of recordings of Franziska playing the saxophone. During the live performance we will use the pre-rendered generated audio, together with the original recordings from the dataset, as material to be intuitively remixed through live coding. The live coder will make decisions throughout the performance, selecting between different files and using live analysis of Franziska's input (through a microphone) to match it, via different audio features and parameters, to different segments of the audio files (see the first sketch after this list).


  2. We have trained a RAVE model (https://github.com/acids-ircam/RAVE, Caillon and Esling) on Franziska's recordings, and we will use the RAVE UGen in SuperCollider (https://github.com/victor-shepardson/rave-supercollider, implemented by Victor Shepardson from the IIL in Iceland) for real-time processing (latent representation and autonomous generation); see the second sketch after this list.
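
As a rough illustration of the first approach, the sketch below (plain SuperCollider/JITLib, assuming the server is already booted) loads the pre-rendered SampleRNN files into buffers, tracks the amplitude of the live saxophone input, and lets that analysis choose which render is played back. The folder name, proxy names and amplitude-to-buffer mapping are hypothetical placeholders, not the actual performance code, which selects among many more features and segments.

    // Sketch only: folder, proxy names and the amplitude-to-buffer mapping are
    // hypothetical; the real setup uses a richer set of audio features.
    (
    ~renders = PathName("~/samplernn_renders/".standardizePath).files
        .collect { |f| Buffer.read(s, f.fullPath) };

    // live analysis of the saxophone arriving on the first hardware input
    Ndef(\saxAnalysis, {
        Amplitude.kr(SoundIn.ar(0), 0.01, 0.3)
    });

    // playback of the pre-rendered material; the buffer choice follows the analysis
    Ndef(\renders, {
        var amp   = Ndef(\saxAnalysis).kr(1);
        var index = amp.linlin(0, 0.5, 0, ~renders.size - 1).round;
        var buf   = Select.kr(index, ~renders.collect(_.bufnum));
        PlayBuf.ar(1, buf, BufRateScale.kr(buf), loop: 1).dup * 0.5
    }).play;
    )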
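
The second approach could look roughly like the following. The UGen signature and model path shown here are assumptions for illustration only; the exact interface of rave-supercollider should be checked against the plugin's help files.

    // Sketch only: RAVE UGen signature and model path are assumed; consult the
    // rave-supercollider help files for the actual interface.
    (
    Ndef(\rave, {
        var dry = SoundIn.ar(0);   // live saxophone
        // assumed interface: the UGen loads a trained .ts model and resynthesises
        // the input through the model's latent representation
        var wet = RAVE.new("~/models/sax_rave.ts".standardizePath).ar(dry);
        XFade2.ar(dry, wet, MouseX.kr(-1, 1)).dup   // crossfade between dry sax and RAVE output
    }).play;
    )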

Both of these approaches will be used within the context of the improvisation: the live coder will co-create with the live performer and the computer by intuitively accessing these AI-generated materials and mixing them with other effects, algorithmic processes, analysis and samples, depending on what is happening during the improvisation. We will be using SuperCollider with live coding libraries by Reuben (https://github.com/freuben/Radicles) as well as other well-known libraries such as JITLib.
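
To give a flavour of how JITLib supports this kind of live reshuffling (this fragment is an illustration of plain JITLib, not of Radicles, and reuses the hypothetical proxy names from the sketches above), a proxy can be redefined on the fly and will crossfade to its new definition during the improvisation:

    // JITLib illustration only: Ndef definitions can be replaced live and will
    // crossfade over fadeTime, which is how material gets swapped mid-improvisation.
    Ndef(\mix).fadeTime = 4;
    Ndef(\mix, { Ndef(\renders).ar(2) * 0.7 }).play;                 // start from the SampleRNN renders
    // ...later, re-evaluate with a new definition; JITLib crossfades to it:
    Ndef(\mix, { CombL.ar(Ndef(\rave).ar(2), 0.4, 0.4, 3) * 0.7 });  // e.g. echoing the RAVE proxy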

Visuals generated with #StableDiffusion will be projected. The visuals are based on a prompting technique developed by Franziska that connects intimately to the performance, the specific performers and their instruments.

The duo explored similar AI ideas in a performance at the AI and Music conference, KTH Royal Institute of Technology, Stockholm, Sweden, 2022.

YouTube links to previous improvisations using AI-generated visuals:

https://www.youtube.com/watch?v=MVPIg0remdg&feature=youtu.be

#AiArt, #StableDiffusion #Deforum - improvised materials using saxophone and live coding. Live recording in Stockholm, Sweden, 21 November 2022.

https://youtu.be/cF7EeZOuDxY

A triptych representation of an AI generated architectural model with processed saxophone sounds.

Name/affiliation/bio

Franziska Schroeder is a saxophonist and improviser, originally from Berlin.

She has recorded her music on diverse labels (pfMentum, Creativesources, Bandcamp). Her research on ethnographies of improvisation cultures in Brazil (2013) and Portugal (2016) is published online. Franziska works as a Professor of Music and Cultures at the Sonic Arts Research Centre, Queen’s University Belfast, where she also leads the research team “Performance without Barriers” - a group dedicated to researching more accessible and inclusive ways of designing music technologies for and with disabled musicians. The group’s agenda-setting research in designing virtual reality instruments was recognised with the Queen’s Vice Chancellor’s 2020 Prize for Research Innovation.

https://improvisationresearch.com

www.improvisationinportugal.wordpress.com

http://improvisationinbrazil.wordpress.com

http://performancewithoutbarriers.com/

 

Federico Reuben is a composer, sound artist and live-electronics performer. His work includes compositions for acoustic, electroacoustic and mixed ensembles, laptop improvisations, computer-mediated performances, fixed media, hybrid works, installations, collaborations and computer programs. As a laptop improviser he has performed with improvisers such as Elliott Sharp, John Edwards, Steve Noble, Mark Sanders, the London Improvisers Orchestra, Tony Marsh, Aleksander Kolkowski, Ingrid Laubrock, Alexander Hawkins, Dominic Lash, Rachel Musson, Javier Carmona, Mark Hanslip and Paul Hession. He is also co-founder of the netlabel and artist collective squib-box with Adam de la Cour and Neil Luck. Currently, he is Associate Professor at the School of Arts and Creative Technologies, University of York, where he is director of research. Federico is Co-I in the AHRC-funded network ‘Datasounds, Datasets and Datasense: Unboxing the hidden layers between musical data, knowledge and creativity’.

https://www.federicoreuben.com/

Programme Notes

This is a duo of live improvised music using AI-generated music materials and ML techniques. We trained a PRiSM SampleRNN model on a dataset of original saxophone recordings by Franziska to generate new audio materials. The original dataset, as well as the pre-rendered audio materials, are accessed during the improvisation through stochastic and MIR techniques. We also trained a RAVE model on Franziska’s saxophone input, and we use the RAVE UGen (ported by Victor Shepardson) in SuperCollider for real-time processing. Both of these approaches will be used in the improvisation, and the laptop improviser will interact with the saxophonist by intuitively accessing these AI-generated materials and techniques, as well as mixing them with other effects, samples and algorithmic processes, depending on what is happening during the improvisation. We will be using SuperCollider with libraries by Federico, as well as other well-known live coding libraries such as JITLib. Visuals generated with Stable Diffusion will be projected. The visuals are based on a prompting technique developed by Franziska that connects intimately to the performance, the specific performers and their instruments.
