Notochord and Scramble are two probabilistic real-time generative MIDI models. Notochord is a ‘big data’ model which has eaten all the MIDI files on the internet, while Scramble is a ‘small data’ model which allows the user to hybridize individual MIDI files and sampled snippets. In this performance, two players will wrangle Notochord and Scramble, struggling to make sense from their infinite capacities for nonsense. Notochord will extrude big slabs of quasi-music which will be filtered and refined by Scramble before being re-harmonized by Notochord, ad nauseam. We human operators will push buttons and frantically twiddle knobs, sweat beading on our foreheads, as our wayward creations attempt to submerge us in a swimming pool of glistening General MIDI sounds.
Deep learning-based probabilistic generative models are making a splash. Between the public release of Stable Diffusion and popular fascination with ChatGPT, interaction with generative models promises to become a large component of computational (co-)creativity and HCI in general.
At the same time, we see a mismatch between what generative models can do and how they are often presented. Generative models built on large datasets interpolate their training data; the apparent agency and creativity of such models are illusory, flowing from their uncanny abilities to find plausible continuations of human input. Yet the most popular applications do not emphasize attribution to the training data or uncertainty — chatbots confidently invent falsehoods [1], while image models will mix novel images in with near-copies of their training data [2]. Training data mined from the past is swept under the rug and repackaged as something new.
Conversely, models trained on small datasets often converge on stereotyped behaviours, conveying musical meaningfulness through the repetition of small patterns unpredictably extracted from the training data.
Our performance tries to bridge this gap, playfully exploring our models’ capabilities on the one hand while remaining absurd and cuttingly ironic about their limitations on the other. Rather than cloaking the source material in sound design, we use ‘obsolete’ soundfont synthesizers which are ‘native’ to the training data. As performers, we oscillate between earnest co-improvisation with our machines and concocting impossible situations for them to deal with. A simple projected visual element will make the actions of Notochord and Scramble more legible to the audience, for example as scrolling text logs of MIDI events or flashing keyboard glyphs.
We expect our performance to be in dialogue with the rest of AIMC. From last year’s performance program, for example, we see affinities with Bob Sturm’s surreal generative pastiche, Yiğit Kolat’s embedding of improvisers with automated decision-making models, and Farzaneh Nouri’s study of machine-learning-derived software agents.
This will be an improvised live performance, with a flexible duration of roughly 5-20 minutes. The technical means to realize our performance are already available: Notochord [3] is available as open-source software, and Scramble [4] is available for free download.
Our performance would fit best at the August 31st AI concert, but could also play well at the algorave.
We will provide computers, smaller MIDI controllers, and audio interfaces.
Ideally the conference can provide a large multi-octave MIDI keyboard controller, stereo or multichannel diffusion, a table for two players with laptops, projection, and power.
I am a doctoral researcher in the Intelligent Instruments Lab at LHI. Previously I worked as a machine learning engineer on neural models of speech, and before that I studied Digital Musics at Dartmouth College and Computer Science at the University of Virginia. My interests include machine learning, artificial intelligence, electronic and audiovisual music, and improvisation. In my current research, I approach people’s lived experience with AI through the design and performance of new musical instruments. My projects include the Living Looper, which reimagines the live looping pedal through neural synthesis algorithms, and Notochord, a probabilistic model for MIDI performances.
I’m a PhD candidate in Cultural Studies, conducting my research at the Intelligent Instruments Lab. Previously, I studied Electronic Music at the Conservatory of Padua (MA), Jazz Improvisation and Composition at the Conservatory of Trieste (BA) and Modern Languages and Cultures at the University of Padua (BA). In the last ten years I have been curating musical events and festivals, composing, performing and teaching music. My current interests include alternative forms of notation, improvisation, composition, and Human-Computer Interaction in performative contexts. My project focuses on AI explainability in music performances.
Notochord and Scramble are two generative MIDI models of our creation. Notochord has eaten all the MIDI files on the internet, while Scramble digests and regurgitates individual MIDI files and sampled snippets. We will wrangle Notochord and Scramble, struggling to make sense from their infinite capacities for nonsense. Notochord will extrude big slabs of quasi-music which will be filtered and refined by Scramble before being re-harmonized by Notochord, ad nauseam. The human operators will push buttons and frantically twiddle knobs, as their wayward creations attempt to submerge them in a swimming pool of glistening General MIDI sounds.