
The Odd Couple

AI and human pianist perform duo improvisation in real-time

Published on Aug 29, 2023

Abstract

We propose to perform a duo improvisation between pianist David Dolan and an artificial improviser developed by Oded Ben-Tal. The improvisation is based on an expanded tonal-modal idiom but does not conform to a specific musical style. The artificial improviser uses machine listening techniques to extract features from the piano’s audio signal, analyses them in real-time, and aims to make musical inferences about the content. The system uses this information to generate responses that open up space for musical dialogue between the pianist and the system. Ben-Tal adjusts parameters in the system during the performance to shape larger-scale aspects, but the moment-to-moment generation of musical material is done automatically within the computer.

Context

Computational models of improvisation mostly fall into one of two camps: some systems aim to model music of a specific style[1], while others are developed as an extension of an improviser’s own musical idiom[2]. In our case, the system does not model a specific style. Dolan’s improvisation draws on the rich history of Western tonal music from Baroque to late Romantic styles and integrates those into a unique and personal idiom. The system that Ben-Tal is developing, through collaboration with Dolan, does not try to model tonal-modal thinking or directly imitate Dolan’s material. Neither is it mainly an extension of Ben-Tal’s own musical idiom – it aims to open a space for human-computer musical dialogue by generating novel material that draws on Dolan’s music on the one hand and Ben-Tal’s compositional sensibilities on the other. The research-creation process is therefore one of joint discovery, as both Dolan and Ben-Tal venture beyond their previous musical experiences.

The AI system

Two fundamental design decisions informed the development of this artificial improviser. First, Ben-Tal is a composer. He did not want to develop a musical instrument that would allow him to improvise. His preference was for a system that handles most or even all of the moment-to-moment sound production automatically. His role on stage would be that of a supervisor or executive: monitoring the system’s behaviour and influencing its responses, but without the need for constant action. However, the system needs to react musically to the improvisation in real-time. To enable that, the system uses machine listening: the computer extracts features from the audio signal and tries to make musical ‘sense’ out of them. The second important design decision was to minimise hard-coded assumptions about the input. While Dolan improvises within the realm of expanded tonality, the conventions of tonality are not coded. The computer does not look for a regular pulse, major-minor chords, cadences, or other stylistic conventions. Instead, it extracts pitch, time, and some timbral aspects from the live performance and uses this data to generate responses. The use of decision-making within the code – deciding whether to respond and deciding how to respond – imbues the system with a degree of creative autonomy. The integration of machine listening with generative processes to determine the sounds produced leads to a system that is responsive yet capable of surprising the human player.
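To make the architecture concrete, the listen–decide–respond cycle described above can be sketched in code. The fragment below is a minimal illustration, not the actual system: the choice of Python with NumPy and librosa, the particular feature set, the loudness threshold, and the stochastic gate that decides whether to respond are all assumptions introduced here for clarity.

# A sketch of the listen -> infer -> respond loop (illustrative only).
# Library choices (NumPy, librosa) and all parameter values are assumptions,
# not the authors' published implementation.

import numpy as np
import librosa

SR = 44100      # sample rate (assumed)
FRAME = 2048    # analysis frame length (assumed)

def extract_features(frame, sr=SR):
    """Extract pitch, loudness, and one timbral descriptor from an audio frame."""
    f0 = librosa.yin(frame, fmin=librosa.note_to_hz('A1'),
                     fmax=librosa.note_to_hz('C7'), sr=sr, frame_length=FRAME)
    return {'pitch_hz': float(np.median(f0)),
            'rms': float(np.sqrt(np.mean(frame ** 2))),
            'centroid': float(librosa.feature.spectral_centroid(y=frame, sr=sr).mean())}

def decide_to_respond(features, rng, rms_threshold=0.01):
    """Respond only to audible input, and even then only sometimes:
    a stochastic gate is one simple way to give the system a degree of autonomy."""
    if features['rms'] < rms_threshold:
        return False
    return rng.random() < 0.6   # response probability is an assumed parameter

def generate_response(features, rng):
    """Derive a short gesture from the heard pitch without tonal assumptions:
    random semitone offsets rather than a search for keys or cadences."""
    base_midi = librosa.hz_to_midi(features['pitch_hz'])
    return [float(base_midi + i) for i in rng.integers(-7, 8, size=4)]

# Simulated input: in performance, frames would arrive from the microphones
# via an audio I/O callback, and responses would drive a synthesiser.
rng = np.random.default_rng(0)
t = np.arange(FRAME) / SR
for freq in [220.0, 330.0, 440.0]:
    frame = (0.1 * np.sin(2 * np.pi * freq * t)).astype(np.float32)
    feats = extract_features(frame)
    if decide_to_respond(feats, rng):
        print(f"heard ~{feats['pitch_hz']:.0f} Hz -> respond with MIDI notes",
              [round(n, 1) for n in generate_response(feats, rng)])

The point the sketch tries to capture is the separation of stages: the feature extraction makes no stylistic assumptions about the input, and the decision stage – rather than a fixed input-to-output mapping – is what allows the system to surprise the human player.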

Name/Affiliation/Bio

Oded Ben-Tal is a composer and researcher working at the intersection of music, computing, and cognition. His compositions include acoustic pieces, interactive live-electronic pieces, and multimedia works. As an undergraduate he combined composition studies at the Jerusalem Academy of Music with a BSc in physics at the Hebrew University. He earned his doctorate at Stanford University, working at CCRMA with Jonathan Berger and studying composition with Jonathan Harvey and Brian Ferneyhough. Since 2022 he has been leading an international AHRC-funded research network, “Datasounds, Datasets and Datasense”. He is an Associate Professor in the Performing Arts Department at Kingston University.

David Dolan, an international concert pianist, researcher, and teacher, has devoted a part of his career to the revival of the art of classical improvisation and its applications in performance. In his worldwide solo and chamber music performances, he returns to the tradition of incorporating extemporisations within repertoire in embellished repeats, eingangs and cadenzas, as well as improvised preludes, interludes, and fantasies. Yehudi Menuhin’s response to his CD, “When Interpretation and Improvisation Get Together”, was: “David Dolan is giving new life to classical music.” David is a professor of classical improvisation and its application to solo and chamber music performance at the Guildhall School of Music and Drama in London, where he is the head of the Centre for Creative Performance and Classical Improvisation. He also teaches at the Yehudi Menuhin School and has been conducting masterclasses and workshops in major music centres and festivals worldwide.

Programme Notes

The collaboration between Professor David Dolan and Dr Oded Ben-Tal emerged out of the AHRC research network “Datasounds, Datasets and Datasense”, which Ben-Tal leads. In line with the aims of the network, this collaboration creates a dialogue across disparate musical practices: between analogue and digital music tools, and between a performer-improviser and a composer-programmer. The improvisations are based on an expanded tonal-modal idiom but do not conform to a specific musical style. The computer ‘listens’ to the pianist (extracting musical data from microphone input) and generates responses in real time during the performance, based on this data on the one hand and on generative compositional processes programmed by Ben-Tal on the other. The result is a new form of musical dialogue, created by the possibilities of new technology and drawing on the wealth of 300 years of music-making. Ben-Tal adjusts parameters in the system during the performance to shape larger-scale aspects, but the moment-to-moment generation of the AI’s musical material is done automatically within the computer.

Duration

Our preference would be for two short, contrasting improvisations (5–10 minutes each). These would be most suitable for the AI concert (31/8).

Two excerpts from a concert performance

Example 1 · Example 2

We also submitted an article about this work to the paper track of the conference.

Technical requirements

  1. Piano

  2. Two condenser microphones + mic stands

  3. Stereo input and output (XLR cables)

  4. Small table

  5. Chair
