Elaine Chew is Professor of Engineering, jointly appointed between the Department of Engineering (Faculty of Natural, Mathematical & Engineering Sciences) and the School of Biomedical Engineering & Imaging Sciences (Faculty of Life Sciences & Medicine) at King's College London. An operations researcher and pianist by training, she is a leading authority in music representation, music information research (MIR), and music perception and cognition, and an established performer. A pioneering researcher in MIR, she is forging new paths at the intersection of music performance and cardiovascular science. Her research focuses on the mathematical and computational modelling of musical structures in performed music and in electrocardiographic traces, with applications to music-heart-brain interaction and computational arrhythmia research. Her work has been recognised by the ERC, PECASE, NSF CAREER, and Harvard Radcliffe Institute for Advanced Study Fellowships. She is an alum (Fellow) of the NAS Kavli and NAE Frontiers of Science/Engineering Symposia. Elaine received PhD and SM degrees in Operations Research from MIT, a BAS in Mathematical & Computational Sciences (honours) and Music (distinction) from Stanford, and FTCL and LTCL diplomas in Piano Performance from Trinity College London.
Abstract: Performer-centered AI and Creativity
"When a pianist sits down and does a virtuoso performance he is in a technical sense transmitting more information to a machine than any other human activity involving machinery allows" ~ Robert Moog
Performance is the primary medium through which music is communicated, whether through physical acoustic or electronic instruments. It is the conduit for the flow of artistic and intellectual ideas, and for the shaping of emotional experiences. But capturing the formidable creativity underlying compelling performances presents many challenges, not least the fleeting nature of performance and the lack of a written tradition that would allow it to be represented and held still for close study. In a series of vignettes, I shall present a biographical narrative of my search, from the vantage point of a pianist, for ways to capture and represent the ineffable know-how of performance. The journey spans a range of efforts to name and make models of the things performers do, such as choreographing tension based on harmony and time, stretching the limits of notational technology to represent performed timings (also applied to arrhythmic sequences), and tapping into the volunteer thinking of citizen scientists to detect the structures performers create and to design or compose a performance. Most recently, the research turns to healthcare, where performed structures serve as a means to modulate cardiovascular autonomic response.
About Dadabots
CJ met Zack at Berklee College of Music, back when they used to play instruments. After falling down the rabbit hole of Python, arXiv, and GitHub, they formed Dadabots, a mythical AI death metal band & hackathon team & neural audio synthesis research lab. As Prometheus brought fire to man, Dadabots bridged the ivory tower into music culture, carrying Theano models with buggy dependencies, GPUs blazing, ushering the earliest neural audio synthesis experiments into the hands of musicians, crossing the deadly crevasse from PhD research into mathcore, skate punk, breakcore, beatbox champions, hip hop producers, bass music sound designers, robot bands, and more. Their musicianship has since deteriorated, and they are embarrassingly out of practice. CJ is now head of audio research at Harmonai, training audio diffusion models on StabilityAI's 4000+ GPU cluster and building state-of-the-art generative music tools for creators. And Zack is head of machine learning at Pex, building music fingerprinting services to help enforce copyright. Together they are a dadaist paradox driven by satire & mischief.
Abstract
Musicians who command Machine Learning are paving the way to new sounds, new instruments, new workflows, and new challenges to the throne of human creativity. And it's really fun. We'll bring you on our journey from NeurIPS to Vice: the re-animation of Kurt Cobain, the black metal Turing test, the Sinatra copyright strike, human vs. AI beatbox battles, 24/7 generative livestreams, a VST plugin, and more. We'll show you what it's like to be musician-coders wielding neural nets as our instrument, collaborating with our favorite bands, building tools, running an R&D lab, and advising PhD research. We'll talk about raw audio neural synthesis from RNNs to diffusion models, and give previews of our latest projects.