by Koray Tahiroğlu, Shenran Wang, Eduard Mihai Tampu, and Jackie Lin
Published: Aug 29, 2023
This paper introduces a new course on deep learning with audio that aims to equip students with the skills and knowledge needed to explore and work with rapidly evolving AI music technologies.
This paper explores the use of Variational Autoencoders (VAEs) to produce latent spaces for tonal music representation and evaluates their effectiveness in defining cognitive distances from musical pitch.
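To make the latent-space idea concrete, below is a minimal sketch of a VAE over tonal features, assuming 12-dimensional pitch-class (chroma) inputs; the class name, layer sizes, and the chroma representation are illustrative assumptions, not the authors' actual model.

```python
# Minimal VAE sketch for learning a latent space over tonal features.
# Assumption: inputs are 12-dimensional pitch-class (chroma) vectors in [0, 1];
# architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChromaVAE(nn.Module):
    def __init__(self, input_dim=12, hidden_dim=64, latent_dim=2):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))  # chroma activations in [0, 1]

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_loss = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kld

# Example: embed a batch of chroma vectors and compare distances in latent space,
# the kind of measurement one could relate to perceived pitch relations.
model = ChromaVAE()
batch = torch.rand(8, 12)
mu, _ = model.encode(batch)
pairwise = torch.cdist(mu, mu)  # pairwise latent distances
```

Distances between the encoded means (`mu`) give one simple way to compare latent positions of tonal material, which is the kind of quantity an evaluation against cognitive pitch distances could build on.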
The study uses online video observation to investigate patterns of interactivity. Relations of affordances, temporal constraints, and liveness between the code and the music are visualized as a modular framework. The aim is for the reader to employ this map when designing interactive AI applications.