30th August 2023
Note: some workshops run concurrently; please don't register for workshops with conflicting schedules!
09:00 - 16:30 | Gardener Tower, ACCA
Physical and gestural musical instruments that take advantage of artificial intelligence and machine learning to explore instrumental agency are becoming more accessible thanks to new tools and workflows specialised for mobility, portability, efficiency and low latency. This full-day, hands-on workshop will provide these tools to participants, along with support from their creators. First, a series of tutorials will introduce hardware sensor and actuator elements and the connective software for neural mapping, synthesis and music generation. Participants will then design their own instrument assemblages from these parts, and finally spend some time demoing and improvising with them.
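(For a flavour of the "neural mapping" element above: in this context it usually means a small network that translates raw sensor readings into synthesis parameters. The sketch below is purely illustrative, written in PyTorch with hypothetical sensor and parameter counts; it is not the workshop's actual toolchain.)

```python
# Illustrative only: a tiny "neural mapping" from sensor input to synth
# parameters, in the spirit of the workshop's tools (not its actual code).
import torch
import torch.nn as nn

N_SENSORS = 6   # e.g. accelerometer + gyroscope axes (hypothetical)
N_PARAMS = 4    # e.g. pitch, cutoff, resonance, amplitude (hypothetical)

class GestureMapper(nn.Module):
    """Maps one frame of sensor readings to synthesis parameters."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SENSORS, hidden),
            nn.Tanh(),
            nn.Linear(hidden, N_PARAMS),
            nn.Sigmoid(),  # keep parameters in [0, 1] for the synth
        )

    def forward(self, sensors: torch.Tensor) -> torch.Tensor:
        return self.net(sensors)

mapper = GestureMapper()
# One frame of (normalised) sensor data -> one frame of synth parameters.
params = mapper(torch.rand(1, N_SENSORS))
print(params)  # tensor of shape (1, 4), values in [0, 1]
```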
09:00 - 13:00 | ACCA Seminar Room & Online
We have developed a JUCE-based VST3 wrapper/template, called NeuralMidiFx, that streamlines the deployment of AI-based generative models of symbolic music. The template is intended for researchers with little to no prior experience of plugin development. In this tutorial, we will guide attendees through the deployment process using NeuralMidiFx: starting from a pre-trained generative model, we will build, from scratch, a VST3 plugin that deploys it.
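(The NeuralMidiFx workflow itself will be covered step by step in the session. As a hedged sketch of the general pattern only: plugin templates of this kind typically load a model that has been serialised ahead of time so the plugin needs no Python runtime, for example via TorchScript; whether the tutorial uses exactly this format, and the model and file names below, are assumptions.)

```python
# Hypothetical sketch: serialising a pre-trained symbolic-music model so a
# C++ plugin (e.g. one built from a template such as NeuralMidiFx) could
# load it at runtime. TorchScript is one common route; the specifics here
# are illustrative, not the tutorial's actual code.
import torch
import torch.nn as nn

class TinyMidiModel(nn.Module):
    """Stand-in for a pre-trained generative model of symbolic music."""
    def __init__(self, vocab_size=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)  # next-token logits at each step

model = TinyMidiModel().eval()
# Script the model and save it for the plugin to load at runtime.
scripted = torch.jit.script(model)
scripted.save("generator.pt")  # hypothetical filename
```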
14:00 - 16:30 | ACCA Seminar Room
This speculative design workshop explores the use of generative AI tools for musical instrument concept design. Workshop participants will use text-to-image AI tools to rapidly generate an abundance of instrument designs, then refine selected designs into mock-ups to imagine how the instruments might be played and how they might sound. The designs produced in the workshop will be shared in a show-and-tell at the end of the session, as well as on a website and in a physical zine thereafter. The workshop is intended to be interactive, social and hands-on. No prior experience of AI, design or musical instruments is required; all are welcome!
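(The workshop does not prescribe particular tools. Purely as one example of what a text-to-image tool can look like in practice, here is a sketch using the open-source diffusers library; the specific model checkpoint and prompt are illustrative assumptions, and the workshop may well use different, e.g. web-based, tools.)

```python
# One example of a text-to-image tool of the kind the workshop describes.
# Library, model checkpoint and prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open model checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("concept sketch of a futuristic musical instrument, "
          "playable with both hands, glass and brass, studio photo")
image = pipe(prompt).images[0]
image.save("instrument_concept.png")
```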
14:00 - 16:30 | Sussex Humanities Lab
This two-hour tutorial introduces participants to AI-generated (machine) folk music through hands-on, in-person practice. In the first 45 minutes, the organiser will teach an AI-generated folk tune and discuss it with attendees. The tune will first be performed, then taught gradually by repeating small phrases and combining them to form its parts. Participants should be comfortable with their musical instrument of choice and able to learn by ear (though music notation will be provided). Following a 15-minute break, the next 45 minutes will present the state of the art in modelling folk music with machine learning, including sequence-modelling systems based on autoregression via recurrent neural networks (LSTMs) and attention (transformers), as well as masked language modelling.
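(To make the autoregressive approach concrete: systems of this kind typically treat a tune transcription, often in ABC notation, as a token sequence and train a network to predict the next token. Below is a minimal sketch of that setup; the toy data, network sizes and training loop are illustrative assumptions, not the actual models presented in the tutorial.)

```python
# Minimal sketch of autoregressive sequence modelling of folk tunes:
# an LSTM trained to predict the next character of an ABC-style
# transcription. Data and sizes here are toy placeholders.
import torch
import torch.nn as nn

tunes = ["X:1\nK:Dmaj\n|:d2 fd adfd|", "X:2\nK:Gmaj\n|:g2 bg dgbg|"]
vocab = sorted({ch for t in tunes for ch in t})
stoi = {ch: i for i, ch in enumerate(vocab)}

class TuneLSTM(nn.Module):
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # next-token logits at each position

model = TuneLSTM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for tune in tunes:  # toy training loop: predict each next character
    ids = torch.tensor([[stoi[ch] for ch in tune]])
    logits = model(ids[:, :-1])
    loss = loss_fn(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```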