
Deploying NN-Based Generative Models of Symbolic Music as VST3 Plugins using NeuralMidiFx Wrapper

Tutorial Proposal for AIMC 2023

Published on Aug 29, 2023

Abstract

We have developed a JUCE-based VST3 wrapper/template, called NeuralMidiFx, that streamlines the deployment of AI-based generative models of symbolic music. The template is intended for researchers with little to no prior experience in plugin development. In this tutorial, we will guide attendees through the deployment process using NeuralMidiFx: starting from a pre-trained generative model, we will build a VST3 plugin from scratch that deploys it.

1. Description

To properly research, develop, and evaluate AI-based generative music systems aimed at performance or composition, it is crucial to have elaborate and intelligible user-system interactions. Unfortunately, deploying systems that are easy for non-technical users to operate and understand is extremely costly and time-consuming, making it difficult for researchers with limited resources to properly engage users with their systems.

To address this issue, we have developed a wrapper/template called NeuralMidiFx, which streamlines the deployment of NN-based symbolic generative systems as VST3 plugins. The wrapper requires minimal familiarity with plugin development while still allowing for fully fledged plugins. NeuralMidiFx is designed so that deployment tasks are clearly divided into two groups: (1) model-related tasks, and (2) VST-development-related tasks. In this division, all model-related procedures are handled by the researcher, while all plugin-specific technical procedures are handled by the wrapper (see Figure 1).

Figure 1 - Architecture of the plugin. White boxes tagged with a red dot require additional implementation by the developer, while all other threads and communication channels are readily available and need no interaction from the developer.

In this tutorial, we will provide a step-by-step guide to the deployment process using a pre-trained model: a variational autoencoder (VAE) trained for drum generation and real-time instrument accompaniment. The model will be deployed in two contexts: (1) a non-real-time drum loop generation system, and (2) a real-time drum accompaniment system. (Note: while this session focuses on drum generation, all the discussed concepts can be used to deploy any symbolic music generator.)

The tutorial is intended for researchers interested in deploying and distributing their generative systems. No prior experience with plugin development or deployment frameworks such as JUCE is required to follow the session; however, some familiarity with the C++ programming language will be helpful (though not necessary).

2. Organizers

Name | Role | Affiliation | Email
Behzad Haki | Researcher | Universitat Pompeu Fabra (UPF) | [email protected]
Julian Lenz | Researcher | | [email protected]
Sergi Jorda | Supervisor | | [email protected]

3. Tentative Plan

We envision that the session will be 4 hours long. Below is the tentative plan for the session:

  • [15 min] Overview of NeuralMidiFx

  • [15 min] Preparation of models for deployment

  • [15 min] Automated generation of graphical interfaces

  • [45-60 min] Input Preparation for Inference

  • [45-60 min] Inference using the model

  • [45-60 min] Preparing generations for playback

  • [15 min] Final Remarks

  • [30 min] Q/A

4. Hybrid Possibility

There is a possibility to conduct this session in a hybrid format. Moreover, we intend to record the session and share it publicly (if the conference regulations permit).

5. Technical Requirements

We will need a room to accommodate 10-15 attendees. This would be the ideal number for us; however, if there is a higher number of requests, we will be able to adapt the session.

Lastly, we will need a projector as well as an external monitor.

6. Links

Paper

https://aimc2023.pubpub.org/pub/givwzz98/draft?access=mxqkwrij

Repository

https://github.com/behzadhaki/NeuralMidiFXPlugin.git

Challenges of Deployment as VST

Challenges of Deploying Symbolic NN-based Models of Music As VST Plugins

7. Demos

Tutorial 1: Introduction to NeuralMidiFx
Tutorial 2: Parameters and GUI Generation
Tutorial 3: InputTensorPreparator Thread (ITP)
Tutorial 4: Model Thread (MDL)
Tutorial 5: PlaybackPreparatorThread (PPP)