
RTNeural-Variant

This repository contains an example audio plugin using RTNeural with a variant model type.

Building

To build from scratch, you must have CMake installed.

# Clone the repository
$ git clone https://github.com/Chowdhury-DSP/RTNeural-Variant.git
$ cd RTNeural-Variant

# build with CMake
$ cmake -Bbuild -DCMAKE_BUILD_TYPE=Release
$ cmake --build build --config Release

About

RTNeural contains two parallel APIs: a "static" API, for models whose architecture is known at compile-time, and a "dynamic" API, which can run inference for models of any architecture that RTNeural supports. The static API has significantly better performance, but it can be limiting in cases where the software wants to let users load a wide variety of models.
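
For reference, a rough sketch of how the two APIs are typically used is shown below. The layer sizes, model path, and helper function names are illustrative assumptions, not taken from this repository; the RTNeural class and function names reflect the library's usual API but should be checked against the RTNeural documentation.

#include <fstream>
#include <RTNeural/RTNeural.h>

// Static API: the architecture is fixed at compile-time via template parameters.
// Here, a hypothetical 1-input GRU(8) -> Dense(1) network.
using StaticModel = RTNeural::ModelT<float, 1, 1,
    RTNeural::GRULayerT<float, 1, 8>,
    RTNeural::DenseT<float, 8, 1>>;

float runStatic (StaticModel& model, float x)
{
    float input[] = { x };
    return model.forward (input); // compile-time dispatch, no virtual calls
}

// Dynamic API: the architecture is read from the model's JSON description at
// run-time, so any architecture that RTNeural supports can be loaded.
float runDynamic (const char* modelPath, float x)
{
    std::ifstream jsonStream (modelPath, std::ifstream::binary);
    auto model = RTNeural::json_parser::parseJson<float> (jsonStream);
    model->reset();
    float input[] = { x };
    return model->forward (input); // virtual dispatch per layer
}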

This example plugin demonstrates a workflow that uses RTNeural's static API in conjunction with std::variant, allowing users to load models from a fixed set of supported architectures. In this example, a set of recurrent networks with different architectures was generated with random weights (see python/model.py). A script was then used to automatically generate a C++ header file defining a std::variant type, along with a set of functions for creating the different model types inside the variant wrapper (see python/generate_variant_hpp.py and src/model_variant.hpp).
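
The generated header is more elaborate, but the core idea looks roughly like the simplified sketch below. The type aliases, the NullModel placeholder, and the helper name are illustrative and are not the exact contents of src/model_variant.hpp.

#include <variant>
#include <RTNeural/RTNeural.h>

// Placeholder alternative used when no model is loaded; it simply outputs silence.
struct NullModel
{
    void reset() {}
    float forward (const float*) { return 0.0f; }
};

// A couple of hypothetical static-API architectures the plugin knows how to load.
using GRUModel_8   = RTNeural::ModelT<float, 1, 1, RTNeural::GRULayerT<float, 1, 8>,   RTNeural::DenseT<float, 8, 1>>;
using LSTMModel_16 = RTNeural::ModelT<float, 1, 1, RTNeural::LSTMLayerT<float, 1, 16>, RTNeural::DenseT<float, 16, 1>>;

// The variant holds exactly one of the supported architectures at a time.
using ModelVariant = std::variant<NullModel, GRUModel_8, LSTMModel_16>;

// Inference dispatches to the active alternative with std::visit, so each model
// still runs through the fast static-API code path.
inline float processSample (ModelVariant& model, float x)
{
    return std::visit ([x] (auto& m)
    {
        float input[] = { x };
        return m.forward (input);
    }, model);
}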

The plugin opens without any model loaded and will output silence until one is selected. From there, the user can use the "Load Model" button to load any of the models in the models directory.
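
Continuing the sketch above, loading a model file might look roughly like the following. The "architecture" JSON field, the helper names, and the use of nlohmann::json are illustrative assumptions rather than the plugin's actual loading code, and weight loading is omitted entirely.

#include <fstream>
#include <string>
#include <nlohmann/json.hpp>

// Illustrative creation function in the spirit of the generated helpers: it picks the
// matching variant alternative based on a hypothetical "architecture" tag in the JSON
// file. The real generated code keys off the layer descriptions and also loads the
// model weights (omitted here).
inline bool createModelFromJson (const nlohmann::json& modelJson, ModelVariant& model)
{
    const std::string arch = modelJson.value ("architecture", "");
    if (arch == "gru_8")   { model.emplace<GRUModel_8>();   return true; }
    if (arch == "lstm_16") { model.emplace<LSTMModel_16>(); return true; }
    return false;
}

inline void loadModelFile (const std::string& path, ModelVariant& model)
{
    std::ifstream jsonStream (path, std::ifstream::binary);
    const auto modelJson = nlohmann::json::parse (jsonStream);

    // Fall back to the silent NullModel if the file describes an unsupported architecture.
    if (! createModelFromJson (modelJson, model))
        model = NullModel {};
}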

License

RTNeural-Variant is open source, and is licensed under the BSD 3-clause license. Enjoy!