
API: how to send audio (input) to VST?

Open drscotthawley opened this issue 7 years ago • 7 comments

I downloaded a free compressor plugin and started modifying your example of the Dexed synth to use the compressor, and I could query all the parameters and their names, but then...

...I've been all through the code and docs, and I still can't figure it out: How does one send audio into the VST plugin?

I see several "get" routines in the source for RenderEngine.... but for a plugin like an echo or compressor ...how do I "put"?

Thanks!

(little screenshot of how far I got, LOL) [screenshot attached]

drscotthawley avatar Apr 28 '18 02:04 drscotthawley

I see the problem - so with the current VST you are using, you want to send buffers of audio in and get altered FX audio out, rather than triggering a MIDI synth to subsequently generate audio frames?

This VST host was designed in my undergrad dissertation to host a synth and create one-shot sounds from various synthesiser patches. The goal was to 'learn' an automatic VST synthesiser programmer, by training a neural network to map between the MFCC features (derived from the sound) and the parameters used to make that sound.

Although it's been a year since I have looked at the source, I suspect the code would need to be modified in RenderEngine::renderPatch.

Lines 121, 122 of RenderEngine.cpp show the audioBuffer being passed to the plugin as a reference. In this case, we would want to fill the audioBuffer object with data before it goes into the plugin. I could be wrong - it's been a while since I worked with JUCE, but that is certainly where I would start:

// Turn Midi to audio via the vst.
plugin->processBlock (audioBuffer, midiNoteBuffer);
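
Off the top of my head, something along these lines might do it - totally untested, and inputAudio / numInputChannels / samplesPerBlock are just placeholder names - but it shows the idea of filling the buffer before the call:

// Untested sketch: fill audioBuffer with the input audio first, so an FX
// plugin processes real samples in place rather than an empty buffer.
AudioBuffer<float> audioBuffer (numInputChannels, samplesPerBlock);
MidiBuffer midiNoteBuffer; // left empty - an FX plugin needs no MIDI events

for (int channel = 0; channel < numInputChannels; ++channel)
    audioBuffer.copyFrom (channel, 0, inputAudio[channel], samplesPerBlock);

// The plugin overwrites audioBuffer with the processed (e.g. compressed) audio.
plugin->processBlock (audioBuffer, midiNoteBuffer);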

I have about 20 days left on my thesis, and as I said I will be reviving RenderMan for a creative ML course I'll be doing. Until then my hands are tied! Let me know if I can point you in the right direction, and if not, I'll add this to the list of features to be implemented.

Thanks for your perseverance!

fedden avatar Apr 28 '18 09:04 fedden

In that case, I have a language suggestion: change references to "VST host" to "VSTi host."
Because that's what you've got. https://www.quora.com/What-is-the-difference-between-VST-and-VSTi ...It would help keep people like me from getting too excited.

Thanks for pointing out the place to start in the code. If I can convince a couple of local JUCE experts to help, maybe we can add audio input support and send a PR. How 'bout we leave this issue open, and maybe someone else in the world will contribute!

Aside: Good luck with your thesis! Sounds interesting. I'm working on deep learning as well, only with audio. And I'll be in London in late June for a couple AI conferences. I'd love to visit Goldsmiths while I'm around. I took Rebecca Fiebrink's online course recently and loved it.

drscotthawley avatar Apr 28 '18 14:04 drscotthawley

Apologies if I wasted your time - not my intention - and the references to VST have been changed, so thanks for pointing that out. Rebecca is well worth meeting if you get the chance - one of the standout lecturers for me by far!

Contributions are very welcome, but the potential to train neural networks for automatic mixing / FX is an enticing one, so I'll see what can be done in the coming months. I should add the VSTi programming project has already been accepted by IEEE as a paper. My dissertation this year is focused on neural audio synthesis at high sample rates and in real-time usage! :)

fedden avatar Apr 28 '18 15:04 fedden

Hi Scott,

I had the same excitement till I figured out that the renderPatch function applies only to virtual instruments. Also, librenderman has a feature extraction process that is a little bit more complex than my needs. Those reasons encouraged me to build a very simple audio-to-audio VST host interface, which you can check out at https://github.com/igorgad/dpm under the vstRender folder.

I also decided to replace the boost interface with swig. The bad news is that it is still not working due to problems with the swig interface ;/
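
Roughly, the shape of the interface is something like this (illustrative sketch only, with made-up names, not the actual vstRender code) - the idea is that from Python you pass an array of samples in and get the processed array back:

// Illustrative sketch of an audio-in / audio-out host class that SWIG
// could wrap for Python - names are made up, not the real vstRender code.
#include <string>
#include <vector>

class AudioFxHost
{
public:
    // Load the plugin binary from disk; returns false on failure.
    bool loadPlugin (const std::string& path);

    // Set one of the plugin's automatable parameters (normalised 0..1).
    void setParameter (int index, float value);

    // Run a block of mono samples through the plugin and return the
    // processed block of the same length.
    std::vector<float> process (const std::vector<float>& input,
                                double sampleRate, int blockSize);
};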


igorgad avatar Apr 28 '18 18:04 igorgad

@fedden No worries; I'd just been wanting an audio-to-audio Python VST host for a while. That's great about your paper being accepted!

@igorgad Great to hear about your project. I pulled your repo, built it, and will see if I can help. Currently getting an error that seems unrelated to swig. I'll send you an Issue...

drscotthawley avatar Apr 28 '18 19:04 drscotthawley

@drscotthawley did you find a way to handle audio->VST from python?

faroit avatar Mar 10 '20 21:03 faroit

@faroit It's been a while, but yes, we had something working once: Check out @igorgad's "dpm" repo, e.g.

https://github.com/igorgad/dpm/blob/master/contrib/run_plugin.py

I keep meaning to come back to this, but so many other things to work on! Let me know if this helps and/or if you make progress with it.

drscotthawley avatar Mar 11 '20 04:03 drscotthawley