
Modelling quality?

Open mishushakov opened this issue 5 years ago • 11 comments

Hey Keith, thanks for the VST!

I'm wondering how the parameters are implemented under the hood.

With profiling (looking at you, Kemper) you have a snapshot of an amp at fixed settings like Gain, Treble, Bass, and Middle, but the profiler itself cannot accurately simulate the amp's dynamics at higher or lower levels of those settings - instead it applies post FX and EQ.

With modelling you have a model of an amp, and when you tweak the settings it's more like an invisible hand is turning a knob on a "real" amp.

Which approach is used, and do you have any ideas to improve the quality?

mishushakov avatar Oct 11 '20 12:10 mishushakov

Great question. It uses two snapshot models, one for clean and one for lead, taken at the most medium settings I could dial in on my amp. The EQ settings are applied from a simple 4-band EQ, and gain is just the signal level prior to the WaveNet model. Master volume is the level after the model.

I think improving the sound would mainly involve a more stringent process of recording the training samples. The EQ could be more advanced as well. I thought of the possibility of using multiple models for each setting, and switching between them as the knobs are turned, but have not experimented with that yet.
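As described above, the chain is roughly input gain → snapshot model → 4-band EQ → master volume. A minimal NumPy sketch of that ordering (`wavenet` and `four_band_eq` are hypothetical placeholders standing in for the real WaveNet model and EQ, not the plugin's actual implementation):

```python
import numpy as np

def wavenet(x):
    # Placeholder for the trained snapshot model (clean or lead);
    # tanh stands in for the learned nonlinearity.
    return np.tanh(x)

def four_band_eq(x, gains_db):
    # Placeholder 4-band EQ: apply per-band gains in the FFT domain.
    X = np.fft.rfft(x)
    bands = np.array_split(np.arange(X.size), 4)
    for idx, g_db in zip(bands, gains_db):
        X[idx] *= 10 ** (g_db / 20)
    return np.fft.irfft(X, n=x.size)

def process(x, gain, eq_gains_db, master):
    x = gain * x                       # input gain, applied before the model
    x = wavenet(x)                     # snapshot model (clean or lead)
    x = four_band_eq(x, eq_gains_db)   # 4-band EQ, after the model
    return master * x                  # master volume, last in the chain

y = process(np.random.randn(1024), gain=2.0,
            eq_gains_db=[0, 0, 0, 0], master=0.5)
```

The key point is where each knob sits relative to the model: gain shapes what the network sees (so it drives the modelled distortion), while EQ and master only shape what comes out.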

GuitarML avatar Oct 11 '20 12:10 GuitarML

Cool. I think maybe you could add parameters to the model's training?

My idea is to let you record different samples at different settings and feed them into the model so it sees the differences/dynamics; then, once you turn a knob, it will use that data to guess what the reference amp would've sounded like.

The dataset would consist of the sounds plus a database of the knob positions used in each sound.

?
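A dataset like the one described could be as simple as paired recordings plus a table of knob positions per take. A hypothetical manifest layout (illustrative only, not the project's actual format):

```python
import json

# Hypothetical dataset manifest: each entry pairs a DI (input) recording
# with the amp (target) recording, plus the knob positions (normalized
# 0..1) the amp was set to for that take.
manifest = [
    {"input": "di_01.wav", "target": "amp_01.wav",
     "knobs": {"gain": 0.3, "bass": 0.5, "mid": 0.5, "treble": 0.7}},
    {"input": "di_02.wav", "target": "amp_02.wav",
     "knobs": {"gain": 0.8, "bass": 0.5, "mid": 0.4, "treble": 0.6}},
]

def knob_vector(entry, order=("gain", "bass", "mid", "treble")):
    # During training, this vector would be fed to the model as a
    # conditioning input alongside the audio.
    return [entry["knobs"][k] for k in order]

print(json.dumps(manifest[0], indent=2))
```

With enough takes spread across knob positions, the model could in principle interpolate settings it never saw recorded.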

mishushakov avatar Oct 11 '20 13:10 mishushakov

Yes, I think we’re talking about the same thing, maybe just a slightly different way to implement it. I’d like to get around to testing that out. I’d be curious to see whether the current implementation could switch WaveNet models smoothly enough to be feasible.

GuitarML avatar Oct 12 '20 12:10 GuitarML

Maybe solve that by preloading/caching all the required models?
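One way the preloading idea could avoid zipper noise when a knob moves is to keep all snapshots in memory and crossfade the outputs of the two nearest ones. A runnable sketch (the `SnapshotModel` class is a hypothetical stand-in for a loaded WaveNet snapshot):

```python
import numpy as np

class SnapshotModel:
    """Hypothetical stand-in for a preloaded WaveNet snapshot model."""
    def __init__(self, gain):
        self.gain = gain
    def process(self, block):
        # A real snapshot would run the neural net; tanh soft-clipping
        # keeps the sketch self-contained and runnable.
        return np.tanh(self.gain * block)

# Preload/cache all snapshots once, keyed by the knob position
# each one was trained at.
cache = {pos: SnapshotModel(gain=1.0 + 4.0 * pos) for pos in (0.0, 0.5, 1.0)}

def process_block(block, knob):
    """Crossfade between the two nearest cached snapshots."""
    positions = sorted(cache)
    lo = max(p for p in positions if p <= knob)
    hi = min(p for p in positions if p >= knob)
    if lo == hi:
        return cache[lo].process(block)
    t = (knob - lo) / (hi - lo)  # linear interpolation weight
    return (1 - t) * cache[lo].process(block) + t * cache[hi].process(block)

out = process_block(np.zeros(64), knob=0.25)
```

The trade-off is running two models per block near knob boundaries, which doubles the CPU cost compared to a single snapshot.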

For now I suggest focusing on making your amp sound great. Remember the original title? It was about getting the software to sound like a 600-buck amp. Then you could upload a demo comparing the modelled amp to the original, so people who never listened to it could (or better, could not) hear the difference!

On the feedback side, I see the future of this in embeddable electronics. As a bedroom musician I cannot justify spending thousands on profilers; this would allow me to basically build my own amp using off-the-shelf components.

congrats on 500 stars, well deserved 👍

mishushakov avatar Oct 12 '20 14:10 mishushakov

Is this "trained" à la deepfake, with a set of target examples?

How loud was the amp? Would it behave differently if you made an example set where everything goes into feedback? Would it yield a feedback effect in the model?

Thanks for being the first to actually do this....

chipmcdonald avatar Oct 12 '20 15:10 chipmcdonald

mishushakov, Thank you, and I appreciate the feedback. I really didn’t do much to build this, just saw an opportunity to make something fun. The real-time WaveNet algorithm was already developed, and the PyTorch training was already out there; it was just a matter of making the two compatible with each other and slapping on a nice-looking GUI. I had a lot of help getting there too.

The 600-dollar tag was mainly to grab attention while still being “technically” correct, but if I could get my hands on a really nice amp there would be a lot of room for improvement. Unfortunately, in this climate no one is renting out amps...

GuitarML avatar Oct 12 '20 15:10 GuitarML

chipmcdonald, I guess you could think of this as a deep fake, and yes, the resultant model will depend on the training data, which is a pre-amp signal and a post-amp signal (in this case from a microphone). The recordings I used are about 4 minutes long. If you include something like feedback in the recording I expect it would come out in the model, but I haven’t experimented with anything like that to say for sure.

GuitarML avatar Oct 12 '20 15:10 GuitarML

For the samples/data, are you trying to give equal representation of the sound/amp across all notes, a limited range, or... "?" I presume I'm coming at this with preconceived notions from having made impulse responses, etc. - a spectral/frequency-domain perspective. The cognitive leap from what goes in > what comes out is abstruse to me at this point.

chipmcdonald avatar Oct 16 '20 14:10 chipmcdonald

@chipmcdonald https://en.wikipedia.org/wiki/Black_box

mishushakov avatar Oct 16 '20 14:10 mishushakov

Wow this project really took flight! Congrats @keyth72 for over 600 stars now!

Regarding models with controls, I think it's better to use control inputs similarly to how speech synthesis models treat input acoustic features. Then you would have a single model that can do all the controls. Check https://github.com/r9y9/wavenet_vocoder for reference.

Having a collection of snapshot models is kind of clunky and it's not straightforward to interpolate between those in the model parameter space.
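To illustrate the conditioning idea: in WaveNet's gated activation, a conditioning vector is projected and added inside both the filter and gate branches, tanh(W_f*x + V_f*c) * sigmoid(W_g*x + V_g*c). A toy single-layer NumPy sketch of that, with the knob settings as the conditioning vector (all weights and names are illustrative, not taken from any of the linked repos):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer": filter/gate convolution kernels plus projections of the
# conditioning (knob) vector into each branch.
W_f, W_g = rng.standard_normal(3), rng.standard_normal(3)
V_f, V_g = rng.standard_normal(4), rng.standard_normal(4)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def gated_layer(x, knobs):
    # Gated activation: the knob projection shifts both branches,
    # so changing a knob changes the layer's nonlinear response.
    conv_f = np.convolve(x, W_f, mode="same") + V_f @ knobs
    conv_g = np.convolve(x, W_g, mode="same") + V_g @ knobs
    return np.tanh(conv_f) * sigmoid(conv_g)

x = rng.standard_normal(64)
y_clean = gated_layer(x, np.array([0.2, 0.5, 0.5, 0.5]))  # low gain knob
y_hot = gated_layer(x, np.array([0.9, 0.5, 0.5, 0.5]))    # high gain knob
```

Because the knobs enter the network itself, one model covers the whole control space, and interpolating a knob value is well-defined in a way that interpolating between separately trained snapshot weights is not.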

ljuvela avatar Oct 20 '20 13:10 ljuvela

Check out related projects:

- https://github.com/olegkapitonov/Kapitonov-Plugins-Pack by @olegkapitonov
- https://github.com/resonantdsp/swankyamp by @resonantdsp
- https://github.com/csteinmetz1/ronn by @csteinmetz1

thanks

mishushakov avatar Nov 04 '20 12:11 mishushakov

Doing some issue cleanup; I think this one has run its course. Thanks all for the great contributions! Fun to look back on this and see how far it's come. Closing issue.

GuitarML avatar Apr 11 '23 18:04 GuitarML

Crazy how quick the time passed. Almost 3 years 😜

mishushakov avatar Apr 11 '23 19:04 mishushakov