Keith Bloemer
@mishushakov Can't wait to try it out! Nice work!
I think so, but for the plugin I was planning on using json format for the models anyway. H5 is just another data format, as long as all the weights...
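For anyone curious, here is a rough sketch of what I mean by dumping the weights to JSON instead of H5 (the helper name and layout are just assumptions for illustration, not the plugin's actual format):

```python
import json

def export_weights_json(model, path):
    # Hypothetical helper: write each layer's weight arrays as nested
    # lists so the plugin can parse them without an HDF5 dependency.
    data = {layer.name: [w.tolist() for w in layer.get_weights()]
            for layer in model.layers}
    with open(path, 'w') as f:
        json.dump(data, f)

# Usage (assumed): export_weights_json(model, 'model_weights.json')
```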
The pre-emphasis filter looks correct, but I'm not getting the same training result as the original method of splitting the data without the data loader class. I tried with...
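For reference, a minimal sketch of the first-order pre-emphasis filter I'm referring to (the 0.85 coefficient is an assumption; the papers use a similar value, but check the repo for the exact one):

```python
import numpy as np

def pre_emphasis(x, coeff=0.85):
    # First-order high-pass: y[n] = x[n] - coeff * x[n-1].
    # Applied to both target and prediction before the loss so
    # high-frequency errors are weighted more heavily.
    y = np.copy(x)
    y[1:] -= coeff * x[:-1]
    return y

# e.g. loss = mean((pre_emphasis(target) - pre_emphasis(pred)) ** 2)
```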
I think part of the problem is that the LSTM model I made isn't what they developed in the paper. They use a single input, single output stateful LSTM model,...
I would be interested to see if the stateful LSTM -> dense layers from the papers would work with your new code. I tried to get the stateful LSTMs...
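To be concrete, this is roughly the architecture I mean (a Keras sketch only; the batch size, sequence length, and hidden units here are assumptions, not what the paper or repo actually uses):

```python
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

batch_size = 1     # assumption: stateful layers need a fixed batch size
seq_len = 1        # single sample in, single sample out
hidden_units = 24  # assumption, matching the 24 units mentioned elsewhere

model = Sequential([
    # batch_input_shape is required when stateful=True so Keras can
    # keep a persistent hidden state between calls
    LSTM(hidden_units, stateful=True,
         batch_input_shape=(batch_size, seq_len, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')
model.summary()
```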
That's the same thing I get with the old code and the TS9 example. Here is one (also TS9 sample) using the old code and loss=mse, with the lowered...
I ran some more tests using mse and mae for loss, this time by creating models for SmartAmpPro. When using the same settings (3 epochs, 24 hidden units) they sound...
Here are a few models using mse for loss, and with higher epochs (30 - 50). See the new colab script I added to the SmartAmpPro repo (train_colab_mse.ipynb). [loss_test_mse.zip](https://github.com/GuitarML/GuitarLSTM/files/6018916/loss_test_mse.zip)
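The only thing changing between those runs is the loss string passed at compile time, roughly like this (build_model is a hypothetical helper for illustration, not the actual notebook code):

```python
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

def build_model(hidden_units=24, loss='mse'):
    # Hypothetical helper: same architecture, only the loss differs.
    model = Sequential([
        LSTM(hidden_units, input_shape=(None, 1)),
        Dense(1)
    ])
    model.compile(optimizer='adam', loss=loss)
    return model

# model_mse = build_model(loss='mse')  # as in train_colab_mse.ipynb
# model_mae = build_model(loss='mae')
# model_mse.fit(x_train, y_train, epochs=30)  # e.g. 30-50 epochs
```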
@38github I ran a few initial tests with two hidden stateful LSTM layers followed by a dense layer, but wasn't able to get it to converge. It could be something...
@Alec-Wright thanks for the response! When I tried using a stateful LSTM layer in Keras the loss never went below about 0.9. I didn’t spend too much time on it,...
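In case it's useful for comparison, here is roughly the kind of stateful setup I was testing (all of the sizes and the dummy data below are assumptions; with stateful layers, shuffling has to be off and the states reset each epoch, which may be part of why my attempt didn't converge):

```python
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

batch_size, seq_len = 32, 100   # assumptions
n_batches = 10                  # assumption: dummy data only

x_train = np.random.randn(batch_size * n_batches, seq_len, 1).astype('float32')
y_train = np.random.randn(batch_size * n_batches, seq_len, 1).astype('float32')

model = Sequential([
    LSTM(24, stateful=True, return_sequences=True,
         batch_input_shape=(batch_size, seq_len, 1)),
    LSTM(24, stateful=True, return_sequences=True),
    Dense(1)
])
model.compile(optimizer='adam', loss='mse')

for epoch in range(3):
    # shuffle=False keeps sequences contiguous; reset the carried
    # hidden state once per pass over the data
    model.fit(x_train, y_train, batch_size=batch_size, epochs=1, shuffle=False)
    model.reset_states()
```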