rnnoise
Training
Hi, I have some general questions regarding training the model:
- Can I train the model on a GPU? If yes, what is the procedure?
- Can I train on a new dataset on top of your pre-trained model, or do I have to retrain the model with the whole dataset again?
- Can this library run on Nvidia Jetson embedded devices? What will the performance be?
- Yes, it can. See the TensorFlow GPU guide. You probably want to upgrade to TensorFlow 2.1.0 to get better performance out of the GPU (since TensorFlow 2.0.0 the optimized Nvidia implementation is used automatically). See the GPU check sketched after this list.
- Yes, you can. Use the tool provided in #79 to create the Keras model from the C sources, and modify rnn_train.py to load the trained model (see the loading sketch after this list).
- Probably not. You need a lot of RAM, and because of the deep backpropagation caused by the large sequence size, CPUs tend to outperform small graphics cards on this task.
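A minimal sketch of the GPU check mentioned in the first answer, assuming TensorFlow 2.1.0 is installed with CUDA support; nothing here is specific to RNNoise:

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means training will fall back to the CPU.
gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

# Optional: let TensorFlow allocate GPU memory on demand instead of grabbing it all up front.
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```

If the list is non-empty, running rnn_train.py unchanged should already place the Keras GRU layers on the GPU.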
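And a hedged sketch for the second answer, showing how rnn_train.py could be modified to start from the converted weights instead of random initialization; the weights file name below is only a placeholder for whatever the tool from #79 actually writes:

```python
# Inside rnn_train.py, after the Keras model has been built and compiled
# (`model` refers to that model), load the converted weights so training
# continues from the pre-trained state instead of from scratch.
PRETRAINED_WEIGHTS = 'converted_rnnoise_weights.hdf5'  # placeholder file name

model.load_weights(PRETRAINED_WEIGHTS, by_name=True)

# Then call model.fit(...) on the combined old + new feature data as usual;
# only the layer weights are reused, the optimizer state starts fresh.
```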
@Zadagu This is the specification of the Nvidia Jetson Nano:
| Component | Specification |
| --- | --- |
| GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores, 0.5 TFLOPS (FP16) |
| CPU | Quad-core ARM® Cortex®-A57 MPCore processor |
| Memory | 4 GB 64-bit LPDDR4, 1600 MHz, 25.6 GB/s |
How much RAM is needed to run inference with the model?
@Zadagu I want to continue training the model that was already trained with various noises: I want to add some datasets while keeping the previous ones. Is that possible?
After some modifications to the training script, my PC consumes about 20 GB of RAM during training. If you use the script as it is, you will need around 35 GB.
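If 35 GB is out of reach, a hedged sketch of one way to lower the footprint is to memory-map the raw float32 feature file instead of reading it fully into RAM. The file name, the number of values per frame, and the window size below are assumptions for a typical rnn_train.py setup and need to be matched to your own data:

```python
import numpy as np

FEATURE_FILE = 'training_data.f32'  # placeholder: raw float32 output of the feature extraction step
VALUES_PER_FRAME = 87               # assumption: features + gains + VAD flag per frame
WINDOW_SIZE = 2000                  # assumption: sequence length used by rnn_train.py

# Memory-map the file so frames are paged in on demand instead of all at once.
raw = np.memmap(FEATURE_FILE, dtype='float32', mode='r')
nb_sequences = raw.shape[0] // (VALUES_PER_FRAME * WINDOW_SIZE)
data = raw[:nb_sequences * WINDOW_SIZE * VALUES_PER_FRAME].reshape(
    nb_sequences, WINDOW_SIZE, VALUES_PER_FRAME)

print(nb_sequences, "sequences of", WINDOW_SIZE, "frames")
# Feed slices of `data` to model.fit() through a generator or tf.data pipeline
# so that only a few batches are resident in RAM at any time.
```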
Training will be done in another environment. I want to use the library in my application running on Nvidia Jetson devices.
RNNoise is very lightweight in execution. But you should note that the RNNoise binary doesn't use GPU acceleration at all. See the RNNoise paper for a complexity analysis: https://jmvalin.ca/papers/rnnoise_mmsp2018.pdf
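To get a feel for the CPU-only cost on a target like the Jetson's Cortex-A57, you can time the C library directly. Below is a hedged sketch using Python's ctypes; the shared-library path is a placeholder, and rnnoise_create is called with a NULL argument so the call works whether or not the installed version expects an RNNModel pointer:

```python
import ctypes
import time
import numpy as np

FRAME_SIZE = 480  # 10 ms frames at 48 kHz, the frame size RNNoise works on

# Placeholder path: point this at the librnnoise you built for the target.
lib = ctypes.CDLL('./librnnoise.so')
lib.rnnoise_create.restype = ctypes.c_void_p
lib.rnnoise_destroy.argtypes = [ctypes.c_void_p]
lib.rnnoise_process_frame.restype = ctypes.c_float
lib.rnnoise_process_frame.argtypes = [ctypes.c_void_p,
                                      ctypes.POINTER(ctypes.c_float),
                                      ctypes.POINTER(ctypes.c_float)]

st = lib.rnnoise_create(None)  # NULL = default model (ignored by older versions)

# Random "audio" in 16-bit PCM range, as the demo feeds it, plus an output buffer.
frame = (np.random.randn(FRAME_SIZE) * 1000.0).astype(np.float32)
out = np.zeros(FRAME_SIZE, dtype=np.float32)
in_ptr = frame.ctypes.data_as(ctypes.POINTER(ctypes.c_float))
out_ptr = out.ctypes.data_as(ctypes.POINTER(ctypes.c_float))

n_frames = 1000  # 10 seconds of audio
start = time.perf_counter()
for _ in range(n_frames):
    lib.rnnoise_process_frame(st, out_ptr, in_ptr)
elapsed = time.perf_counter() - start
lib.rnnoise_destroy(st)

print(f"{elapsed:.3f} s to denoise {n_frames * 0.01:.1f} s of audio "
      f"(real-time factor {elapsed / (n_frames * 0.01):.3f})")
```

A real-time factor well below 1.0 leaves headroom for the rest of the application on the same core.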
@Zadagu Thanks for the information.
@Zadagu There are two files named "rnn_train.py": one is in the src directory and the other is in the training directory. Which one should be replaced with the Keras code? Can you help me with this?
> RNNoise is very lightweight in execution
It's not. Max-complexity Opus encoding is 2-3 times faster on ARM, and I've seen the core load get dangerously close to 100% on low-end Android phones because of it, not to mention the battery impact. I'd love to see it optimized; at this point I'm even considering porting the existing trained network to some highly efficient C/C++ NN framework to see if I can make it run faster.
> RNNoise is very lightweight in execution
I should make this clearer: I meant that the RNNoise algorithm is lightweight in computational complexity during execution compared to other NN-based denoising algorithms.