
Training

Open roy-bankesh opened this issue 5 years ago • 10 comments

Hi, I have some general questions regarding training the model:

  1. Can I train the model on a GPU? If yes, what is the procedure?

  2. Can I fine-tune your pre-trained model with a new dataset, or do I have to retrain the model on all the data again?

  3. Can this model run on Nvidia Jetson embedded devices? What would the performance be?

roy-bankesh avatar Feb 17 '20 15:02 roy-bankesh

  1. Yes, it can. See the TensorFlow GPU guide. You probably want to upgrade to TensorFlow 2.1.0 to get better performance out of the GPU (since TensorFlow 2.0.0, the optimized Nvidia implementation is used automatically).

  2. Yes, you can. Use the tool provided in #79 to create the Keras model from the C sources, then modify rnn_train.py to load the trained model.

  3. Probably not. Training needs a lot of RAM, and because of the deep backpropagation through the long sequences, CPUs tend to outperform small graphics cards on this task.
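Point 2 might look roughly like the following inside rnn_train.py. This is only a sketch: `pretrained.hdf5` and `load_pretrained` are hypothetical names, not files or functions from the repository, and the exact loading call depends on how the #79 tool saves the model.

```python
# Hedged sketch of resuming training from a converted model file.
# "pretrained.hdf5" is a hypothetical file name, not one shipped with RNNoise.
import importlib.util
import os

def load_pretrained(path):
    """Return a Keras model loaded from `path`, or None when either
    TensorFlow is unavailable or the file does not exist."""
    if importlib.util.find_spec("tensorflow") is None or not os.path.exists(path):
        return None
    from tensorflow import keras
    # Alternatively, build the model exactly as rnn_train.py does and
    # call model.load_weights(path) if only weights were exported.
    return keras.models.load_model(path)

# In rnn_train.py one would then call model.fit(...) on the combined
# (old + new) training data instead of starting from random weights.
model = load_pretrained("pretrained.hdf5")
```

The point is that fine-tuning reuses the pre-trained weights as the starting point, so the old noise types stay in the network as long as the new training data doesn't completely overwrite them.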

Zadagu avatar Feb 17 '20 16:02 Zadagu

@Zadagu This is the specification of the Nvidia Jetson Nano:

  * GPU: NVIDIA Maxwell architecture with 128 NVIDIA CUDA cores, 0.5 TFLOPS (FP16)
  * CPU: quad-core ARM Cortex-A57 MPCore processor
  * Memory: 4 GB 64-bit LPDDR4, 1600 MHz, 25.6 GB/s

How much RAM is needed to run inference with the model?

roy-bankesh avatar Feb 18 '20 07:02 roy-bankesh

@Zadagu I want to further train the model, which is already trained on various noises: I want to add some datasets while also keeping the previous ones. Is that possible?

roy-bankesh avatar Feb 18 '20 07:02 roy-bankesh

After some modifications to the training script, my PC consumes about 20 GB of RAM during training. If you use the script as it is, you will need around 35 GB.
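For a rough sense of where that memory goes: the training data is held in RAM as large float32 arrays. A back-of-the-envelope estimate is sketched below; the sequence count and window length are illustrative guesses, and only the 42 input features and 22 gain outputs come from the RNNoise paper.

```python
def training_ram_gb(n_sequences, window_len, n_in, n_out, bytes_per_value=4):
    """Rough float32 footprint of the in-memory training arrays, in GB."""
    values = n_sequences * window_len * (n_in + n_out)
    return values * bytes_per_value / 1e9

# Illustrative numbers only: the sequence count and window length are guesses.
print(round(training_ram_gb(n_sequences=100_000, window_len=1000,
                            n_in=42, n_out=22), 1))  # -> 25.6
```

Shrinking the window length or streaming batches from disk instead of preloading everything is the usual way to bring such a footprint down.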

Zadagu avatar Feb 18 '20 14:02 Zadagu

Training will be done in another environment. I want to use the library in my application running on Nvidia Jetson devices.

roy-bankesh avatar Feb 19 '20 05:02 roy-bankesh

RNNoise is very lightweight in execution. But you should note that the RNNoise binary doesn't use GPU acceleration at all. See the RNNoise paper for a complexity analysis: https://jmvalin.ca/papers/rnnoise_mmsp2018.pdf
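To put "lightweight" into rough numbers, one can estimate the per-second multiply-accumulate cost of the recurrent layers from the layer widths in the paper (24, 48 and 96 GRU units). This is only a sketch: it ignores the paper's skip connections and the small dense layers, and assumes each GRU sees an input the size of its own state, so the result is an order-of-magnitude figure, not the paper's exact count.

```python
def gru_macs_per_frame(n_in, n_hidden):
    """Multiply-accumulates per frame for one GRU layer:
    three gates, each with an input matrix and a recurrent matrix."""
    return 3 * (n_in * n_hidden + n_hidden * n_hidden)

# Rough sketch: GRU widths of 24, 48 and 96 units as in the RNNoise paper,
# skip connections and dense layers ignored.
per_frame = (gru_macs_per_frame(24, 24)
             + gru_macs_per_frame(48, 48)
             + gru_macs_per_frame(96, 96))
per_second = per_frame * 100  # 10 ms frames -> 100 frames per second
print(f"~{per_second / 1e6:.1f} M multiply-accumulates per second")
```

A few million multiply-accumulates per second is tiny next to typical NN-based denoisers, which is why the reference implementation runs comfortably on a single CPU core.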

Zadagu avatar Feb 19 '20 10:02 Zadagu

@Zadagu Thanks for the information.

roy-bankesh avatar Feb 19 '20 10:02 roy-bankesh

@Zadagu There are two files with the same name, "rnn_train.py": one is in the src directory, the other in the training directory. Which one should be replaced with the Keras code? Can you help me with this?

roy-bankesh avatar Feb 19 '20 12:02 roy-bankesh

> RNNoise is very lightweight in execution

It's not. Max-complexity Opus encoding is 2-3 times faster on ARM, and I've seen core load dangerously close to 100% on low-end Android phones because of it, not to mention the battery impact. I'd love to see it optimized; at this point I'm even considering porting the existing trained network to some highly efficient C/C++ NN framework to see if I can make it run faster.

witaly-iwanow avatar Feb 26 '20 06:02 witaly-iwanow

> RNNoise is very lightweight in execution

I should have made this clearer: I meant that the RNNoise algorithm is lightweight in computational complexity during execution compared to other NN-based denoising algorithms.

Zadagu avatar Feb 26 '20 13:02 Zadagu