Daniel Stoller

36 comments by Daniel Stoller

A couple of notes on this: 1. CPU is much slower than GPU, and this is to be expected, since the required operations run much faster on the GPU. So...

The prediction for an input song is actually made here: https://github.com/f90/Wave-U-Net/blob/master/Evaluate.py#L109. What could be changed without a lot of effort would be the batch size, from...

> So have a `batch_size` of 16 already. I have tried to change `num_workers` to 12, but the processing time it's the same (CPU):

This is expected. ``num_workers`` is just...
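To illustrate the distinction with a toy sketch (this is not the repository's actual pipeline, and `prepare_input`/`forward_pass` are made-up stand-ins): worker processes only parallelise the preparation of inputs, while the forward passes still run one after another, so the prediction time is dominated by the model and barely changes with the worker count.

```python
import time
from multiprocessing import Pool

def prepare_input(i):
    # Stands in for loading/slicing one audio segment (cheap).
    time.sleep(0.01)
    return i

def forward_pass(x):
    # Stands in for the network's forward pass (expensive, compute-bound).
    time.sleep(0.1)
    return x

if __name__ == "__main__":
    # Analogous to num_workers=12: only prepare_input is parallelised.
    with Pool(processes=12) as pool:
        segments = pool.map(prepare_input, range(32))
    # The forward passes still run sequentially, so total prediction
    # time stays roughly the same no matter how many workers are used.
    for seg in segments:
        forward_pass(seg)
```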

There would also be the issue, when implementing support for an arbitrary ``batch_size`` for prediction, that the best value is the largest one that still does not lead your particular GPU/RAM...
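A rough way to find that value, as a sketch only: double the batch size until TensorFlow throws an out-of-memory error and keep the largest size that still worked. Here ``run_batch`` is a hypothetical callable that runs one forward pass at the given batch size, not something that exists in this repository.

```python
import tensorflow as tf

def find_max_batch_size(run_batch, start=1, limit=256):
    """Double the batch size until the GPU runs out of memory,
    and return the largest size that still worked."""
    best = None
    size = start
    while size <= limit:
        try:
            run_batch(size)  # one forward pass with this batch size
            best = size
            size *= 2
        except tf.errors.ResourceExhaustedError:
            break
    return best
```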

> @f90 ok just realized that this is hardcoded here
>
> ```python
> # Batch size of 1
> sep_input_shape[0] = 1
> sep_output_shape[0] = 1
>
> mix_context,...
> ```

OK, so I looked into this a bit more: I implemented a batched variant of prediction and compared running times for a 3-minute input piece. Results: GPU (1x GTX1080)...

I am curious why you expect any improvements with the batched version. But if you want to experiment with it, replace the ``predict_track`` function with this version of it in...
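Since the snippet referenced above is cut off here, the following is only a rough sketch of the idea behind a batched prediction loop; ``predict_batched``, ``run_model``, and the segment handling are illustrative stand-ins, not the actual ``predict_track`` code from the repository.

```python
import numpy as np

def predict_batched(segments, run_model, batch_size=16):
    # segments: list of equally sized input windows, each [context_len, channels]
    # run_model: runs one forward pass on a batch [B, context_len, channels]
    #            and returns source estimates of shape [B, out_len, channels]
    outputs = []
    for start in range(0, len(segments), batch_size):
        batch = np.stack(segments[start:start + batch_size], axis=0)
        outputs.append(run_model(batch))
    # Concatenate the per-batch outputs back into one array of output
    # segments, which would then be reassembled into the full track as before.
    return np.concatenate(outputs, axis=0)
```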

Going to close this issue soon if I don't get any reports on the above code snippet bringing much benefit in terms of prediction speed...

Multi-GPU is definitely an interesting option. I would like to establish this repository as a "go-to" resource for people learning about deep learning for source separation, so I would like...

Hey, this looks like a typical error that occurs when the CUDA libraries are not properly included in your environment. Please refer to the CUDA installation manual for how to set up CUDA...
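As a quick sanity check (assuming the TensorFlow 1.x setup this repository uses), you can ask TensorFlow directly whether it sees a GPU; if it does not, the CUDA/cuDNN libraries are most likely missing from your library path.

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Should print True once CUDA and cuDNN are installed and visible.
print(tf.test.is_gpu_available())

# Lists every device TensorFlow can see; a working setup shows a GPU entry.
print(device_lib.list_local_devices())
```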