Conditioned-Source-Separation-LaSAFT
A PyTorch implementation of the paper: "LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation" (ICASSP 2021)
Just a suggestion, useful for those who do not have a local GPU (me, for instance): the current process can be very long depending on the audio duration. Thank you very much.
```
from lasaft.source_separation.conditioned.cunet.models.dcun_tfc_gpocm_lasaft import DCUN_TFC_GPoCM_LaSAFT_Framework

args = {}

# FFT params
args['n_fft'] = 4096
args['hop_length'] = 1024
args['num_frame'] = 128

# SVS Framework
args['spec_type'] = 'complex'
args['spec_est_mode'] = 'mapping'
#...
```
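If the snippet above is run where a GPU is available (e.g. on Colab), a simple device check lets the same code fall back to CPU elsewhere. A minimal sketch, assuming the instantiated framework is a standard `torch.nn.Module` (Lightning modules are), so it can be moved with `.to(device)`:

```
import torch

# Use a GPU when one is available (e.g. on Colab); otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f'Running separation on: {device}')

# Assuming the framework built from `args` above is a torch.nn.Module,
# it can be moved to the chosen device before separation:
# model = model.to(device)
```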
Hi there! Thank you very much for open-sourcing the code, and for such a great paper, awesome results! I was wondering whether you have tried doing any pruning or quantization on the...
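For context, PyTorch ships built-in hooks for both techniques. A toy sketch with a stand-in module (not the LaSAFT framework itself; layer shapes are arbitrary):

```
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in module; the real framework would take its place.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))

# Post-training dynamic quantization of the Linear layers (CPU inference only).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

# Unstructured magnitude pruning: zero out 30% of one layer's weights.
prune.l1_unstructured(model[0], name='weight', amount=0.3)
```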
`separate_track` does not have an explicit way to choose the device. Proposed signatures (a sketch of the second variant follows below):

- [ ] `def separate_track(track, instrument, cuda=False)`
- [ ] `def separate_track(track, instrument, cuda=False, batch_size=1)`
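A minimal sketch of the second variant. Here `model` is passed explicitly just to keep the example self-contained, `model.separate(...)` is an assumed stand-in for the repository's actual per-chunk call, and the chunk length is a guess:

```
import torch

def separate_track(model, track, instrument, cuda=False, batch_size=1):
    device = torch.device('cuda' if cuda and torch.cuda.is_available() else 'cpu')
    model = model.to(device).eval()

    # Assumptions: `track` has shape (channels, samples); chunk length is a guess.
    chunk_len = 44100 * 3
    chunks = list(torch.as_tensor(track, dtype=torch.float32).split(chunk_len, dim=-1))

    outputs = []
    with torch.no_grad():
        for start in range(0, len(chunks), batch_size):
            batch = chunks[start:start + batch_size]
            lengths = [c.shape[-1] for c in batch]
            # Zero-pad to a common length so the chunks stack into one batch.
            padded = torch.stack(
                [torch.nn.functional.pad(c, (0, max(lengths) - c.shape[-1]))
                 for c in batch]).to(device)
            separated = model.separate(padded, instrument).cpu()  # assumed API
            outputs.extend(s[..., :n] for s, n in zip(separated, lengths))
    return torch.cat(outputs, dim=-1)
```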
The current `separate_track` function iterates over small, disjoint pieces of the whole track. An overlapping sliding window, as used in the STFT, might improve the separation quality; a sketch follows.
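A minimal sketch of overlap-add chunking, assuming a hypothetical per-chunk `separate(chunk)` callable that returns a separated chunk of the same length (the chunk and hop sizes are illustrative):

```
import numpy as np

def separate_with_overlap(track, separate, chunk_len=131072, hop=65536):
    out = np.zeros(len(track))
    weight = np.zeros(len(track))
    window = np.hanning(chunk_len)

    for start in range(0, len(track), hop):
        chunk = track[start:start + chunk_len]
        pad = chunk_len - len(chunk)
        if pad:  # zero-pad the final, shorter chunk
            chunk = np.pad(chunk, (0, pad))
        est = separate(chunk)  # hypothetical per-chunk separator
        end = min(start + chunk_len, len(track))
        out[start:end] += (est * window)[:end - start]
        weight[start:end] += window[:end - start]

    # Dividing by the accumulated window weight compensates for the overlap.
    return out / np.maximum(weight, 1e-8)

# Usage sketch: an identity "separator" just to show the call pattern.
mixture = np.random.randn(44100 * 10).astype(np.float32)
recon = separate_with_overlap(mixture, separate=lambda x: x)
```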
Hi, I would like to log experiments locally only, so I changed the logger to `CSVLogger` in `lasaft/trainer.py`:

```
log = args['log']
if log == 'False':
    args['logger']...
```
Note that this is not tested for `evaluator.py`.
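For reference, a minimal sketch of such a local-only switch with PyTorch Lightning's `CSVLogger`; the `args` handling mirrors the snippet above, and the exact keys in `lasaft/trainer.py` may differ:

```
from pytorch_lightning.loggers import CSVLogger

args = {'log': 'True'}  # stand-in for the parsed args in lasaft/trainer.py

if args['log'] == 'False':
    args['logger'] = False  # Lightning accepts False to disable logging
else:
    # Writes metrics.csv and hparams.yaml under logs/lasaft_local/version_X/
    args['logger'] = CSVLogger(save_dir='logs', name='lasaft_local')

# trainer = pl.Trainer(logger=args['logger'], ...)
```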
Thank you so much for this great model! Wonderful job! I have just a little question about the memory required for the separation. The model seems to use a...