icefall
Is it possible to do reverberation on the fly?
Hi K2 team, I know we can apply reverberation in the data preparation phase. But is it possible to do it on the fly during training, so that the data augmentation varies from epoch to epoch?
Please see https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L170-L187
You need to:

- Pass `--on-the-fly-feats=true` to `train.py` (see https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L114)
- Uncomment https://github.com/k2-fsa/icefall/blob/ed6bc200e37aaea0129ae32095642c096d4ffad5/egs/yesno/ASR/tdnn/asr_datamodule.py#L179 (see the sketch after this list)
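For illustration, here is a minimal sketch of what the enabled code path could look like. It is not a verbatim copy of the linked `asr_datamodule.py`, and the `p=0.5` and `num_mel_bins=23` values are assumptions:

```python
# Hedged sketch of an icefall-style training dataset with on-the-fly feature
# extraction, so that waveform-level transforms such as reverberation are
# re-applied with fresh randomness every epoch.
from lhotse import Fbank, FbankConfig
from lhotse.dataset import (
    K2SpeechRecognitionDataset,
    OnTheFlyFeatures,
    ReverbWithImpulseResponse,
)

# The (previously commented-out) reverberation transform.
transforms = [ReverbWithImpulseResponse(p=0.5)]

train_dataset = K2SpeechRecognitionDataset(
    cut_transforms=transforms,
    # OnTheFlyFeatures computes features from audio inside the dataloader,
    # which is what --on-the-fly-feats=true switches on.
    input_strategy=OnTheFlyFeatures(Fbank(FbankConfig(num_mel_bins=23))),
)
```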
Yes, it's doable; let me check the doc and get back to you later.

Best,
Jin
Ah, I meant reverberation with an impulse response, not speed perturbation. Thank you @JinZr, please share your doc.
I tried to add the RIR transform as the first one in `transforms`, like this:

```python
transforms.append(
    ReverbWithImpulseResponse(p=0.5)
)
```
But got an error:
```
-- Process 3 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/icefall/egs/easy_start/ASR/zipformer/train.py", line 1265, in run
    train_one_epoch(
  File "/icefall/egs/easy_start/ASR/zipformer/train.py", line 941, in train_one_epoch
    for batch_idx, batch in enumerate(train_dl):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 442, in __iter__
    return self._get_iterator()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 388, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1043, in __init__
    w.start()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib/python3.10/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/lib/python3.10/multiprocessing/context.py", line 288, in _Popen
    return Popen(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib/python3.10/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "/usr/lib/python3.10/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle 'module' object
```
Not sure why?
Looks like a bug in Lhotse, will fix. You can probably solve this by setting the env var `LHOTSE_DILL_ENABLED=1` or using the `cuts = cuts.reverb_rir()` API.
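For reference, a minimal sketch of the `CutSet`-level alternative, assuming `cuts` is a Lhotse `CutSet`; the manifest path is hypothetical:

```python
# Hedged sketch: apply RIR reverberation at the CutSet level (during data
# prep) instead of as a dataset transform. The path below is hypothetical.
from lhotse import CutSet

cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")

# You can pass your own RIRs via `rir_recordings=...`; depending on your
# lhotse version, calling it without arguments falls back to simulated
# impulse responses.
cuts = cuts.reverb_rir()
```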
@pzelasko Thanks for your reply. I tried `LHOTSE_DILL_ENABLED=1` and it seems to work, but training is slow: about 10 minutes for 50 batches. I did add RIR and MUSAN noise at the same time, but it still takes too much time. What do you think?
Was it faster without RIR or MUSAN? What’s the number of data loading workers and max duration?
This is indeed a bug in Lhotse. In `PerturbSpeed`, the imported `random` module is stored as a member variable (`self.random = random`), and module objects cannot be pickled. `ReverbWithImpulseResponse` has the same issue. Removing the `self.random = random` assignment and calling the `random` module's functions directly should resolve the pickle errors.
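A minimal self-contained sketch of the failure mode and the fix; the class names are illustrative and do not match the actual Lhotse source:

```python
# Illustrative repro of the pickling bug described above.
import pickle
import random


class BuggyTransform:
    def __init__(self, p: float = 0.5):
        self.p = p
        self.random = random  # stores a module object on the instance


class FixedTransform:
    def __init__(self, p: float = 0.5):
        self.p = p  # no module stored on the instance

    def should_apply(self) -> bool:
        return random.random() <= self.p  # call the module directly


pickle.dumps(FixedTransform())  # fine
pickle.dumps(BuggyTransform())  # TypeError: cannot pickle 'module' object
```

This is the same pickling that `spawn`-based DataLoader workers perform on the dataset and its transforms, which is why the error surfaced in `_MultiProcessingDataLoaderIter`.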
Thanks for debugging this. I committed the relevant fixes in https://github.com/lhotse-speech/lhotse/pull/1355