xiabingquan

12 comments by xiabingquan

> Did you tune the learning rate etc.? Also, I suggest you (virtually) increase the batch size through `accum_grad` https://github.com/espnet/espnet/blob/master/egs2/librispeech/asr1/conf/tuning/train_lm_transformer2.yaml#L17

I didn't change anything except `batch_bins`. Okay,...

> Did you tune the learning rate etc.? Also, I suggest you (virtually) increase the batch size through `accum_grad` https://github.com/espnet/espnet/blob/master/egs2/librispeech/asr1/conf/tuning/train_lm_transformer2.yaml#L17

I turned off `accum_grad` and changed the learning rate...

> > I turned off the `accum_grad`
>
> `accum_grad` is to increase the batch size practically, to make your trial similar to the large-GPU-memory trial.

You said...
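For reference, `accum_grad` here is ESPnet's gradient-accumulation option. A minimal PyTorch sketch of the underlying idea (the model, loss, and data below are placeholders, not ESPnet internals):

```
import torch

# Gradient accumulation: update weights once every `accum_grad` mini-batches,
# which mimics a batch size `accum_grad` times larger.
accum_grad = 4

model = torch.nn.Linear(10, 1)                              # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()

optimizer.zero_grad()
for step in range(100):
    x, y = torch.randn(8, 10), torch.randn(8, 1)            # dummy mini-batch
    loss = loss_fn(model(x), y) / accum_grad                # average over the accumulated steps
    loss.backward()                                         # gradients add up in .grad
    if (step + 1) % accum_grad == 0:
        optimizer.step()                                    # one update per accum_grad batches
        optimizer.zero_grad()
```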

I have another question. Does the learning rate scale with the number of devices (a quite common operation)? For example, if the learning rate in the configuration file is 1e-4,...
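The common operation referred to is the linear scaling rule. A minimal sketch of how it is often applied, assuming PyTorch distributed training; the optimizer and model here are illustrative, not ESPnet's actual code:

```
import torch
import torch.distributed as dist

base_lr = 1e-4  # learning rate as written in the configuration file

# Linear scaling rule: multiply the base LR by the number of devices,
# since the effective batch size grows with the world size.
world_size = dist.get_world_size() if dist.is_initialized() else 1
scaled_lr = base_lr * world_size  # e.g. 8 GPUs -> 8e-4

model = torch.nn.Linear(10, 10)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=scaled_lr)
```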

Exactly, otherwise `dump_km_label.py` throws an error. Thanks a lot.

This could be solved by changing

```
dataloader = DataLoader(
    dataset=VallinaDataset(),
    batch_size=None,
    shuffle=False,
    batch_sampler=None,
    sampler=None,
    drop_last=False,
    collate_fn=None,
    pin_memory=True,
    num_workers=2,
)
```

to

```
dset = VallinaDataset()
dataloader = DataLoader(
    dataset=dset,
    ...
```
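A self-contained version of the fixed pattern might look like the following sketch; `VallinaDataset` is stubbed with a trivial map-style dataset since the original class isn't shown:

```
import torch
from torch.utils.data import DataLoader, Dataset

class VallinaDataset(Dataset):
    """Trivial stand-in for the original dataset (not the actual class)."""
    def __len__(self):
        return 16

    def __getitem__(self, idx):
        return torch.randn(4), torch.tensor(idx)

if __name__ == "__main__":
    # Bind the dataset to a name first, then hand that reference to the DataLoader.
    dset = VallinaDataset()
    dataloader = DataLoader(
        dataset=dset,
        batch_size=None,   # disable automatic batching; yield one sample at a time
        shuffle=False,
        pin_memory=True,
        num_workers=2,
    )
    for features, idx in dataloader:
        pass  # consume the loader
```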

Thanks for your reply @pacman100. Unfortunately, it didn't resolve the issue :( Another error occurred. The full traceback is as follows: ``` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮...

ๅŒๆ„๏ผŒๅธŒๆœ›ๅฏไปฅๆœ‰ready-to-run็š„ๆจกๅž‹

> Just hit this, thank you for the information, you helped me figure out how to compile. Much appreciated.

You're welcome. This bug did bother me for a while. Glad...