
robustness issue: use of methods marked for deprecation

xvdp opened this issue 3 years ago

Hi,

  1. Do you have pretrained models? I don't see them linked on GitHub. It would be great to have those.

  2. So, to test your model I'm retraining it, and I noticed a couple of easy fixes that would make the code robust to current libraries, librosa 0.9 and pytorch-lightning 1.4. I understand that you pinned the older librosa 0.8 and pytorch-lightning 1.1.6 in the requirements, but the affected calls were already marked for deprecation, and with my environment already built I didn't want to install older versions. So, for your consideration only: you may want to keep the old code, but it doesn't work for me. I forked the repo, and while I can't say for certain that every process runs correctly, it seems to be training fine.

File: nuwave/utils/wav2pt.py. On librosa 0.9.0, effects.trim() requires keyword arguments for everything after the first positional argument; the minimal change is:

    rosa.effects.trim(y, top_db=15)
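
For context, a minimal sketch of how that call might sit in a wav-to-.pt preprocessing function; this is an illustration only (the surrounding load/save logic and the rosa alias for librosa are assumptions, not the actual wav2pt.py code):

    import librosa as rosa
    import torch

    def wav2pt(wav_path, pt_path, top_db=15):
        # load at the file's native sampling rate
        y, sr = rosa.load(wav_path, sr=None)
        # librosa >= 0.9: everything after the signal must be a keyword argument
        y, _ = rosa.effects.trim(y, top_db=top_db)
        # store the trimmed waveform as a torch tensor
        torch.save(torch.from_numpy(y), pt_path)
        return sr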

File: nuwave/trainer.py. pytorch-lightning has the terrible habit of deprecating and renaming; I think these changes should work in the older version as well, since the affected arguments were already slated for deprecation. From the CHANGELOG:

  - (#5321) Removed deprecated checkpoint argument filepath; use dirpath + filename instead
  - (#6162) Removed deprecated ModelCheckpoint arguments prefix, mode="auto"

    checkpoint_callback = ModelCheckpoint(dirpath=hparams.log.checkpoint_dir,
                                          filename=ckpt_path,
                                          verbose=True,
                                          save_last=True,
                                          save_top_k=3,
                                          monitor='val_loss',
                                          mode='min')
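
As a follow-up, the filename argument in current pytorch-lightning also accepts format strings such as {epoch}, which keeps the saved names compatible with the *_epoch={N}.ckpt glob used for resuming below. A hedged sketch, with the exact pattern chosen for illustration rather than taken from the repo:

    # '{epoch}' is expanded by ModelCheckpoint, e.g. 'nuwave_epoch=3.ckpt'
    checkpoint_callback = ModelCheckpoint(dirpath=hparams.log.checkpoint_dir,
                                          filename=f'{hparams.name}_{{epoch}}',
                                          monitor='val_loss',
                                          mode='min',
                                          save_top_k=3,
                                          save_last=True,
                                          verbose=True)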

The Trainer() class also no longer accepts the old checkpoint_callback usage; per the CHANGELOG, (#9754) deprecates checkpoint_callback in the Trainer constructor in favour of enable_checkpointing. The ModelCheckpoint instance now goes into the callbacks list:

    trainer = Trainer(
        checkpoint_callback=True,
        gpus=hparams.train.gpus,
        # distributed training only when more than one GPU is requested
        accelerator='ddp' if hparams.train.gpus > 1 else None,
        #plugins='ddp_sharded',
        amp_backend='apex',  # NVIDIA apex mixed precision
        amp_level='O2',
        #num_sanity_val_steps = -1,
        check_val_every_n_epoch=2,
        gradient_clip_val=0.5,
        max_epochs=200000,
        logger=tblogger,
        progress_bar_refresh_rate=4,
        callbacks=[
            EMACallback(os.path.join(hparams.log.checkpoint_dir,
                                     f'{hparams.name}_epoch={{epoch}}_EMA')),
            checkpoint_callback,
        ],
        # resume from the latest checkpoint matching the requested epoch,
        # unless no epoch was given or a restart was requested
        resume_from_checkpoint=None
        if args.resume_from is None or args.restart else sorted(
            glob(
                os.path.join(hparams.log.checkpoint_dir,
                             f'*_epoch={args.resume_from}.ckpt')))[-1])
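
For what it's worth, on pytorch-lightning >= 1.5 the checkpoint_callback=True flag above is itself deprecated per #9754; a minimal sketch of just that change (other arguments elided, not tested against this repo):

    trainer = Trainer(
        enable_checkpointing=True,        # replaces checkpoint_callback=True on PL >= 1.5
        callbacks=[checkpoint_callback],  # the ModelCheckpoint instance still goes in callbacks
        gpus=hparams.train.gpus,
        max_epochs=200000,
        logger=tblogger,
    )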

Also from the CHANGELOG: (#11578) deprecated the Callback.on_epoch_end hook in favour of Callback.on_{train/val/test}_epoch_end, so the callback hook is renamed accordingly:

    @rank_zero_only
    def on_train_epoch_end(self, trainer, pl_module):
        self.queue.append(trainer.current_epoch)
        ...
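
For completeness, a self-contained sketch of a callback using the renamed hook; this is a hypothetical stand-in to show the imports and signature, not the actual EMACallback from this repo:

    from collections import deque

    from pytorch_lightning.callbacks import Callback
    from pytorch_lightning.utilities import rank_zero_only

    class EpochLogger(Callback):
        """Hypothetical callback illustrating the renamed hook."""

        def __init__(self, maxlen=5):
            self.queue = deque(maxlen=maxlen)

        @rank_zero_only
        def on_train_epoch_end(self, trainer, pl_module):
            # replaces the deprecated on_epoch_end hook (#11578)
            self.queue.append(trainer.current_epoch)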

xvdp, Feb 17 '22