
piper_train run failed on RPI5

Open · ciaotesla opened this issue 1 year ago · 0 comments

I was able to launch the command, but it fails with the errors below:

```
python3 -m piper_train \
    --dataset-dir /home/pi5/tts/piper-recording-studio/output-train \
    --accelerator 'cpu' \
    --batch-size 16 \
    --validation-split 0.0 \
    --num-test-examples 0 \
    --max_epochs 10000 \
    --resume_from_checkpoint /home/pi5/tts/piper-recording-studio/output-train/epoch=2218-step=838782.ckpt \
    --checkpoint-epochs 1 \
    --precision bf16 \
    --quality high
```

Output:

```
DEBUG:piper_train:Namespace(dataset_dir='/home/pi5/tts/piper-recording-studio/output-train', checkpoint_epochs=1, quality='high', resume_from_single_speaker_checkpoint=None, logger=True, enable_checkpointing=True, default_root_dir=None, gradient_clip_val=None, gradient_clip_algorithm=None, num_nodes=1, num_processes=None, devices=None, gpus=None, auto_select_gpus=False, tpu_cores=None, ipus=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=None, max_epochs=10000, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, val_check_interval=None, log_every_n_steps=50, accelerator='cpu', strategy=None, sync_batchnorm=False, precision='bf16', enable_model_summary=True, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint='/home/pi5/tts/piper-recording-studio/output-train/epoch=2218-step=838782.ckpt', profiler=None, benchmark=None, deterministic=None, reload_dataloaders_every_n_epochs=0, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, plugins=None, amp_backend='native', amp_level=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', batch_size=16, validation_split=0.0, num_test_examples=0, max_phoneme_ids=None, hidden_channels=192, inter_channels=192, filter_channels=768, n_layers=6, n_heads=2, seed=1234)
Using bfloat16 Automatic Mixed Precision (AMP)
/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:52: LightningDeprecationWarning: Setting `Trainer(resume_from_checkpoint=)` is deprecated in v1.5 and will be removed in v1.7. Please pass `Trainer.fit(ckpt_path=)` directly instead.
  rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
DEBUG:piper_train:Checkpoints will be saved every 1 epoch(s)
/home/pi5/tts/.venv/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.
  warnings.warn("torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.")
DEBUG:vits.dataset:Loading dataset: /home/pi5/tts/piper-recording-studio/output-train/dataset.jsonl
/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py:731: LightningDeprecationWarning: `trainer.resume_from_checkpoint` is deprecated in v1.5 and will be removed in v2.0. Specify the fit checkpoint path with `trainer.fit(ckpt_path=)` instead.
  ckpt_path = ckpt_path or self.resume_from_checkpoint
Missing logger folder: /home/pi5/tts/piper-recording-studio/output-train/lightning_logs
Restoring states from the checkpoint path at /home/pi5/tts/piper-recording-studio/output-train/epoch=2218-step=838782.ckpt
DEBUG:fsspec.local:open file: /home/pi5/tts/piper-recording-studio/output-train/epoch=2218-step=838782.ckpt
/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py:1659: UserWarning: Be aware that when using `ckpt_path`, callbacks used to create the checkpoint need to be provided during `Trainer` instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': None, 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None}"].
  rank_zero_warn(
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/pi5/tts/piper/src/python/piper_train/__main__.py", line 147, in <module>
    main()
  File "/home/pi5/tts/piper/src/python/piper_train/__main__.py", line 124, in main
    trainer.fit(model)
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 1147, in _run
    self.strategy.setup(self)
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/strategies/single_device.py", line 74, in setup
    super().setup(trainer)
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/strategies/strategy.py", line 153, in setup
    self.setup_optimizers(trainer)
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/strategies/strategy.py", line 141, in setup_optimizers
    self.optimizers, self.lr_scheduler_configs, self.optimizer_frequencies = _init_optimizers_and_lr_schedulers(
                                                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/core/optimizer.py", line 194, in _init_optimizers_and_lr_schedulers
    _validate_scheduler_api(lr_scheduler_configs, model)
  File "/home/pi5/tts/.venv/lib/python3.11/site-packages/pytorch_lightning/core/optimizer.py", line 351, in _validate_scheduler_api
    raise MisconfigurationException(
pytorch_lightning.utilities.exceptions.MisconfigurationException: The provided lr scheduler `ExponentialLR` doesn't follow PyTorch's LRScheduler API. You should override the `LightningModule.lr_scheduler_step` hook with your own logic if you are using a custom LR scheduler.
```
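For context, the exception message points at Lightning's `lr_scheduler_step` hook as the escape hatch. Below is a minimal sketch of what that override looks like, assuming a pytorch_lightning 1.x-style `LightningModule` (the version this log was produced with). It is illustrative only, not piper_train's actual module; `SketchModule` and the `gamma` value are made up.

```python
# Minimal sketch, assuming pytorch_lightning 1.x; not piper_train's actual
# code. Lightning skips the scheduler-API check in _validate_scheduler_api
# when lr_scheduler_step is overridden on the module.
import pytorch_lightning as pl
import torch


class SketchModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(1, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        optimizer = torch.optim.AdamW(self.parameters(), lr=2e-4)
        # ExponentialLR is the scheduler named in the traceback above;
        # gamma here is an arbitrary illustrative value.
        scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.999)
        return [optimizer], [scheduler]

    # pytorch_lightning 1.x hook signature (2.x drops optimizer_idx).
    def lr_scheduler_step(self, scheduler, optimizer_idx, metric):
        # ExponentialLR only needs a plain step(); metric is unused here.
        scheduler.step()
```

That said, `ExponentialLR` is a stock PyTorch scheduler, so the check failing at all may point to a version mismatch between torch and pytorch_lightning rather than anything in the module itself; it may be worth checking which versions are installed before patching code.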

ciaotesla · Feb 16 '24