
ValueError: n must be at least one

ignacio82 opened this issue Dec 04 '23 · 14 comments

I'm trying to follow this tutorial and I'm getting an error when trying to do the preprocessing:

(.venv) ignacio@xps:~/piper/src/python$ python3 -m piper_train.preprocess --input-dir ~/test-ignacio/ --output-dir ~/out-train --language en-US --sample-rate 22050 --dataset-format ljspeech --single-speaker
INFO:preprocess:Single speaker dataset
INFO:preprocess:Wrote dataset config
INFO:preprocess:Processing 16 utterance(s) with 12 worker(s)
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 502, in <module>
    main()
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 225, in main
    for utt_batch in batched(
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 491, in batched
    raise ValueError("n must be at least one")
ValueError: n must be at least one

I'm running Ubuntu, in case it's relevant. Any ideas what I'm doing wrong and how to fix it?

ignacio82 commented Dec 04 '23

In your ~/test-ignacio/ directory, do you have a metadata.csv file and a wav directory? Did you try removing the --single-speaker option?
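
For reference (an illustration, not taken from the tutorial), an ljspeech-format dataset directory is usually laid out roughly like this:

test-ignacio/
    metadata.csv
    wav/
        utt_0001.wav
        utt_0002.wav

and metadata.csv has one pipe-separated line per wav file, for example (ids and text are made up here):

utt_0001|This is the first recorded sentence.
utt_0002|This is the second recorded sentence.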

aaronnewsome commented Dec 04 '23

Yes, no luck.

$ ls ~/test-ignacio/
metadata.csv  wav

(.venv) ignacio@xps:~/piper/src/python$ python3 -m piper_train.preprocess --input-dir ~/test-ignacio/ --output-dir ~/out-train --language en-US --sample-rate 22050 --dataset-format ljspeech
INFO:preprocess:Single speaker dataset
INFO:preprocess:Wrote dataset config
INFO:preprocess:Processing 16 utterance(s) with 12 worker(s)
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 502, in <module>
    main()
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 225, in main
    for utt_batch in batched(
  File "/home/ignacio/piper/src/python/piper_train/preprocess.py", line 491, in batched
    raise ValueError("n must be at least one")
ValueError: n must be at least one


ignacio82 commented Dec 04 '23

The first line of your error output says:

A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.2)

So I'm curious what version of SciPy you have. Did you set up the virtual environment using the requirements.txt in the piper/src/python directory?

In my venv, here's what I have for scipy, numpy:

pip list | grep "scipy|numpy"
numpy    1.26.2
scipy    1.11.4
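
If it helps narrow things down, a quick sanity check from inside the venv (plain Python, nothing piper-specific) is:

# Print the NumPy/SciPy versions that this virtual environment actually imports.
import numpy
import scipy

print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)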

aaronnewsome commented Dec 04 '23

You need audio files totaling more than 5 minutes at a minimum; then it should allow you to preprocess.

FemBoxbrawl commented Dec 05 '23

> The first line of your error output says:
>
> A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.2)
>
> So I'm curious what version of SciPy you have. Did you set up the virtual environment using the requirements.txt in the piper/src/python directory?
>
> In my venv, here's what I have for scipy, numpy:
>
> pip list | grep "scipy|numpy"
> numpy    1.26.2
> scipy    1.11.4

Kinda off topic, but could you help me figure this out:

DEBUG:fsspec.local:open file: /home/user/piper/epoch=2164-step=1355540.ckpt
/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1659: UserWarning: Be aware that when using ckpt_path, callbacks used to create the checkpoint need to be provided during Trainer instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': None, 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None}"].
  rank_zero_warn(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
DEBUG:fsspec.local:open file: /home/user/piper/my-training/lightning_logs/version_2/hparams.yaml
Restored all states from the checkpoint file at /home/user/piper/epoch=2164-step=1355540.ckpt
/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:153: UserWarning: Total length of DataLoader across ranks is zero. Please make sure this was your intention.
  rank_zero_warn(
/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument (try 16 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1892: PossibleUserWarning: The number of training batches (5) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
  rank_zero_warn(
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/user/piper/src/python/piper_train/__main__.py", line 147, in <module>
    main()
  File "/home/user/piper/src/python/piper_train/__main__.py", line 124, in main
    trainer.fit(model)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
    self._call_and_handle_interrupt(
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
    results = self._run_stage()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
    return self._run_train()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
    self.fit_loop.run()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
    self.advance(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 153, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 100, in step
    loss = closure()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 138, in _wrap_closure
    closure_result = closure()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 132, in closure
    step_output = self._step_fn()
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 407, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 358, in training_step
    return self.model.training_step(*args, **kwargs)
  File "/home/user/piper/src/python/piper_train/vits/lightning.py", line 191, in training_step
    return self.training_step_g(batch)
  File "/home/user/piper/src/python/piper_train/vits/lightning.py", line 214, in training_step_g
    ) = self.model_g(x, x_lengths, spec, spec_lengths, speaker_ids)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/piper/src/python/piper_train/vits/models.py", line 625, in forward
    z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/piper/src/python/piper_train/vits/models.py", line 292, in forward
    x = self.enc(x, x_mask, g=g)
  File "/home/user/piper/src/python/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/user/piper/src/python/piper_train/vits/modules.py", line 199, in forward
    acts = fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: nvrtc: error: invalid value for --gpu-architecture (-arch)

nvrtc compilation failed:

#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)

template <typename T>
__device__ T maximum(T a, T b) {
  return isnan(a) ? a : (a > b ? a : b);
}

template <typename T>
__device__ T minimum(T a, T b) {
  return isnan(a) ? a : (a < b ? a : b);
}

extern "C" __global__
void fused_tanh_sigmoid_mul(float* tv_, float* tv__, float* aten_mul, float* aten_sigmoid, float* aten_tanh) {
{
  float tv___1 = __ldg(tv_ + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) % 213696ll + 2ll * ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 213696ll) * 213696ll));
  aten_tanh[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = tanhf(tv___1);
  float tv__1 = __ldg(tv__ + ((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) % 213696ll + 2ll * ((((long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)) / 213696ll) * 213696ll));
  aten_sigmoid[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = 1.f / (1.f + (expf(0.f - tv__1)));
  aten_mul[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] = (tanhf(tv___1)) * (1.f / (1.f + (expf(0.f - tv__1))));
}
}

FemBoxbrawl commented Dec 05 '23

Old issue, but for anyone else having trouble with this: try lowering the maximum number of workers with --max-workers. That worked, at least for me.

Jarauvi commented Apr 20 '24

> Old issue, but for anyone else having trouble with this: try lowering the maximum number of workers with --max-workers.
>
> That worked, at least for me.

Could you please give an example? I'm kind of lost as to what you mean.

LJtrix commented Apr 20 '24

> Old issue, but for anyone else having trouble with this: try lowering the maximum number of workers with --max-workers. That worked, at least for me.
>
> Could you please give an example? I'm kind of lost as to what you mean.

python3 -m piper_train.preprocess --language fi --input-dir ~/piper/dataset-pp --output-dir ~/piper/training-pp --dataset-format ljspeech --single-speaker --sample-rate 22050 --max-workers 6

Jarauvi commented Apr 20 '24

> Old issue, but for anyone else having trouble with this: try lowering the maximum number of workers with --max-workers.
>
> That worked, at least for me.
>
> Could you please give an example? I'm kind of lost as to what you mean.
>
> python3 -m piper_train.preprocess --language fi --input-dir ~/piper/dataset-pp --output-dir ~/piper/training-pp --dataset-format ljspeech --single-speaker --sample-rate 22050 --max-workers 6

Thanks. How many minutes of audio do you have? Is it less than 5 mins?

LJtrix commented Apr 20 '24

Yes, it seems that I have about 3 minutes of audio in my simple test dataset. I am only going to fine-tune an existing model.

Jarauvi commented Apr 20 '24

To anyone looking: I solved my issue. It stems from how the batch size is calculated before being handed to the batched function.

The calculation is batch_size = int(num_utterances / (args.max_workers * 2))

So the minimum seems to be 2 utterances, i.e. 2 wav files. In my case, I had to add another wav file, update metadata.csv, and add --max-workers 1 to the command, and that fixed it.
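
For anyone curious why this happens, here is a minimal sketch of the logic described above; the batched helper follows the usual itertools recipe (which raises exactly this error), and the variable names are illustrative rather than copied from preprocess.py:

# Minimal sketch of the failure mode described above (names are illustrative).
from itertools import islice

def batched(iterable, n):
    # itertools-recipe style batching; a non-positive batch size is rejected.
    if n < 1:
        raise ValueError("n must be at least one")
    it = iter(iterable)
    while batch := list(islice(it, n)):
        yield batch

num_utterances = 16
max_workers = 12
batch_size = int(num_utterances / (max_workers * 2))  # int(16 / 24) == 0

# Iterating with batch_size == 0 reproduces the error from the traceback above;
# with --max-workers 1 the same formula gives int(16 / 2) == 8 and batching works.
list(batched(range(num_utterances), batch_size))  # ValueError: n must be at least one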

Dabsterr commented May 09 '24

Thanks, this helped a lot!

Terrandel commented Jul 27 '24