
AttributeError: Can't pickle local object 'hf_dataset.<locals>.pre_process'

Open ghost opened this issue 2 years ago • 2 comments

I get these errors while trying to train. I'm working on Windows 11. Does anyone have any idea what I can do to fix it?

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Traceback (most recent call last):
  File "main.py", line 903, in <module>
    trainer.fit(model, data)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 553, in fit
    self._run(model)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 918, in _run
    self._dispatch()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\accelerators\accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 996, in run_stage
    return self._run_train()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\loops\base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 118, in advance
    _, (batch, is_last) = next(dataloader_iter)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\profiler\base.py", line 104, in profile_iterable
    value = next(iterator)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 625, in prefetch_iterator
    last = next(it)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 546, in __next__
    return self.request_next_batch(self.loader_iters)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 532, in loader_iters
    self._loader_iters = self.create_loader_iters(self.loaders)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 590, in create_loader_iters
    return apply_to_collection(loaders, Iterable, iter, wrong_dtype=(Sequence, Mapping))
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 96, in apply_to_collection
    return function(data, *args, **kwargs)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\torch\utils\data\dataloader.py", line 444, in __iter__
    return self._get_iterator()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\torch\utils\data\dataloader.py", line 390, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\site-packages\torch\utils\data\dataloader.py", line 1077, in __init__
    w.start()
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\aiaia\miniconda3\envs\ldm\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'hf_dataset.<locals>.pre_process'
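For context on what's going wrong: on Windows, Python's multiprocessing uses the `spawn` start method, so each DataLoader worker process is created by pickling the dataset object, including any preprocessing callables attached to it. Pickle serializes a function by its qualified name, and a function defined inside another function (here, `pre_process` inside `hf_dataset`) has a `<locals>` qualified name that can't be looked up at import time, so pickling fails. A minimal sketch reproducing the same error (hypothetical names, not the repo's actual code):

```python
import pickle

def make_pre_process():
    # pre_process is a "local object": its qualified name is
    # 'make_pre_process.<locals>.pre_process', which pickle cannot
    # resolve from the module's top level, so pickling raises.
    def pre_process(x):
        return x * 2
    return pre_process

fn = make_pre_process()
try:
    pickle.dumps(fn)
except AttributeError as e:
    print(e)  # Can't pickle local object 'make_pre_process.<locals>.pre_process'
```

Moving `pre_process` (or an equivalent top-level function / callable class) to module scope makes it picklable, which is why the error only appears when worker processes need to be spawned.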

ghost avatar Oct 25 '22 15:10 ghost

Hello, I've run into this problem too. Have you solved it yet?

xuzekai1997 avatar Mar 22 '23 03:03 xuzekai1997

> Hello, I've run into this problem too. Have you solved it yet?

On Windows, set num_workers to 0 in the YAML config file. With no worker processes, the dataset is never pickled, so the error goes away (at the cost of single-process data loading).
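In latent-diffusion-style configs the worker count typically lives under `data.params`; a sketch of where the setting goes (the surrounding keys here are illustrative and may differ in your config):

```yaml
data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 4
    num_workers: 0   # 0 disables worker processes, avoiding the pickling step on Windows
```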

chiulun avatar Apr 03 '23 01:04 chiulun