
Problems with "multiprocessing"

Open · Hemistic opened this issue 2 years ago · 3 comments

Problem: when I run "dcase2022_task4_baseline/train_pretrained.py", I get the following error:

Training: 0it [00:00, ?it/s]
CODECARBON : No CPU tracking mode found. Falling back on CPU constant mode.
CODECARBON : Failed to match CPU TDP constant. Falling back on a global constant.
Epoch 0: 0%| | 0/229 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "E:/DESED_task/recipes/dcase2022_task4_baseline/train_pretrained.py", line 436, in <module>
    single_run(
  File "E:/DESED_task/recipes/dcase2022_task4_baseline/train_pretrained.py", line 352, in single_run
    trainer.fit(desed_training)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 735, in fit
    self._call_and_handle_interrupt(
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 682, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 770, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1193, in _run
    self._dispatch()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1272, in _dispatch
    self.training_type_plugin.start_training(self)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\plugins\training_type\training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1282, in run_stage
    return self._run_train()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1312, in _run_train
    self.fit_loop.run()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\loops\base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\loops\base.py", line 140, in run
    self.on_run_start(*args, **kwargs)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 141, in on_run_start
    self._dataloader_iter = _update_dataloader_iter(data_fetcher, self.batch_idx + 1)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\loops\utilities.py", line 121, in _update_dataloader_iter
    dataloader_iter = enumerate(data_fetcher, batch_idx)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 198, in __iter__
    self._apply_patch()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 133, in _apply_patch
    apply_to_collections(self.loaders, self.loader_iters, (Iterator, DataLoader), _apply_patch_fn)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\utilities\fetching.py", line 181, in loader_iters
    loader_iters = self.dataloader_iter.loader_iters
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 523, in loader_iters
    self._loader_iters = self.create_loader_iters(self.loaders)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\trainer\supporters.py", line 563, in create_loader_iters
    return apply_to_collection(loaders, Iterable, iter, wrong_dtype=(Sequence, Mapping))
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\pytorch_lightning\utilities\apply_func.py", line 92, in apply_to_collection
    return function(data, *args, **kwargs)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\torch\utils\data\dataloader.py", line 367, in __iter__
    return self._get_iterator()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\torch\utils\data\dataloader.py", line 313, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\site-packages\torch\utils\data\dataloader.py", line 926, in __init__
    w.start()
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\Payne\anaconda3\envs\dcase2022_re\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'single_run.<locals>.ASTFeatsExtraction'

Hemistic · May 30 '22 09:05

Did you download the embeddings and set the paths accordingly? Can you check whether the Dataset objects run correctly without any DataLoader?
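For example, a minimal check along these lines (a sketch only; which dataset objects to pass in depends on your setup, the names in the usage comment are assumptions, not verbatim from train_pretrained.py):

    def smoke_test(dataset, n=3):
        # Pull a few items straight from the Dataset (no DataLoader, no workers),
        # so any error in __getitem__ (e.g. in the embedding/feature extraction)
        # shows up without multiprocessing in the way.
        for i in range(min(n, len(dataset))):
            item = dataset[i]
            print(i, type(item))

    # Usage (assumed names): smoke_test(synth_set); smoke_test(weak_set)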

popcornell · May 30 '22 18:05

Any news on this issue?

popcornell · Jul 08 '22 13:07

Sorry, I still haven't solved this problem. I guess it may be caused by inconsistent versions of the operating system or packages. I was originally on Windows 11, where it didn't work, and when I switched to Ubuntu 20 another problem came up. So I gave up and transferred the model in my own way. If possible, I suggest making the README.md more complete and adding some comments where appropriate. Thank you.

Hemistic · Jul 08 '22 14:07