OSError: [Errno 22] Invalid argument
When I run the training loop, I get the following error:
```
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __iter__(self)
    499 
    500     def __iter__(self):
--> 501         return _DataLoaderIter(self)
    502 
    503     def __len__(self):

~\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py in __init__(self, loader)
    287             for w in self.workers:
    288                 w.daemon = True  # ensure that the worker exits on process exit
--> 289                 w.start()
    290 
    291             _update_worker_pids(id(self), tuple(w.pid for w in self.workers))

~\Anaconda3\lib\multiprocessing\process.py in start(self)
    103                'daemonic processes are not allowed to have children'
    104         _cleanup()
--> 105         self._popen = self._Popen(self)
    106         self._sentinel = self._popen.sentinel
    107         # Avoid a refcycle if the target function holds an indirect

~\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    221     @staticmethod
    222     def _Popen(process_obj):
--> 223         return _default_context.get_context().Process._Popen(process_obj)
    224 
    225 class DefaultContext(BaseContext):

~\Anaconda3\lib\multiprocessing\context.py in _Popen(process_obj)
    320         def _Popen(process_obj):
    321             from .popen_spawn_win32 import Popen
--> 322             return Popen(process_obj)
    323 
    324     class SpawnContext(BaseContext):

~\Anaconda3\lib\multiprocessing\popen_spawn_win32.py in __init__(self, process_obj)
     63         try:
     64             reduction.dump(prep_data, to_child)
---> 65             reduction.dump(process_obj, to_child)
     66         finally:
     67             set_spawning_popen(None)

~\Anaconda3\lib\multiprocessing\reduction.py in dump(obj, file, protocol)
     58 def dump(obj, file, protocol=None):
     59     '''Replacement for pickle.dump() using ForkingPickler.'''
---> 60     ForkingPickler(file, protocol).dump(obj)
     61 
     62 #

OSError: [Errno 22] Invalid argument
```
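From the traceback, the failure happens before any training code runs: on Windows, the `DataLoader` spawns its worker processes and pickles the dataset over to them (`reduction.dump()`), and that pickling step is what raises the `OSError`. For context, a minimal setup that goes through the same code path looks roughly like this (illustrative names and values, not my exact training script):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# The __main__ guard is required on Windows, where workers are
# started with the spawn method.
if __name__ == "__main__":
    dataset = TensorDataset(
        torch.randn(1000, 3, 64, 64),
        torch.zeros(1000, dtype=torch.long),
    )
    loader = DataLoader(
        dataset,
        batch_size=32,
        shuffle=True,
        num_workers=4,  # with num_workers > 0, each worker is spawned (w.start())
                        # and the dataset is pickled to it; num_workers=0 skips
                        # this path entirely
    )
    for images, labels in loader:  # __iter__ is what builds _DataLoaderIter
        pass
```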
I assumed it could be because of the large size of the pickle file, so I changed the loading code to the chunked approach described in this link (sketched below): https://www.programmersought.com/article/3832726678/
However, I am still getting the same error.
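Roughly, the chunked loading I switched to looks like this (my paraphrase of that approach; the helper name and chunk size are mine, and the exact code in the article may differ):

```python
import os
import pickle

def load_big_pickle(path, chunk_size=2**31 - 1):
    """Read the pickle file in chunks smaller than 2 GiB, since a single
    read() of a very large buffer can itself raise OSError on some
    platforms, then unpickle from the assembled bytes."""
    data = bytearray()
    file_size = os.path.getsize(path)
    with open(path, "rb") as f:
        while len(data) < file_size:
            chunk = f.read(chunk_size)
            if not chunk:  # guard against premature EOF
                break
            data += chunk
    return pickle.loads(bytes(data))
```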
Has anyone solved this error?