
_pickle.UnpicklingError: state is not a dictionary

Open hqxmlm opened this issue 8 months ago • 0 comments

When I run `python -m src.main_run_fi`, I get the following output:

```
Running on server ANY
Running FI experiment on Models.MLP, with K=FI_Horizons.K5
Global seed set to 500
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
Create sweep with ID: njx5clfa
Sweep URL: https://wandb.ai/quantml/LOB-CLASSIFIERS-%28FI-EXPERIMENTS%29/sweeps/njx5clfa
wandb: Agent Starting Run: 82t1jd72 with config:
wandb:  batch_size: 64
wandb:  epochs: 100
wandb:  hidden_mlp: 256
wandb:  lr: 1e-05
wandb:  num_snapshots: 100
wandb:  optimizer_name: Adam
wandb:  p_dropout: 0
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
wandb: WARNING Ignored wandb.init() arg project when running a sweep.
Setting model parameters {'batch_size': 64, 'epochs': 100, 'hidden_mlp': 256, 'lr': 1e-05, 'num_snapshots': 100, 'optimizer_name': 'Adam', 'p_dropout': 0}
dataset type: DatasetType.TRAIN - normalization: NormalizationType.Z_SCORE
dataset type: DatasetType.VALIDATION - normalization: NormalizationType.Z_SCORE
dataset type: DatasetType.TEST - normalization: NormalizationType.Z_SCORE
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/multiprocessing/spawn.py", line 122, in spawn_main
    exitcode = _main(fd, parent_sentinel)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/multiprocessing/spawn.py", line 132, in _main
    self = reduction.pickle.load(from_parent)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_pickle.UnpicklingError: state is not a dictionary
The following error was raised:
Traceback (most recent call last):
  File "/home/huangqixing/LOBCAST/src/utils/utils_training_loop.py", line 132, in __run_training_loop
    core(config, model_params)
  File "/home/huangqixing/LOBCAST/src/utils/utils_training_loop.py", line 118, in core
    trainer.fit(nn, data_module)
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/site-packages/pytorch_lightning/trainer/trainer.py", line 603, in fit
    call._call_and_handle_interrupt(
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 113, in launch
    mp.start_processes(
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
    while not context.join():
              ^^^^^^^^^^^^^^
  File "/home/huangqixing/miniconda3/envs/LOBCAST/lib/python3.12/site-packages/torch/multiprocessing/spawn.py", line 148, in join
    raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 1 terminated with exit code 1
None
wandb: Sweep Agent: Waiting for job.
```
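For context, this is not LOBCAST code, just a minimal sketch of how `_pickle.UnpicklingError: state is not a dictionary` arises in general: the spawn launcher re-creates the trainer in a child process by unpickling it, and unpickling fails when some object in the pickle stream has a `__getstate__` that returns a non-dict state without a matching `__setstate__`. The `BadState` class below is hypothetical, made up only to trigger the same error:

```python
import pickle

class BadState:
    """Hypothetical class whose pickled state is not a dict."""

    def __init__(self, value=0):
        self.value = value

    def __getstate__(self):
        # Returning a non-dict here, with no __setstate__ defined,
        # makes the default unpickling path fail.
        return [self.value]

try:
    # Round-trip through pickle, as multiprocessing's spawn does
    # when sending objects to a child process.
    pickle.loads(pickle.dumps(BadState(42)))
except pickle.UnpicklingError as exc:
    print(exc)  # state is not a dictionary
```

If that is the cause here, the offending `__getstate__`/`__setstate__` pair would live somewhere in the objects passed to `trainer.fit` (model, data module, or logger), not in `spawn.py` itself.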

How can I solve this?

hqxmlm Jun 19 '24 08:06