probnmn-clevr

Multi-GPU training raises an error

sjtuytc opened this issue on Nov 16, 2020 · 0 comments

Hi, in the second training phase, training runs fine on a single GPU, but with multiple GPUs it fails. The full log and traceback are below. How can I handle this?

```
2020-11-16 02:03:30.277 | INFO | probnmn.utils.checkpointing:load:156 - Checkpointables not found in file: []
2020-11-16 02:03:30.335 | INFO | probnmn.utils.checkpointing:load:131 - Loading checkpoint from checkpoints/question_coding_ours/checkpoint_best.pth
2020-11-16 02:03:30.367 | INFO | probnmn.utils.checkpointing:load:153 - optimizer not found in checkpointables.
2020-11-16 02:03:30.368 | INFO | probnmn.utils.checkpointing:load:153 - scheduler not found in checkpointables.
2020-11-16 02:03:30.368 | INFO | probnmn.utils.checkpointing:load:141 - Loading program_generator from checkpoints/question_coding_ours/checkpoint_best.pth
2020-11-16 02:03:30.371 | INFO | probnmn.utils.checkpointing:load:153 - question_reconstructor not found in checkpointables.
2020-11-16 02:03:30.371 | INFO | probnmn.utils.checkpointing:load:156 - Checkpointables not found in file: []
training: 0%| | 0/80000 [00:11<?, ?it/s]
Traceback (most recent call last):
  File "scripts/train.py", line 136, in <module>
    trainer.step(iteration)
  File "/localscratch/zelin/batch_soft_reason/baselines/probnmn-clevr/probnmn/trainers/_trainer.py", line 148, in step
    output_dict = self._do_iteration(batch)
  File "/localscratch/zelin/batch_soft_reason/baselines/probnmn-clevr/probnmn/trainers/module_training_trainer.py", line 90, in _do_iteration
    output_dict = self._nmn(batch["image"], pg_output_dict["predictions"], batch["answer"])
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.gather(outputs, self.output_device)
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 165, in gather
    return gather(outputs, output_device, dim=self.dim)
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather
    res = gather_map(outputs)
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
    for k in out))
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
    for k in out))
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map
    for k in out))
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr>
    for k in out))
  File "/localscratch/ksamel3/anaconda3/envs/soft_reason/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
    return type(out)(map(gather_map, zip(*outputs)))
TypeError: zip argument #1 must support iteration
```
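
For context on the error: `nn.DataParallel` gathers per-replica outputs by recursing through tensors, dicts, and sequences. Any value that is none of those — for example a plain Python float stored in the output dict — falls through to `zip(*outputs)` in `gather_map`, which raises exactly this `TypeError`. A minimal sketch that reproduces it; the `ToyNMN` module and its output keys are hypothetical, not the actual probnmn `_nmn` interface:

```python
import torch
import torch.nn as nn

class ToyNMN(nn.Module):
    """Hypothetical stand-in for the NMN, only to demonstrate the gather failure."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    def forward(self, x):
        out = self.linear(x)
        return {
            "predictions": out,          # a Tensor: gather() handles this fine
            "loss": out.mean().item(),   # a Python float: not a Tensor, dict, or
                                         # sequence, so gather_map() falls through
                                         # to zip(*outputs) -> TypeError
        }

model = nn.DataParallel(ToyNMN()).cuda()  # needs >= 2 visible GPUs to hit gather()
x = torch.randn(4, 8).cuda()
out = model(x)  # TypeError: zip argument #1 must support iteration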
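```

One possible workaround, assuming the failing value really is a scalar produced inside `forward`: keep every value in the output dict a tensor with a leading dimension, so `DataParallel` can concatenate the per-replica pieces, and reduce to a scalar only after the gather. A hedged sketch of the same hypothetical module:

```python
    def forward(self, x):
        out = self.linear(x)
        return {
            "predictions": out,
            # Return a 1-element Tensor instead of calling .item(); DataParallel
            # concatenates these across replicas, and the caller reduces them:
            "loss": out.mean().unsqueeze(0),
        }

# Caller side: after the gather, "loss" holds one value per replica.
# loss = output_dict["loss"].mean()
```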
