
BB3 3B Runtime OOM Error on 40GB GPU

Open · fraferra opened this issue on Aug 18, 2022 · 9 comments

Bug description

Training the BB3 3B model fails with a CUDA out-of-memory (OOM) error.

Reproduction steps

  • CUDA 11.6
  • NVIDIA A100 (40GiB)

Training command: python3.8 -m parlai.scripts.multiprocessing_train -t custom_persona,msc --multitask-weights 8,2 -vstep 1000 -lstep 50 --batchsize 1 --init-opt arch/r2c2_base_3B --init-model zoo:seeker/r2c2_blenderbot_3B/model --model projects.seeker.agents.seeker:ComboFidGoldDocumentAgent --n-docs 5 --text-truncate 512 --label-truncate 64 --truncate 512 --fp16 True -lr 1e-06 --lr-scheduler reduceonplateau --lr-scheduler-patience 3 --optimizer adamw --save-after-valid True --warmup-updates 100 --update-freq 1 --gradient-clip 1.0 --skip-generation True --dropout 0.1 --attention-dropout 0.0 -vp 10 -vmt ppl -vmm min -vme 100000 --load-from-checkpoint true --ddp-backend zero2 --checkpoint-activations true --model-file ParlAI/data/models/v2.0.0

Expected behavior

I have previously trained a BB2 3B model on a 16GiB GPU, so I was very surprised to see BB3 3B OOM on a 40GiB GPU. I expected training to run with no difficulties. Note that I have already reduced the batch size to 1 and halved the max sequence length.

Logs

Downloading msc_v0.1.tar.gz: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 51.1M/51.1M [00:02<00:00, 18.5MB/s]
loading: /ParlAI/data/msc/msc/msc_dialogue/session_2
loading: /ParlAI/data/msc/msc/msc_dialogue/session_3
loading: /ParlAI/data/msc/msc/msc_dialogue/session_4
00:41:01 | training...
00:41:02 | Ran out of memory, skipping batch. if this happens frequently, decrease batchsize or truncate the inputs to the model.
00:41:02 | Traceback (most recent call last):
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 620, in _fake_forward_backward_pass
    loss = 0 * self.compute_loss(self._dummy_batch)
  File "/ParlAI/parlai/agents/rag/rag.py", line 916, in compute_loss
    model_output = self.get_model_output(batch)
  File "/ParlAI/parlai/agents/rag/rag.py", line 888, in get_model_output
    model_output = self.model(
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1414, in forward
    self._rebuild_full_params()
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1946, in _rebuild_full_params
    self._cast_fp32_param_shards_to_fp16()
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 2225, in _cast_fp32_param_shards_to_fp16
    alloc_storage_(p._fp16_shard, size=p._fp32_shard.size())
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 2518, in alloc_storage_
    data.storage().resize_(size.numel())
RuntimeError: CUDA out of memory. Tried to allocate 206.00 MiB (GPU 0; 39.41 GiB total capacity; 37.12 GiB already allocated; 48.50 MiB free; 37.53 GiB reserved in total by PyTorch)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 45, in multiprocess_train
    return single_train.TrainLoop(opt).train()
  File "/ParlAI/parlai/scripts/train_model.py", line 1007, in train
    for _train_log in self.train_steps():
  File "/ParlAI/parlai/scripts/train_model.py", line 914, in train_steps
    world.parley()
  File "/ParlAI/parlai/core/worlds.py", line 700, in parley
    self.worlds[self.world_idx].parley()
  File "/ParlAI/parlai/core/worlds.py", line 370, in parley
    acts[1] = agents[1].act()
  File "/ParlAI/parlai/core/torch_agent.py", line 2148, in act
    response = self.batch_act([self.observation])[0]
  File "/ParlAI/parlai/agents/fid/fid.py", line 389, in batch_act
    batch_reply = super().batch_act(observations)
  File "/ParlAI/parlai/core/torch_agent.py", line 2239, in batch_act
    output = self.train_step(batch)
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 776, in train_step
    self._fake_forward_backward_pass()
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 631, in _fake_forward_backward_pass
    raise RuntimeError(m)
RuntimeError: CUDA OOM: Lower batch size (-bs) from 1 or lower  max sequence length (-tr) from 512

00:41:02 | Got the above exception on worker 0. This may cause hangs requiring manual killing of processes.
Traceback (most recent call last):
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 620, in _fake_forward_backward_pass
    loss = 0 * self.compute_loss(self._dummy_batch)
  File "/ParlAI/parlai/agents/rag/rag.py", line 916, in compute_loss
    model_output = self.get_model_output(batch)
  File "/ParlAI/parlai/agents/rag/rag.py", line 888, in get_model_output
    model_output = self.model(
  File "/usr/local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1414, in forward
    self._rebuild_full_params()
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1946, in _rebuild_full_params
    self._cast_fp32_param_shards_to_fp16()
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 2225, in _cast_fp32_param_shards_to_fp16
    alloc_storage_(p._fp16_shard, size=p._fp32_shard.size())
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 2518, in alloc_storage_
    data.storage().resize_(size.numel())
RuntimeError: CUDA out of memory. Tried to allocate 206.00 MiB (GPU 0; 39.41 GiB total capacity; 37.12 GiB already allocated; 48.50 MiB free; 37.53 GiB reserved in total by PyTorch)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 110, in <module>
    MultiProcessTrain.main()
  File "/ParlAI/parlai/core/script.py", line 129, in main
    return cls._run_args(None)
  File "/ParlAI/parlai/core/script.py", line 101, in _run_args
    return cls._run_from_parser_and_opt(opt, parser)
  File "/ParlAI/parlai/core/script.py", line 108, in _run_from_parser_and_opt
    return script.run()
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 106, in run
    return launch_and_train(self.opt, port)
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 75, in launch_and_train
    retval = multiprocess_train(0, opt, port)
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 45, in multiprocess_train
    return single_train.TrainLoop(opt).train()
  File "/ParlAI/parlai/scripts/train_model.py", line 1007, in train
    for _train_log in self.train_steps():
  File "/ParlAI/parlai/scripts/train_model.py", line 914, in train_steps
    world.parley()
  File "/ParlAI/parlai/core/worlds.py", line 700, in parley
    self.worlds[self.world_idx].parley()
  File "/ParlAI/parlai/core/worlds.py", line 370, in parley
    acts[1] = agents[1].act()
  File "/ParlAI/parlai/core/torch_agent.py", line 2148, in act
    response = self.batch_act([self.observation])[0]
  File "/ParlAI/parlai/agents/fid/fid.py", line 389, in batch_act
    batch_reply = super().batch_act(observations)
  File "/ParlAI/parlai/core/torch_agent.py", line 2239, in batch_act
    output = self.train_step(batch)
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 776, in train_step
    self._fake_forward_backward_pass()
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 631, in _fake_forward_backward_pass
    raise RuntimeError(m)
RuntimeError: CUDA OOM: Lower batch size (-bs) from 1 or lower  max sequence length (-tr) from 512


fraferra · Aug 18 '22

The BB3 3B model is a bit more memory-intensive than BB2. In my local experiments I can get it to train on 2 x 32GB GPUs with --text-truncate 128 --truncate 128.

klshuster · Aug 18 '22
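
For reference, applying that suggestion to the reproduction command above only changes the two truncation flags; everything else is left exactly as posted. This is a sketch of the adjusted invocation, not an officially documented configuration:

Adjusted command: python3.8 -m parlai.scripts.multiprocessing_train -t custom_persona,msc --multitask-weights 8,2 -vstep 1000 -lstep 50 --batchsize 1 --init-opt arch/r2c2_base_3B --init-model zoo:seeker/r2c2_blenderbot_3B/model --model projects.seeker.agents.seeker:ComboFidGoldDocumentAgent --n-docs 5 --text-truncate 128 --label-truncate 64 --truncate 128 --fp16 True -lr 1e-06 --lr-scheduler reduceonplateau --lr-scheduler-patience 3 --optimizer adamw --save-after-valid True --warmup-updates 100 --update-freq 1 --gradient-clip 1.0 --skip-generation True --dropout 0.1 --attention-dropout 0.0 -vp 10 -vmt ppl -vmm min -vme 100000 --load-from-checkpoint true --ddp-backend zero2 --checkpoint-activations true --model-file ParlAI/data/models/v2.0.0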

Thank you for the reply! Will it also need this much memory for inference, or is the ~64GB memory requirement only needed for training?

fraferra · Aug 18 '22

Only for training. Inference can easily be run on one 40GB node (I can run it on one 16GB node).

klshuster · Aug 18 '22
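
A rough back-of-envelope calculation helps explain the gap. For fp16 inference only the weights need to live on the GPU, while fp16 mixed-precision AdamW training also keeps gradients, fp32 master weights, and two Adam moment buffers per parameter; ZeRO-2-style sharding splits the optimizer state and gradients across ranks, but each rank still materializes full parameters for forward/backward. A minimal sketch, assuming the ~3B parameters referenced in this thread and the usual byte counts (approximations, not numbers measured from ParlAI):

    n_params = 3e9  # "3B" model size referenced in this thread (approximate)

    # fp16 inference only needs the weights resident on the GPU.
    inference_bytes = n_params * 2

    # fp16 mixed-precision AdamW training keeps, per parameter:
    #   2 B fp16 weights + 2 B fp16 grads + 4 B fp32 master weights
    #   + 4 B Adam exp_avg + 4 B Adam exp_avg_sq = 16 B
    training_bytes = n_params * 16  # excludes activations, buffers, fragmentation

    gib = 2 ** 30
    print(f"inference weights: ~{inference_bytes / gib:.0f} GiB")  # ~6 GiB
    print(f"training states:   ~{training_bytes / gib:.0f} GiB")   # ~45 GiB before activations

That ~45 GiB of persistent training state, plus activations, is roughly consistent with needing 2 x 32GB GPUs for training while inference fits comfortably on a single 16GB card.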

I have spun up a new instance with 2 A100s for a total of 80GB, but the problem still persists. Strangely, it doesn't seem to be using all the memory (see below), yet it still fails with OOM. It's currently running in Docker.

[screenshot of GPU memory usage]
17:39:32 | Using CUDA
17:39:32 | loading dictionary from /ParlAI/data/models/seeker/r2c2_blenderbot_3B/model.dict
17:39:33 | num words = 50264
17:40:15 | Total parameters: 1,346,138,112 (1,346,138,112 trainable)
17:40:15 | Loading existing model params from /ParlAI/data/models/seeker/r2c2_blenderbot_3B/model
rank:  1 | 17:40:20 | Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1137, in summon_full_params
    yield
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1112, in summon_full_params
    stack.enter_context(module.summon_full_params(recurse=False, volatile=volatile))
  File "/usr/local/lib/python3.8/contextlib.py", line 425, in enter_context
    result = _cm_type.__enter__(cm)
  File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1119, in summon_full_params
    self._lazy_init()
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1188, in _lazy_init
    self._init_param_attributes(p)
  File "/usr/local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1289, in _init_param_attributes
    p._full_param_padded = torch.zeros(
RuntimeError: CUDA out of memory. Tried to allocate 130.00 MiB (GPU 1; 39.41 GiB total capacity; 14.54 GiB already allocated; 85.56 MiB free; 14.71 GiB reserved in total by PyTorch)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1137, in summon_full_params
    yield
  File "/usr/local/lib/python3.8/contextlib.py", line 510, in __exit__
    if cb(*exc_details):
  File "/usr/local/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1151, in summon_full_params
    local_shard, _ = self._get_shard(full_tensor)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 790, in _get_shard
    shard = chunks[self.rank].clone()
RuntimeError: CUDA out of memory. Tried to allocate 130.00 MiB (GPU 1; 39.41 GiB total capacity; 14.54 GiB already allocated; 85.56 MiB free; 14.71 GiB reserved in total by PyTorch)

[... the same chained traceback through summon_full_params and _get_shard repeats roughly 40 more times, each ending in "RuntimeError: CUDA out of memory" (first for 130.00 MiB, later for 98.00 MiB allocations), always on GPU 1 with 14.54 GiB already allocated and 85.56 MiB free ...]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ParlAI/parlai/agents/rag/rag.py", line 486, in load_state_dict
    self.model.load_state_dict(state_dict)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1020, in load_state_dict
    sd = self._load_state_dict(state_dict, strict)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1009, in _load_state_dict
    with self.summon_full_params():
  File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1114, in summon_full_params
    yield
  File "/usr/local/lib/python3.8/contextlib.py", line 525, in __exit__
    raise exc_details[1]
  File "/usr/local/lib/python3.8/contextlib.py", line 510, in __exit__
    if cb(*exc_details):
  File "/usr/local/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1151, in summon_full_params
    local_shard, _ = self._get_shard(full_tensor)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 790, in _get_shard
    shard = chunks[self.rank].clone()
RuntimeError: CUDA out of memory. Tried to allocate 206.00 MiB (GPU 1; 39.41 GiB total capacity; 14.54 GiB already allocated; 85.56 MiB free; 14.71 GiB reserved in total by PyTorch)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/ParlAI/parlai/scripts/multiprocessing_train.py", line 45, in multiprocess_train
    return single_train.TrainLoop(opt).train()
  File "/ParlAI/parlai/scripts/train_model.py", line 378, in __init__
    self.agent = create_agent(opt)
  File "/ParlAI/parlai/core/agents.py", line 479, in create_agent
    model = model_class(opt)
  File "/ParlAI/parlai/agents/fid/fid.py", line 358, in __init__
    super().__init__(opt, shared=shared)
  File "/ParlAI/parlai/agents/rag/rag.py", line 186, in __init__
    self._generation_agent.__init__(self, opt, shared)  # type: ignore
  File "/ParlAI/parlai/agents/bart/bart.py", line 72, in __init__
    super().__init__(opt, shared)
  File "/ParlAI/parlai/core/torch_generator_agent.py", line 533, in __init__
    states = self.load(init_model)
  File "/ParlAI/parlai/core/torch_agent.py", line 2078, in load
    self.load_state_dict(states['model'])
  File "/ParlAI/parlai/agents/rag/rag.py", line 503, in load_state_dict
    self.model.load_state_dict(state_dict)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1020, in load_state_dict
    sd = self._load_state_dict(state_dict, strict)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1009, in _load_state_dict
    with self.summon_full_params():
  File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1112, in summon_full_params
    stack.enter_context(module.summon_full_params(recurse=False, volatile=volatile))
  File "/usr/local/lib/python3.8/contextlib.py", line 425, in enter_context
    result = _cm_type.__enter__(cm)
  File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 1120, in summon_full_params
    self.assert_state(TrainingState.IDLE)
  File "/usr/local/lib/python3.8/site-packages/fairscale/nn/data_parallel/fully_sharded_data_parallel.py", line 2265, in assert_state
    raise ValueError(msg)
ValueError: expected to be in states [<TrainingState.IDLE: 1>] but current state is TrainingState.SUMMON_FULL_PARAMS

rank:  1 | 17:40:20 | Got the above exception on worker 1. This may cause hangs requiring manual killing of processes.

fraferra · Aug 19 '22

That's... weird. You used the exact same command as above?

klshuster · Aug 19 '22

Yep, the exact same command as above, with the same parameter values.

fraferra · Aug 22 '22

Looks like there are some issues with sharding on the GPUs? I was able to run your exact command with 2 x 32GB V100s, so I'm not sure what's going on here. Note that I am not using Docker, so that could be another factor.

klshuster · Aug 22 '22
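
One thing worth checking on the Docker side: the OOM messages above report only ~14.7 GiB reserved by PyTorch yet just 85 MiB free on a 40 GiB GPU 1, which suggests something else is holding memory on that device (for example, both ranks landing on the same GPU, or a stale process). A quick, hypothetical sanity check from inside the container, using only standard torch.cuda calls, is to compare what each process sees against nvidia-smi:

    import torch

    # Compare what this process sees against nvidia-smi run inside and outside
    # the container; a large gap points at another process holding GPU memory.
    print("visible devices:", torch.cuda.device_count())
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 2**30:.1f} GiB total")
        print(f"  allocated by this process: {torch.cuda.memory_allocated(i) / 2**30:.2f} GiB")
        print(f"  reserved by this process:  {torch.cuda.memory_reserved(i) / 2**30:.2f} GiB")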

This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.

github-actions[bot] · Sep 22 '22

Are we sure we are using FSDP here?

I'm with Kurt. This is straight weird.

stephenroller · Sep 22 '22
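
The tracebacks above do show FairScale's fully_sharded_data_parallel frames, so FSDP appears to be active in the failing run. For a programmatic answer, a hypothetical check (not an existing ParlAI helper) could inspect whether the agent's model has been wrapped in FairScale's FullyShardedDataParallel, which is what --ddp-backend zero2 is expected to produce:

    from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

    def describe_fsdp_wrapping(module):
        """Report which submodules (if any) are wrapped in FairScale FSDP."""
        wrapped = [name or "<root>" for name, m in module.named_modules() if isinstance(m, FSDP)]
        print("top-level class:", type(module).__name__)
        print("FSDP-wrapped modules:", len(wrapped))
        for name in wrapped[:10]:
            print("  ", name)

    # e.g., after the agent is constructed in a training run:
    #   describe_fsdp_wrapping(agent.model)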