
CUDA out of memory on a 40G GPU

Open SunYue98 opened this issue 1 year ago • 16 comments

I followed the setup in docs/ISSUES.md and docs/SAMPLING.md, but still run out of memory. Here are my config and commands.

In configs/inference/vista.yaml, I changed en_and_decode_n_samples_a_time to 1:

model:
  target: vwm.models.diffusion.DiffusionEngine
  params:
    input_key: img_seq
    scale_factor: 0.18215
    disable_first_stage_autocast: True
    en_and_decode_n_samples_a_time: 1
    num_frames: &num_frames 25

then ran sampling with

python sample.py --low_vram

which fails with:

Traceback (most recent call last):
  File "/gemini/code/Vista/sample.py", line 245, in <module>
    out = do_sample(
  File "/root/miniconda3/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/gemini/code/Vista/sample_utils.py", line 304, in do_sample
    c, uc = get_condition(model, value_dict, num_frames, force_uc_zero_embeddings, device)
  File "/gemini/code/Vista/sample_utils.py", line 262, in get_condition
    c, uc = model.conditioner.get_unconditional_conditioning(
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 175, in get_unconditional_conditioning
    c = self(batch_c, force_cond_zero_embeddings)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 127, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/encoders/modules.py", line 488, in forward
    out = self.encoder.encode(vid[n * n_samples: (n + 1) * n_samples])
  File "/gemini/code/Vista/vwm/models/autoencoder.py", line 470, in encode
    z = self.encoder(x)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 540, in forward
    h = self.down[i_level].block[i_block](hs[-1], temb)
  File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 119, in forward
    h = nonlinearity(h)
  File "/gemini/code/Vista/vwm/modules/diffusionmodules/model.py", line 48, in nonlinearity
    return x * torch.sigmoid(x)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.52 GiB (GPU 0; 39.40 GiB total capacity; 37.00 GiB already allocated; 1.62 GiB free; 37.27 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Segmentation fault (core dumped)
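
(For what it's worth, the allocator hint at the end of the OOM message is set via an environment variable, e.g.

PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python sample.py --low_vram

though since reserved memory here is roughly equal to allocated memory, this looks like a genuine OOM rather than fragmentation.)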

Did I miss some useful setting? Any help would be appreciated.

SunYue98 avatar Jun 16 '24 03:06 SunYue98

I also encountered the same problem, hope to receive help

zhangxiao696 avatar Jun 24 '24 13:06 zhangxiao696

I faced a similar challenge. I don't think this is a permanent solution, but just to see the method working, I decreased the resolution of the generated video to 320 x 576 along with setting en_and_decode_n_samples_a_time: 1.

Maybe the authors @zhangxiao696 can provide a better solution. Thanks

shashankvkt avatar Jun 27 '24 15:06 shashankvkt

I faced a similar challenge. I don't think this is a permanent solution, but just to see the method working, I decreased the resolution of the generated video to 320 x 576 along with setting en_and_decode_n_samples_a_time: 1.

Maybe the authors @zhangxiao696 can provide a better solution. Thanks

I changed num_frames and n_frames to 22 and it works. Same as you, it's only a temporary solution.
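
(Concretely, that is the frame-count line from configs/inference/vista.yaml quoted above,

num_frames: &num_frames 22

plus the matching n_frames wherever it is set on the sampling side.)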

zhangxiao696 avatar Jun 28 '24 04:06 zhangxiao696

@Little-Podi @kashyap7x @YTEP-ZHI, may I ask if you guys have any alternate solution?

shashankvkt avatar Jun 28 '24 07:06 shashankvkt

Sorry, we do not have any other useful tips to provide at the moment. Currently, only --low_vram can save memory without degrading quality. Other solutions, such as reducing the resolution or the number of frames, are likely to produce inferior results. We will try our best to solve this challenge.

Little-Podi avatar Jun 28 '24 08:06 Little-Podi

There is actually a way to further reduce memory usage without reducing the quality.

The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace

https://github.com/OpenDriveLab/Vista/blob/cea9cd97af5e0d258d1f6b2ed02c6c164d7f6c02/vwm/modules/encoders/modules.py#L127

with https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134
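
For reference, a minimal sketch of the change (the exact code is in the linked fork; this version assumes the embedder input at that line is the stacked frame tensor, and a chunk size of one frame):

# before: all frames go through the encoder in one forward pass,
# so activation memory scales with the number of frames
# emb_out = embedder(batch[embedder.input_key])

# after: encode one frame at a time and concatenate the results,
# trading some speed for a much lower peak memory
# (torch is already imported in modules.py)
emb_out = torch.cat(
    [embedder(frame[None]) for frame in batch[embedder.input_key]],
    dim=0,
)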

roym899 avatar Jul 12 '24 18:07 roym899

There is actually a way to further reduce memory usage without reducing the quality.

The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace

https://github.com/OpenDriveLab/Vista/blob/cea9cd97af5e0d258d1f6b2ed02c6c164d7f6c02/vwm/modules/encoders/modules.py#L127

with https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

I attempted your approach with reduced resolution (64×64), num_frames=4, checkpointing, and LoRA, but there is still insufficient memory on eight 24G GPUs.

LMD0311 avatar Jul 13 '24 09:07 LMD0311

Maybe try the fork I linked and see if that works. It works fine for me with 25 frames, any number of segments, and full resolution. You also have to use the low memory mode (--low_vram) if you aren't already.

roym899 avatar Jul 13 '24 09:07 roym899

Maybe try the fork I linked and see if that works. It works fine for me with 25 frames, any number of segments, and full resolution. You also have to use the low memory mode (--low_vram) if you aren't already.

Thanks for replying!

LMD0311 avatar Jul 13 '24 09:07 LMD0311

There is actually a way to further reduce memory usage without reducing the quality.

The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace

https://github.com/OpenDriveLab/Vista/blob/cea9cd97af5e0d258d1f6b2ed02c6c164d7f6c02/vwm/modules/encoders/modules.py#L127

with https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

Solved my problem!!!

wangsdchn avatar Aug 12 '24 12:08 wangsdchn

There is actually a way to further reduce memory usage without reducing the quality.

The encoder processes all frames in parallel, but this can be changed to process them in sequence instead. After this change the model works fine with less than 20GB memory. Basically replace

https://github.com/OpenDriveLab/Vista/blob/cea9cd97af5e0d258d1f6b2ed02c6c164d7f6c02/vwm/modules/encoders/modules.py#L127

with https://github.com/rerun-io/hf-example-vista/blob/381b9d574befe0e9a60e9130980d8da0aec5c6ec/vista/vwm/modules/encoders/modules.py#L129-L134

Thank you for sharing. I can successfully run inference on a 40G GPU, but I cannot train, even at 320×576 resolution. May I ask how you managed to train with limited GPU memory?

TianDianXin avatar Nov 13 '24 07:11 TianDianXin

@TianDianXin It seems that, for now, we can only switch from A100 40G to A100 80G 😂

SEU-zxj avatar Dec 06 '24 05:12 SEU-zxj

Hello, everyone! I have a question: will the method proposed by @roym899 affect the model's results? I am worried about that...

There may be batch normalization operations in the encoder, and encoding the batch in parallel would produce different results than encoding it sequentially and concatenating the results...

@Little-Podi need your help😖

SEU-zxj avatar Dec 06 '24 06:12 SEU-zxj

Hi @SEU-zxj, I think you can apply that modification confidently. There is no batchnorm in the model, so encoding the batch sequentially will NOT hurt performance.
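
For anyone who wants to verify this on their own checkpoint, a quick sanity check (a sketch, assuming model is the loaded DiffusionEngine; first_stage_model.encoder is the VAE Encoder module that appears in the traceback above):

import torch

enc = model.first_stage_model.encoder  # Encoder from vwm/models/autoencoder.py
x = torch.randn(4, 3, 320, 576, device="cuda")  # 4 frames as one batch

with torch.no_grad():
    z_batched = enc(x)  # all frames in one forward pass
    z_seq = torch.cat([enc(x[i:i + 1]) for i in range(x.shape[0])], dim=0)  # one at a time

# GroupNorm and attention act per sample, so the two should agree
# up to floating-point tolerance
print(torch.allclose(z_batched, z_seq, atol=1e-5))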

YTEP-ZHI avatar Dec 09 '24 13:12 YTEP-ZHI

OK, Thanks for your reply! @YTEP-ZHI

SEU-zxj avatar Dec 10 '24 12:12 SEU-zxj

How do I change the resolution? I tried to change it, but it causes an error. My data config:

data:
  target: vwm.data.dataset.Sampler
  params:
    batch_size: 1
    num_workers: 16
    subsets:
      - NuScenes
    probs:
      - 1
    samples_per_epoch: 16000
    target_height: 64
    target_width: 64
    num_frames: 25

Lightning config:

callbacks:
  image_logger:
    target: train.ImageLogger
    params:
      num_frames: 25
      disabled: false
      enable_autocast: false
      batch_frequency: 100
      increase_log_steps: true
      log_first_step: false
      log_images_kwargs:
        'N': 25
  modelcheckpoint:
    params:
      every_n_epochs: 1
trainer:
  devices: 0,
  benchmark: true
  num_sanity_val_steps: 0
  accumulate_grad_batches: 1
  max_epochs: 100
  strategy: deepspeed_stage_2
  gradient_clip_val: 0.3
  accelerator: gpu
  num_nodes: '1'

  | Name              | Type                  | Params
------------------------------------------------------
0 | model             | OpenAIWrapper         | 1.6 B
1 | denoiser          | Denoiser              | 0
2 | conditioner       | GeneralConditioner    | 767 M
3 | first_stage_model | AutoencodingEngine    | 97.7 M
4 | loss_fn           | StandardDiffusionLoss | 0
5 | model_ema         | LitEma                | 0

1.6 B      Trainable params
865 M      Non-trainable params
2.5 B      Total params
10,053.103 Total estimated model params size (MB)

Epoch 0:   0%| | 0/16000 [00:00<?, ?it/s]
Exiting
Traceback (most recent call last):
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 92, in launch
    return function(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 559, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 935, in _run
    results = self._run_stage()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 978, in _run_stage
    self.fit_loop.run()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 201, in run
    self.advance()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 354, in advance
    self.epoch_loop.run(self._data_fetcher)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 133, in run
    self.advance(data_fetcher)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 218, in advance
    batch_output = self.automatic_optimization.run(trainer.optimizers[0], kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 185, in run
    self._optimizer_step(kwargs.get("batch_idx", 0), closure)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 261, in _optimizer_step
    call._call_lightning_module_hook(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 142, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/core/module.py", line 1265, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 158, in step
    step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 257, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 224, in optimizer_step
    return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/deepspeed.py", line 92, in optimizer_step
    closure_result = closure()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 140, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 126, in closure
    step_output = self._step_fn()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 308, in _training_step
    training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 288, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 329, in training_step
    return self.model(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 1914, in forward
    loss = self.module(*inputs, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 90, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 211, in training_step
    loss, loss_dict = self.shared_step(batch)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 207, in shared_step
    loss, loss_dict = self(x, batch)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/models/diffusion.py", line 198, in forward
    loss = self.loss_fn(self.model, self.denoiser, self.conditioner, x, batch)  # go to StandardDiffusionLoss
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/loss.py", line 60, in forward
    return self._forward(network, denoiser, cond, input)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/loss.py", line 93, in _forward
    model_output = denoiser(network, noised_input, sigmas, cond, cond_mask)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/denoiser.py", line 35, in forward
    return (network(noised_input * c_in, c_noise, cond, cond_mask, self.num_frames) * c_out + noised_input * c_skip)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/wrappers.py", line 32, in forward
    return self.diffusion_model(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/video_model.py", line 494, in forward
    h = module(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 44, in forward
    x = layer(x, emb, num_frames)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/video_model.py", line 65, in forward
    x = super().forward(x, emb)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 254, in forward
    return checkpoint(self._forward, x, emb)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 249, in checkpoint
    return CheckpointFunction.apply(function, preserve, *args)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 107, in forward
    outputs = run_function(*args)
  File "/home/ma-user/work/g84397891/Vista-main/vwm/modules/diffusionmodules/openaimodel.py", line 284, in _forward
    return self.skip_connection(x) + h
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA error: misaligned address
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ma-user/work/g84397891/Vista-main/train.py", line 916, in <module>
    raise error
  File "/home/ma-user/work/g84397891/Vista-main/train.py", line 896, in <module>
    trainer.fit(model, data, ckpt_path=ckpt_resume_path)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 520, in fit
    call._call_and_handle_interrupt(
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 68, in _call_and_handle_interrupt
    trainer._teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 958, in _teardown
    self.strategy.teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 430, in teardown
    super().teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/parallel.py", line 125, in teardown
    super().teardown()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 475, in teardown
    self.lightning_module.cpu()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 78, in cpu
    return super().cpu()
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 954, in cpu
    return self._apply(lambda t: t.cpu())
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/home/ma-user/anaconda3/envs/PyTorch-2.0.0/lib/python3.9/site-packages/torch/nn/modules/module.py", line 954, in <lambda>
    return self._apply(lambda t: t.cpu())
RuntimeError: CUDA error: misaligned address
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
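
(For what it's worth, the reduced resolution that was reported to work earlier in this thread is 320 x 576, not 64 x 64; in the data config above that would be, everything else unchanged:

target_height: 320
target_width: 576

Whether that also avoids the misaligned address error here is untested.)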

BlackTea-c avatar Jan 20 '25 08:01 BlackTea-c