litgpt
Multiple GPU training stuck on Loading extension module utils / Time to load utils op XXX seconds
When starting training with multiple GPUs, it gets stuck indefinitely (I let it hang for 10 minutes, then closed it) on the following log. Runpod.io tells me the GPUs have 34% VRAM used and 100% GPU usage:
With 2 GPUs:
Loading extension module utils...
Time to load utils op: 11.247538805007935 seconds
Loading extension module utils...
Time to load utils op: 11.123851776123047 seconds
With 4 GPUs it looks like this:
Loading extension module utils...
Time to load utils op: 11.247538805007935 seconds
Loading extension module utils...
Time to load utils op: 11.123851776123047 seconds
Loading extension module utils...
Time to load utils op: 11.247538805007935 seconds
Loading extension module utils...
Time to load utils op: 11.123851776123047 seconds
Here is the full log:
root@492f6f0b65c4:/workspace/Lit-Parrot# ./start_training.sh
[2023-06-19 15:41:32,009] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
{'eval_interval': 8, 'save_interval': 20, 'eval_iters': 100, 'log_interval': 1, 'devices': 2, 'learning_rate': 0.0003, 'batch_size': 128, 'micro_batch_size': 2, 'gradient_accumulation_iters': 64, 'num_epochs': 60, 'max_iters': 31320, 'weight_decay': 0.01, 'lora_r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'warmup_iters': 100}
initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/2
[2023-06-19 15:41:32,209] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2023-06-19 15:41:35,207] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
{'eval_interval': 8, 'save_interval': 20, 'eval_iters': 100, 'log_interval': 1, 'devices': 2, 'learning_rate': 0.0003, 'batch_size': 128, 'micro_batch_size': 2, 'gradient_accumulation_iters': 64, 'num_epochs': 60, 'max_iters': 31320, 'weight_decay': 0.01, 'lora_r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'warmup_iters': 100}
initializing deepspeed distributed: GLOBAL_RANK: 1, MEMBER: 2/2
[2023-06-19 15:41:35,376] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
Enabling DeepSpeed BF16.
[rank: 1] Global seed set to 1338
/usr/local/lib/python3.10/dist-packages/lightning/fabric/loggers/csv_logs.py:186: UserWarning: Experiment logs directory trained-model/falcon-7b-dsul_1/version_0 exists and is not empty. Previous log files in this directory will be deleted when the new ones are saved!
rank_zero_warn(
[rank: 0] Global seed set to 1337
Loading model 'checkpoints/tiiuae/falcon-7b/lit_model.pth' with {'block_size': 2048, 'vocab_size': 50254, 'padding_multiple': 512, 'padded_vocab_size': 65024, 'n_layer': 32, 'n_head': 71, 'n_embd': 4544, 'rotary_percentage': 1.0, 'parallel_residual': True, 'bias': False, 'n_query_groups': 1, 'shared_attention_norm': True}
Number of trainable parameters: 3506176
Using /root/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Creating extension directory /root/.cache/torch_extensions/py310_cu118/utils...
Emitting ninja build file /root/.cache/torch_extensions/py310_cu118/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
Using /root/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
[1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /usr/local/lib/python3.10/dist-packages/torch/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.10/dist-packages/torch/include/TH -isystem /usr/local/lib/python3.10/dist-packages/torch/include/THC -isystem /usr/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -c /usr/local/lib/python3.10/dist-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o
[2/2] c++ flatten_unflatten.o -shared -L/usr/local/lib/python3.10/dist-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so
Loading extension module utils...
Time to load utils op: 11.247538805007935 seconds
Loading extension module utils...
Time to load utils op: 11.123851776123047 seconds
I don't have this issue when using only one GPU.
This seems to be a DeepSpeed issue, so it might disappear with #118. Which script are you running?
I would suggest trying out a DeepSpeed example on your setup to check whether the issue is with this repository or with DeepSpeed itself. You can find DeepSpeed examples here: https://github.com/microsoft/DeepSpeedExamples/tree/master/training/cifar
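For example, a quick sanity check is to run the CIFAR example on the same machine with the same number of GPUs. This is a sketch only: the script and config file names below are assumptions based on the DeepSpeedExamples repository layout and may have changed, so check that example's README before running it.
git clone https://github.com/microsoft/DeepSpeedExamples.git
cd DeepSpeedExamples/training/cifar
# Launch on 2 GPUs with the DeepSpeed launcher; if this also hangs, the problem
# is in the DeepSpeed/NCCL setup rather than in this repository.
deepspeed --num_gpus=2 cifar10_deepspeed.py --deepspeed --deepspeed_config ds_config.json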
I had a similar hanging issue, and all it took was deleting the cache directory from the log line that looks something like this:
Using /home/griffina/.cache/torch_extensions/py311_cu118 as PyTorch extensions root...
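As a sketch of that fix: the command below removes the cached extension build so it gets recompiled on the next run. Substitute the path printed in your own "PyTorch extensions root" line (in the log above it is /root/.cache/torch_extensions/py310_cu118); the py/cu suffix depends on your Python and CUDA versions.
# Remove the cached torch extension build; it will be rebuilt on the next run.
# Use the path from your own "PyTorch extensions root" log line.
rm -rf ~/.cache/torch_extensions/py310_cu118
The next run will then rebuild the extension, which is what the "Creating extension directory ..." and "Building extension module utils..." lines in the log above correspond to.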