DeepSpeed
[BUG] Error `raise RuntimeError(f"still have inflight params "` when doing IDEFICS inference
Describe the bug
Hi, I am running into a bug with the following traceback:
Invalidate trace cache @ step 2: expected module 4, but got module 2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/transformers/generation/utils.py", line 1602, in generate
return self.greedy_search(
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/transformers/generation/utils.py", line 2450, in greedy_search
outputs = self(
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1547, in _call_impl
hook_result = hook(self, args, result)
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
ret_val = func(*args, **kwargs)
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 350, in _end_of_forward_hook
self.get_param_coordinator(training=False).reset_step()
File "/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 203, in reset_step
raise RuntimeError(f"still have inflight params "
RuntimeError: still have inflight params [{'id': 4, 'status': 'AVAILABLE', 'numel': 32, 'ds_numel': 32, 'shape': (32,), 'ds_shape': (32,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([32])}, {'id': 17, 'status': 'AVAILABLE', 'numel': 1184, 'ds_numel': 1184, 'shape': (37, 32), 'ds_shape': (37, 32), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([1184])}, {'id': 47, 'status': 'AVAILABLE', 'numel': 32, 'ds_numel': 32, 'shape': (32,), 'ds_shape': (32,), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': set(), 'ds_tensor.shape': torch.Size([32])}, {'id': 64, 'status': 'AVAILABLE', 'numel': 32, 'ds_numel': 32, 'shape': (32,), ...
This happens when trying to run inference with the IDEFICS model under ZeRO stage 3 (it is also reproducible with a tiny model).
To Reproduce
Please first install transformers version 4.33.2, then make the following change in transformers/src/transformers/models/idefics/modeling_idefics.py: at line 431, replace
additional_features = F.linear(input, self.additional_fc.weight, self.additional_fc.bias)
with
additional_features = self.additional_fc(input)
(otherwise a different bug is hit first); see the sketch just below.
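To spell the edit out (a sketch; the indentation is assumed from the surrounding method body, and only the changed statement is shown):
# src/transformers/models/idefics/modeling_idefics.py, line 431
# before:
# additional_features = F.linear(input, self.additional_fc.weight, self.additional_fc.bias)
# after: call the submodule directly instead of F.linear on its weights
additional_features = self.additional_fc(input)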
I am on the latest DeepSpeed version, v0.10.3.
Now, the following script raises the error:
import deepspeed
import torch
from transformers import AutoProcessor, IdeficsForVisionText2Text
from transformers.integrations import HfDeepSpeedConfig
checkpoint = "HuggingFaceM4/tiny-random-idefics"
device = "cuda" if torch.cuda.is_available() else "cpu"
# Define the config before the model
ds_config = {
"communication_data_type": "fp32",
"bf16": {"enabled": True},
"zero_optimization": {
"stage": 3,
"overlap_comm": False,
"reduce_bucket_size": "auto",
"contiguous_gradients": True,
"stage3_gather_16bit_weights_on_model_save": False,
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 2e9,
"stage3_max_reuse_distance": 2e9,
"offload_optimizer": {"device": "none"},
"offload_param": {"device": "none"},
},
"gradient_clipping": "auto",
"train_batch_size": 32,
"steps_per_print": 2000000,
}
dschf = HfDeepSpeedConfig(ds_config)  # must be created before from_pretrained and kept alive so ZeRO-3 (zero.Init) is enabled
model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(checkpoint)
engine = deepspeed.initialize(model=model, config_params=ds_config)[0]  # initialize returns (engine, optimizer, dataloader, lr_scheduler); keep only the engine
prompts = [
[
"User: What is in this image?",
"https://upload.wikimedia.org/wikipedia/commons/8/86/Id%C3%A9fix.JPG",
"<end_of_utterance>",
(
"\nAssistant: This picture depicts Idefix, the dog of Obelix in Asterix and Obelix. Idefix is running on"
" the ground.<end_of_utterance>"
),
"\nUser:",
"https://static.wikia.nocookie.net/asterix/images/2/25/R22b.gif/revision/latest?cb=20110815073052",
"And who is that?<end_of_utterance>",
"\nAssistant:",
],
]
inputs = processor(prompts, add_end_of_utterance_token=False, return_tensors="pt").to(device)
exit_condition = processor.tokenizer("<end_of_utterance>", add_special_tokens=False).input_ids
bad_words_ids = processor.tokenizer(["<image>", "<fake_token_around_image>"], add_special_tokens=False).input_ids
generated_ids = model.generate(**inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100)
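For what it's worth, the Hugging Face ZeRO-3 inference examples route generation through the DeepSpeed engine's module rather than calling the bare model; a minimal sketch of that variant, reusing the objects defined above (whether it avoids the error here is an open question):
# Variant: run generation through the DeepSpeed engine, as in the HF ZeRO-3 inference docs
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()
with torch.no_grad():
    generated_ids = ds_engine.module.generate(
        **inputs, eos_token_id=exit_condition, bad_words_ids=bad_words_ids, max_length=100
    )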
Expected behavior
I would expect the generate function to return the newly generated tokens.
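Concretely, the step right after generate would be the standard decode (processor.batch_decode delegates to the tokenizer; skip_special_tokens is my choice here):
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
for text in generated_text:
    print(text)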
ds_report output
[2023-09-26 23:14:19,214] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/accelerate/utils/imports.py:197: UserWarning: `ACCELERATE_DISABLE_RICH` is deprecated and will be removed in v0.22.0 and deactivated by default. Please use `ACCELERATE_ENABLE_RICH` if you wish to use `rich`.
warnings.warn(
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
[WARNING] async_io requires the dev libaio .so object and headers but these were not found.
[WARNING] async_io: please install the libaio-dev package with apt
[WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0
[WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/torch']
torch version .................... 2.0.1+cu118
deepspeed install path ........... ['/fsx/m4/conda/hugo_2/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.10.3, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.7
deepspeed wheel compiled w. ...... torch 2.0, cuda 11.8
shared memory (/dev/shm) size .... 560.91 GB
System info (please complete the following information):
- OS: Ubuntu 20.04.5
- GPU count and types: One node of 8 A100s
- DeepSpeed: 0.10.3
- Hugging Face Transformers: 4.33.2 with the small change described above
- Accelerate: 0.23.0
- Python: 3.8.16
Thanks in advance!
I ran into the same problem. Did you manage to solve it?
Same here, same question.
Running into the same problem; my DeepSpeed version is 0.14.2.