
[BUG] Unpopulated entries in transformer `key` and `value`

Open · tomeras91 opened this issue 3 years ago

Describe the bug Running a forward pass on a DeepSpeedTransformerInference layer with get_present=True produces wrong outputs for key and value: these tensors don't hold the layer's actual self-attention keys and values. Instead, most of their entries are zeros.

To Reproduce Here is a minimal reproducible example that shows the bug:

from deepspeed.ops.transformer import DeepSpeedInferenceConfig, DeepSpeedTransformerInference
import torch

torch.cuda.set_device(0)

hidden_size = 256
heads = 8
num_layers = 12
fp16 = True
layernorm_epsilon = 1e-5
deepspeed_config = DeepSpeedInferenceConfig(hidden_size=hidden_size,
                                            intermediate_size=hidden_size * 4,
                                            heads=heads,
                                            num_hidden_layers=num_layers,
                                            layer_norm_eps=layernorm_epsilon,
                                            fp16=fp16,
                                            pre_layer_norm=True,
                                            stochastic_mode=False,
                                            scale_attention=True,
                                            triangular_masking=True,
                                            local_attention=False,
                                            window_size=256,
                                            )
transformer = DeepSpeedTransformerInference(config=deepspeed_config)
transformer.half()
# Fill every parameter with the constant 0.01 so the correct key/value
# entries are uniform and easy to spot.
new_state_dict = {k: 0.01*torch.ones(*v.shape, dtype=v.dtype, device=v.device)
                  for k, v in transformer.state_dict().items()}
transformer.load_state_dict(new_state_dict)
transformer.cuda()
device = list(transformer.parameters())[0].device

batch_size = 1
seq_len = 4
inputs = torch.ones((batch_size, seq_len, hidden_size), dtype=torch.float16, device=device)
input_mask = torch.ones(*inputs.shape[:2], dtype=torch.bool, device=device)

# get_present=True requests the present (key, value) pair alongside the output.
output, (key, value) = transformer(
    input=inputs,
    input_mask=input_mask,
    get_present=True)

print(f"outupt: \n {output}")
print("#"*20)
print(f"key: \n {key}")
print("#"*20)
print(f"value: \n {value}")

The output I got is:

output: 
 tensor([[[1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154]]],
       device='cuda:0', dtype=torch.float16)
####################
key: 
 tensor([[[0.0356, 0.0356, 0.0356,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000]]],
       device='cuda:0', dtype=torch.float16)
####################
value: 
 tensor([[[0.0356, 0.0356, 0.0356,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000],
         [0.0000, 0.0000, 0.0000,  ..., 0.0000, 0.0000, 0.0000]]],
       device='cuda:0', dtype=torch.float16)
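
As a quick check (plain PyTorch, appended to the end of the script above), the damage can be quantified by counting sequence positions that come back entirely zero; with constant weights and constant inputs, no position should be exactly zero. In the printout above, three of the four positions appear fully zeroed in both key and value:

# Fraction of sequence positions whose key/value vectors are all-zero.
key_zero_rows = (key == 0).all(dim=-1).float().mean().item()
value_zero_rows = (value == 0).all(dim=-1).float().mean().item()
print(f"all-zero rows in key: {key_zero_rows:.0%}, in value: {value_zero_rows:.0%}")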

Expected behavior I expected key and value tensors with all entries populated, not just the first few. For comparison, here is the output I got with DeepSpeed v0.4.1:

output: 
 tensor([[[1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154],
         [1.3154, 1.3154, 1.3154,  ..., 1.3154, 1.3154, 1.3154]]],
       device='cuda:0', dtype=torch.float16,
       grad_fn=<DeepSpeedMLPFunctionBackward>)
####################
key: 
 tensor([[[0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356]]],
       device='cuda:0', dtype=torch.float16,
       grad_fn=<DeepSpeedSelfAttentionFunctionBackward>)
####################
value: 
 tensor([[[0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356],
         [0.0356, 0.0356, 0.0356,  ..., 0.0356, 0.0356, 0.0356]]],
       device='cuda:0', dtype=torch.float16,
       grad_fn=<DeepSpeedSelfAttentionFunctionBackward>)

ds_report output

[2022-06-28 09:02:42,594] [WARNING] [partition_parameters.py:60:<module>] unable to find torch.distributed._all_gather_base. will fall back to torch.distributed.all_gather which will result in suboptimal performance. please consider upgrading your pytorch installation.
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
 [WARNING]  using untested triton version (1.1.1), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-dev package with apt
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch']
torch version .................... 1.8.0a0+1606899
torch cuda version ............... 11.1
torch hip version ................ None
nvcc version ..................... 11.1
deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.6.5, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.8, cuda 11.1

System info (please complete the following information):

  • OS: Ubuntu 20.04
  • GPU count and types: a single A100 GPU
  • Python version: 3.8.5

Launcher context Launching directly with the Python interpreter.

Additional context I think the bug is caused by line 362 in csrc/transformer/inference/csrc/pt_binding.cpp (or something in its vicinity):

size_t offset =
        16 * (hidden_dim * bsz * MAX_OUT_TOKES) + layer_id * 2 * bsz * MAX_OUT_TOKES * hidden_dim;
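
To make that suspicion concrete, here is a speculative Python mirror of the expression for the sizes used in the repro above. The value of MAX_OUT_TOKES and the reading of the workspace as 16 buffers plus per-layer key/value slabs are assumptions for illustration only, not verified against the kernels:

# Hypothetical mirror of the C++ offset above, with the repro's dimensions.
# MAX_OUT_TOKES is a build-time workspace constant; 1024 is a placeholder.
MAX_OUT_TOKES = 1024
hidden_dim = 256  # hidden_size in the repro
bsz = 1
layer_id = 0

offset = (16 * (hidden_dim * bsz * MAX_OUT_TOKES)
          + layer_id * 2 * bsz * MAX_OUT_TOKES * hidden_dim)
print(f"element offset of layer {layer_id}'s key/value slab: {offset}")

If the kernel that writes this cache and the binding that reads it back disagree on this offset, or on the per-token stride inside each bsz * MAX_OUT_TOKES * hidden_dim slab, only the first few entries of the returned tensors would land where the reader looks, and the rest would stay zero-initialized, which matches the pattern above.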

tomeras91 · Jun 28 '22 09:06

@mrwyattii Hey! Can you please take a look at this?

tomeras91 · Jul 04 '22 09:07

@tomeras91 Is this still an issue? If so, please re-open.

jeffra · Dec 02 '22 19:12