
[BUG] GPT-J InferenceEngine Initialization Failure: `RuntimeError`

Open · joehoover opened this issue 2 years ago · 3 comments

Describe the bug Initializing an InferenceEngine for GPT-J fails with the following error:

RuntimeError: view size is not compatible with input tensor's size and stride (at 
least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
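For context, this is the generic PyTorch error raised when .view() is asked for a shape that the tensor's current strides cannot express; a standalone illustration of the error itself (not the DeepSpeed call site, which is somewhere inside the kernel-injection path):

import torch

# .view() needs the target shape to be expressible over the existing strides;
# .reshape() falls back to copying when it is not.
x = torch.arange(24).reshape(2, 3, 4).transpose(0, 1)  # non-contiguous
try:
    x.view(6, 4)  # raises the RuntimeError quoted above
except RuntimeError as e:
    print(e)
print(x.reshape(6, 4).shape)  # torch.Size([6, 4]); .reshape() copies instead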

To Reproduce Steps to reproduce the behavior:

  1. Install packages:
pip3 install torch==1.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install transformers==4.18.0
pip install deepspeed==0.6.4
  2. Run the following:
import os
import deepspeed
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Get local gpu rank from torch.distributed/deepspeed launcher
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

model = deepspeed.init_inference(model,
                                 mp_size=1,
                                 dtype=torch.float16,
                                 replace_method='auto',
                                 replace_with_kernel_inject=True)
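
If initialization succeeds, the returned engine should be usable like the wrapped Transformers model; e.g. a quick sanity check (illustrative only, and only reachable once init_inference stops raising):

# Illustrative sanity check; assumes the engine forwards generate()
# to the wrapped GPT-J model.
inputs = tokenizer("DeepSpeed is", return_tensors="pt").to(f"cuda:{local_rank}")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))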

Expected behavior This should initialize a DeepSpeed GPT-J InferenceEngine.

ds_report output

--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
      runtime if needed. Op compatibility means that your system
      meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
 [WARNING]  please install triton==1.0.0 if you want to use sparse attention
sparse_attn ............ [NO] ....... [NO]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
 [WARNING]  async_io requires the dev libaio .so object and headers but these were not found.
 [WARNING]  async_io: please install the libaio-devel package with yum
 [WARNING]  If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found.
async_io ............... [NO] ....... [NO]
utils .................. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/torch']
torch version .................... 1.11.0+cu113
torch cuda version ............... 11.3
torch hip version ................ None
nvcc version ..................... 11.1
deepspeed install path ........... ['/home/ec2-user/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/deepspeed']
deepspeed info ................... 0.6.4, unknown, unknown
deepspeed wheel compiled w. ...... torch 1.10, cuda 11.1

System info (please complete the following information):

  • OS: AWS SageMaker Notebook instance
  • GPU count and types: 1x NVIDIA T4 (16 GB)
  • Python version: 3.8

Launcher context No launcher, just running in a notebook.

Additional context This exact code was working until recently. I'm currently rolling back versions.

joehoover avatar May 10 '22 14:05 joehoover

Hi @joehoover,

I added this PR. Can you please try it and let me know if it works on your side? Thanks, Reza

RezaYazdaniAminabadi avatar May 12 '22 20:05 RezaYazdaniAminabadi

@joehoover, can you give it a try now with the PR linked above? I think we have this fixed now.

jeffra avatar May 24 '22 17:05 jeffra

@jeffra, sorry for the delay. I just confirmed that I am able to initialize the InferenceEngine.

Thanks!

Unfortunately, I'm now noticing strong divergences between the Transformers GPT-J outputs and the GPT-J InferenceEngine outputs. I'll open another issue, but just for reference:

import os
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

# Get local gpu rank from torch.distributed/deepspeed launcher
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    revision="float16",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)

pipe(
    "All happy families are alike, but ",
    do_sample=False,
)

Returns:

[{'generated_text': 'All happy families are alike, but \nevery unhappy family is unhappy in its own way.\n\n—LEWIS CARROLL\n\n## THE BEGINNING\n\nLet the conversation begin...\n\nFollow the Penguin'}]

However,

model = deepspeed.init_inference(model,
                                 mp_size=1,
                                 dtype=torch.float16,
                                 replace_method='auto',
                                 replace_with_kernel_inject=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)

pipe(
    "All happy families are alike, but ",
    do_sample=False,
)

Returns:

[{'generated_text': 'All happy families are alike, but --\n\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\n-\nw\nw\n'}]
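
A minimal sketch for quantifying the divergence numerically instead of eyeballing generations (hypothetical helper, not part of the repro above; it assumes a second, un-injected fp16 copy of the model is kept on the GPU, since kernel injection modifies the model in place):

import torch

def compare_next_token_logits(hf_model, ds_engine, tokenizer, prompt):
    # Hypothetical diagnostic: compare next-token logits from the vanilla
    # HF model and the kernel-injected engine on the same prompt.
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        hf_logits = hf_model(**inputs).logits[:, -1, :].float()
        ds_logits = ds_engine(**inputs).logits[:, -1, :].float()
    # Differences far beyond fp16 rounding noise implicate the injected kernels.
    print("max abs diff:", (hf_logits - ds_logits).abs().max().item())
    print("argmax match:", torch.equal(hf_logits.argmax(-1), ds_logits.argmax(-1)))

# hf_model is the assumed second copy; model is the engine from init_inference.
compare_next_token_logits(hf_model, model, tokenizer, "All happy families are alike, but ")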

joehoover avatar Jun 23 '22 19:06 joehoover