
ORTModelForCausalLM inference fails (after converting transformer to ONNX)

Open ingo-m opened this issue 1 year ago • 4 comments

System Info

The bug as described below occurs locally on my system with the following specs, and on google colab (see below for reproducible example):

- System: Ubuntu 22.04.3 LTS
- Kernel: 6.5.0-15-generic
- NVIDIA Driver Version: 525.147.05
- CUDA Version: 12.0
- Python: 3.10.13
- torch==2.2.0
- transformers==4.37.1
- onnxruntime-gpu==1.17.0
- optimum[onnxruntime-gpu]==1.16.2

Who can help?

@michaelbenayoun (error happens with a transformer model converted to ONNX) @JingyaHuang (error seems to be related to ONNX runtime)

Information

  • [ ] The official example scripts
  • [X] My own modified scripts

Tasks

  • [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [X] My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

The bug is described below, here is a reproducible example: https://colab.research.google.com/drive/1QZ4_vttj-r5D3fwff49KZ0gzqwB5BRuM?usp=sharing

Expected behavior

I am trying to convert a transformer model ("bigscience/bloomz-560m") to ONNX format, and then perform inference with the ONNX model.

I was previously able to do this, with the following library versions:

torch==2.0.1
transformers==4.30.2
onnxruntime-gpu==1.15.1
optimum[onnxruntime-gpu]==1.9.1

However, after upgrading to the latest versions, performing inference with the ONNX model fails. These are the versions I upgraded to:

torch==2.2.0
transformers==4.37.1
onnxruntime-gpu==1.17.0
optimum[onnxruntime-gpu]==1.16.2

Now, when trying to perform inference, I get this error:

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

When running locally, I additionally get this message in the error traceback (I don't get this on colab):

Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory

The weird thing is that (when running locally) the respective virtual env does actually have libcublasLt.so.11 (in my case at ~/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib):

.
├── __init__.py
├── libcublasLt.so.11
├── libcublasLt.so.12
├── libcublas.so.11
├── libcublas.so.12
├── libnvblas.so.11
└── libnvblas.so.12

So the CUDA library cannot be found, even though it is there? And why does it look for libcublasLt.so.11 (and not libcublasLt.so.12)? 🤔
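
For what it's worth, here is a minimal check I can run in the same environment (my assumption is that it is the dynamic loader, not the file system, that fails to resolve the library):

import ctypes
import onnxruntime as ort

print(ort.get_available_providers())  # providers the installed build ships with
print(ort.get_device())               # "GPU" for a GPU-enabled build

try:
    # CDLL goes through the dynamic loader, so this fails if the library is not
    # on the loader's search path, even if the file exists inside site-packages.
    ctypes.CDLL("libcublasLt.so.11")
    print("libcublasLt.so.11 is resolvable by the dynamic loader")
except OSError as exc:
    print(f"dynamic loader cannot resolve it: {exc}")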

According to this issue, onnxruntime 1.17.0 does support CUDA 12. My CUDA version is 12.0 (which I didn't change).

ingo-m avatar Feb 02 '24 16:02 ingo-m

Hi @ingo-m, thank you for the report.

Locally, how did you install onnxruntime-gpu? The wheel hosted on the PyPI index is built for CUDA 11.8. https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html gives instructions on how to install the ORT CUDA EP for CUDA 12.1.

Not sure it will work, but you can also try export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib
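
Another thing you could try (just a sketch of an idea, not an official workaround, and the path is an assumption based on your message): preload the pip-installed cuBLAS libraries with ctypes before creating the ONNX Runtime session, so the dynamic loader already has them when the CUDA EP is loaded.

import ctypes
import glob
import os

# Adjust this path to your environment.
cublas_dir = os.path.expanduser(
    "~/miniconda3/envs/py-onnx/lib/python3.10/site-packages/nvidia/cublas/lib"
)
for lib in glob.glob(os.path.join(cublas_dir, "libcublas*.so.*")):
    # Load each cuBLAS library into the process so later dlopen calls can reuse it.
    ctypes.CDLL(lib, mode=ctypes.RTLD_GLOBAL)

# Only after this, create the ORT session / load the ORTModelForCausalLM.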

Regarding the

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

I'm not sure yet, will investigate.

fxmarty avatar Feb 05 '24 11:02 fxmarty

@ingo-m I cannot reproduce the issue with:

import torch
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_name = "bigscience/bloomz-560m"
device_name = "cuda"

tokenizer = AutoTokenizer.from_pretrained(base_model_name)

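# Export the PyTorch checkpoint to ONNX on the fly and load it with the CUDA
# execution provider, with IO binding enabled.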
ort_model = ORTModelForCausalLM.from_pretrained(
    base_model_name,
    use_io_binding=True,
    export=True,
    provider="CUDAExecutionProvider",
)

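# Tokenize the prompt and move the input tensors to the GPU.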
prompt = "i like pancakes"
inference_ids = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(
    device_name
)

# Generate a prediction (the step that fails on the reporter's setup).
output_ids = ort_model.generate(
    input_ids=inference_ids["input_ids"],
    attention_mask=inference_ids["attention_mask"],
    max_new_tokens=512,
    temperature=1e-8,
    do_sample=True,
)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

with CUDA 11.8, torch==2.1.2+cu118, optimum==1.16.2, onnxruntime-gpu==1.17.0, onnx==1.15.0.

fxmarty avatar Feb 05 '24 12:02 fxmarty

@fxmarty thanks for looking into it.

Locally, I installed directly from PyPI (with pipenv). In other words, I did not follow the specific instructions for CUDA 12, so that explains the problem. (However, it's strange that I had no problems with CUDA 12 when I was still using the older version optimum[onnxruntime-gpu]==1.9.1 🤔).

On Google Colab, !nvidia-smi shows that it's using CUDA 12 as well (this is a free-tier Colab instance):

Mon Feb  5 12:59:37 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       Off | 00000000:00:04.0 Off |                    0 |
| N/A   61C    P8              10W /  70W |      0MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+

As you said, it looks like CUDA 12 is the culprit.

ingo-m avatar Feb 05 '24 13:02 ingo-m

Regarding this error:

RuntimeError: Error when binding input: There's no data transfer registered for copying tensors from Device:[DeviceType:1 MemoryType:0 DeviceId:0] to Device:[DeviceType:0 MemoryType:0 DeviceId:0]

Perhaps the ORTModelForCausalLM model was not placed on the GPU for inference (since the CUDAExecutionProvider failed to load due to the CUDA 12 issue), while the input tokens were placed on the GPU, and the error then occurs because the model and the tokens are not on the same device?
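
If it helps, something like this should show where things ended up (the attribute names are my assumption and may differ between optimum versions):

# Sketch to check device placement after loading the model and tokenizing the
# prompt; attribute names are an assumption, not verified against this version.
print(ort_model.providers)                 # execution providers the session actually got
print(ort_model.device)                    # device optimum thinks the model is on
print(inference_ids["input_ids"].device)   # device of the tokenized inputs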

ingo-m avatar Feb 05 '24 13:02 ingo-m