
OOM when using InternVL2_5-1B-MPO

Open BobHo5474 opened this issue 10 months ago • 14 comments

I followed the installation guide to build lmdeploy (0.7.0.post3) from source. Inference using the PyTorch engine works fine. However, after quantizing the model to 4-bit using AWQ, I encountered an OOM error when loading the model with the TurboMind engine. I tried setting session_len=2048 in TurbomindEngineConfig.

BobHo5474 avatar Feb 14 '25 08:02 BobHo5474

Can you share the following information?

  • the output of running lmdeploy check_env
  • the reproducible code

lvhan028 avatar Feb 14 '25 09:02 lvhan028

I get an error when I run lmdeploy check_env, because I built lmdeploy on a Jetson Orin.

Below is the code:

from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig
pipe = pipeline("./InternVL2_5-1B-MPO-4bit/", backend_config=TurbomindEngineConfig(model_format="awq", session_len=2048))

I ran lmdeploy lite auto_awq OpenGVLab/InternVL2_5-1B-MPO --work-dir InternVL2_5-1B-MPO-4bit to quantize the model.

BobHo5474 avatar Feb 14 '25 09:02 BobHo5474

Can you enable the INFO log level? Let's check what the log indicates.

from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig
pipe = pipeline("./InternVL2_5-1B-MPO-4bit/", backend_config=TurbomindEngineConfig(model_format="awq", session_len=2048), log_level='INFO')

lvhan028 avatar Feb 14 '25 09:02 lvhan028

What's the memory size of the Jetson Orin?

lvhan028 avatar Feb 14 '25 09:02 lvhan028

The GPU memory size is 16 GB, and I've uploaded the log file: log.txt

BobHo5474 avatar Feb 14 '25 09:02 BobHo5474

Did you build lmdeploy from source? The default prebuilt package is built for the x86_64 platform, not aarch64.
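
As a quick check, the running platform can be confirmed from Python (a minimal sketch using only the standard library):

import platform

# On Jetson devices this prints 'aarch64'; the prebuilt lmdeploy packages
# target 'x86_64'.
print(platform.machine())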

lvhan028 avatar Feb 17 '25 04:02 lvhan028

Yes, I built LMDeploy from source. By default, BUILD_MULTI_GPU is set to ON, but I set it to OFF because the Jetson has only one GPU.

BobHo5474 avatar Feb 17 '25 04:02 BobHo5474

Sure. @lzhangzz do you have any clue?

lvhan028 avatar Feb 17 '25 05:02 lvhan028

This seems similar to https://github.com/InternLM/lmdeploy/issues/3006

quanfeifan avatar Feb 22 '25 08:02 quanfeifan

Thank you for sharing. Since I want to test InternVL2.5, I can't downgrade to v0.4.0. @lzhangzz Do you have any clues on how to solve this issue?

BobHo5474 avatar Feb 24 '25 01:02 BobHo5474

From the log, the OOM is triggered at the tuning stage. The most relevant option is --max-prefill-token-num, whose default value is 8192. To start with, try decreasing it to 2048.

You may also want to decrease --cache-max-entry-count to 0.5 or even 0.25, since the KV cache is allocated before the intermediate buffers, so reserving less for it leaves more room for them.
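
For reference, both options can be set through TurbomindEngineConfig; a minimal sketch, reusing the model path from earlier in this thread and the values suggested above:

from lmdeploy import pipeline, TurbomindEngineConfig

# Sketch: shrink the prefill chunk and the KV-cache reservation to reduce
# peak memory during the tuning stage. These values are starting points.
pipe = pipeline(
    "./InternVL2_5-1B-MPO-4bit/",
    backend_config=TurbomindEngineConfig(
        model_format="awq",
        session_len=2048,
        max_prefill_token_num=2048,   # default is 8192
        cache_max_entry_count=0.25,   # fraction of free GPU memory kept for the KV cache
    ),
    log_level="INFO",
)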

lzhangzz avatar Feb 25 '25 06:02 lzhangzz

I still encountered the same error after adding max_prefill_token_num=2048. However, when I also added cache_max_entry_count=0.25, the OOM did not occur, but inference then failed with a new error: no kernel image is available for execution on the device. Does this error indicate that lmdeploy 0.7.0.post3 is not supported on Jetson?

I've uploaded two log files (log_1.txt, log_2.txt): log_1 uses only max_prefill_token_num, while log_2 uses both options.
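
For what it's worth, "no kernel image is available" usually means the CUDA kernels were not compiled for this GPU's architecture. A minimal sketch to check the device's compute capability, assuming PyTorch is installed:

import torch

# Jetson Orin reports (8, 7), i.e. sm_87; if the build did not target this
# architecture, kernel launches fail with "no kernel image is available".
print(torch.cuda.get_device_capability(0))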

BobHo5474 avatar Feb 26 '25 10:02 BobHo5474

I'm hitting the same issue. My code is:

from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image

# CUDA_VISIBLE_DEVICES=""
# /storage/tool/InternVL2-2B/OpenGVLab/InternVL2-2B
model = '/storage/tools/InterVL2_5_2B-autoawq'
image = load_image('/storage/tiger.jpeg')
pipe = pipeline(model,
                backend_config=TurbomindEngineConfig(model_format="awq", session_len=2048, cache_max_entry_count=0.1),
                log_level='INFO',
                chat_template_config=ChatTemplateConfig('internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)

The error is: no kernel image is available for execution on the device (repeated multiple times in the log).

Have you solved it?

jerry-dream-fu avatar May 12 '25 06:05 jerry-dream-fu

No, LMDeploy seems to have limited compatibility with Jetson.

BobHo5474 avatar May 13 '25 03:05 BobHo5474