OOM when using InternVL2_5-1B-MPO
I followed the installation guide to build lmdeploy (0.7.0.post3) from source. Inference with the PyTorch engine works fine. However, after quantizing the model to 4-bit with AWQ, I encountered an OOM error when loading the model with the TurboMind engine. I tried setting session_len=2048 in TurbomindEngineConfig.
Can you share the following information?
- running lmdeploy check_env
- the reproducible code
I get an error when I run lmdeploy check_env because I built lmdeploy on a Jetson Orin.
Below is the code:
from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig
pipe = pipeline("./InternVL2_5-1B-MPO-4bit/", backend_config=TurbomindEngineConfig(model_format="awq", session_len=2048))
I ran lmdeploy lite auto_awq OpenGVLab/InternVL2_5-1B-MPO --work-dir InternVL2_5-1B-MPO-4bit to quantize the model.
Could you enable the INFO log level? Let's check what the log indicates.
from lmdeploy import pipeline, TurbomindEngineConfig, PytorchEngineConfig
pipe = pipeline("./InternVL2_5-1B-MPO-4bit/", backend_config=TurbomindEngineConfig(model_format="awq",session_len=2048), log_level='INFO')
What's the memory size of the Jetson Orin?
GPU memory size is 16GB, and I uploaded the log file. log.txt
Did you build lmdeploy from source? The default prebuilt package targets the x86_64 platform rather than aarch64.
Yes, I built LMDeploy from source. By default, BUILD_MULTI_GPU is set to ON, but I modified it to OFF because there is only one GPU on the Jetson.
Sure. @lzhangzz do you have any clue?
Seems similar to this issue: https://github.com/InternLM/lmdeploy/issues/3006
Thank you for sharing. I want to test InternVL2.5, so I can't downgrade to v0.4.0. @lzhangzz Do you have any clues on how to solve this issue?
From the log, the OOM is triggered at the tuning stage. The most relevant option is --max-prefill-token-num, whose default value is 8192. To start with, try decreasing it to 2048.
You may also want to decrease --cache-max-entry-count to 0.5 or even 0.25, since the allocation for the KV cache precedes the intermediate buffers.
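Something along these lines, based on your snippet above (just a sketch; the values are a starting point, not a recommendation for your exact setup):
from lmdeploy import pipeline, TurbomindEngineConfig
pipe = pipeline("./InternVL2_5-1B-MPO-4bit/",
                backend_config=TurbomindEngineConfig(model_format="awq",
                                                     session_len=2048,
                                                     max_prefill_token_num=2048,   # smaller intermediate buffers during prefill/tuning
                                                     cache_max_entry_count=0.25),  # fraction of free GPU memory reserved for the KV cache
                log_level='INFO')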
I still encountered the same error after adding max_prefill_token_num=2048.
However, when I also added cache_max_entry_count=0.25, the OOM error did not occur, but inference then failed with a new error: no kernel image is available for execution on the device.
Does no kernel image is available for execution on the device indicate that lmdeploy 0.7.0.post3 is not supported on Jetson?
I uploaded two log files (log_1.txt, log_2.txt); log_1 uses only max_prefill_token_num, while log_2 uses both options.
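In case it helps with the diagnosis: my understanding is that this CUDA error usually means no kernel in the binary was compiled for the device's compute capability (the Orin should report 8.7, i.e. sm_87). A minimal check of what the runtime sees, assuming PyTorch is installed:
import torch
# Print the device name and the compute capability the CUDA runtime reports;
# a "no kernel image" error typically means the build did not target this arch.
print(torch.cuda.get_device_name(0))
print(torch.cuda.get_device_capability(0))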
I'm hitting the same issue. My code is:
from lmdeploy import pipeline, TurbomindEngineConfig, ChatTemplateConfig
from lmdeploy.vl import load_image
# CUDA_VISIBLE_DEVICES=""
# /storage/tool/InternVL2-2B/OpenGVLab/InternVL2-2B
model = '/storage/tools/InterVL2_5_2B-autoawq'
image = load_image('/storage/tiger.jpeg')
pipe = pipeline(model,
                backend_config=TurbomindEngineConfig(model_format="awq", session_len=2048, cache_max_entry_count=0.1),
                log_level='INFO',
                chat_template_config=ChatTemplateConfig('internvl2_5'))
response = pipe(('describe this image', image))
print(response.text)
The error is: no kernel image is available for execution on the device (repeated several times).
Have you solved it?
No, LMDeploy seems to have limited compatibility with Jetson.