[Bug] With prefix cache enabled, inference results for the same prompt are sometimes inconsistent

Open poorpool opened this issue 9 months ago • 9 comments

Checklist

  • [x] 1. I have searched related issues but cannot get the expected help.
  • [x] 2. The bug has not been fixed in the latest version.
  • [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

With prefix caching enabled, Qwen2.5-7B-Instruct produces different results for the same prompt X (result A on the first run, result B on the second, third, and fourth runs). However, after switching Qwen2.5-7B-Instruct to InternLM2.5-7B-Chat, every run produces the same result.

Still with this prompt X: if I put it into a batch and run inference together with other prompts, I get result A; if I then run prompt X alone, I sometimes get result A and sometimes result B.

I set do_sample=False, so sampling is greedy.

My question: with prefix caching enabled, or with batched prompts, inference results for the same prompt are not strictly stable. Is this expected behavior?

Reproduction

Using LMDeploy v0.7.2.post1 with the PyTorch engine in a CUDA environment, run the following script:

from lmdeploy import pipeline
from lmdeploy import PytorchEngineConfig, GenerationConfig
import longprompt # a local file containing a very long string

if __name__ == "__main__":

    pipe = pipeline(
        "/data/llm/Qwen2.5-7B-Instruct/",
        # "/data/llm/internlm2_5-7b-chat/",
        backend_config=PytorchEngineConfig(
            tp=1,
            # device_type="ascend",
            dtype='float16',
            enable_prefix_caching=True,
            eager_mode=True,
            cache_max_entry_count=0.9))
    # Read the text variable from the longprompt.py file
    text = longprompt.harrytext  # a very long string, omitted here for brevity

    generation_config = GenerationConfig(max_new_tokens=50,
                                         temperature=0.0,
                                         ignore_eos=True,
                                         random_seed=42,
                                         top_k=1,
                                         output_logits='generation',
                                         logprobs=10)

    for _ in range(4):
        print("===============================================")
        response = pipe(text, gen_config=generation_config)
        print(response)
    print("===============================================")

The output is shown below. Even though the prompt is identical, the responses differ across runs:

Response(text='This passage from "Harry Potter and the Sorcerer\'s Stone" by J.K. Rowling sets the stage for the story, introducing the Dursleys and their secret about their sister and her family. It also hints at the larger magical world beyond the', generate_token_len=50, input_token_len=4494, finish_reason='length', token_ids=[1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594, 14302, 1, 553, 619, 11352, 13, 95507, 7289, 279, 6430, 369, 279, 3364, 11, 31918, 279, 422, 1723, 47679, 323, 862, 6234, 911, 862, 12923, 323, 1059, 2997, 13, 1084, 1083, 30643, 518, 279, 8131, 23702, 1879, 7797, 279], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text='This passage from "Harry Potter and the Sorcerer\'s Stone" by J.K. Rowling introduces the reader to the Dursley family and sets the stage for the unfolding story. Here are some key points and themes:\n\n1. **The Durs', generate_token_len=50, input_token_len=4494, finish_reason='length', token_ids=[1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594, 14302, 1, 553, 619, 11352, 13, 95507, 38919, 279, 6604, 311, 279, 422, 1723, 3179, 2997, 323, 7289, 279, 6430, 369, 279, 32731, 3364, 13, 5692, 525, 1045, 1376, 3501, 323, 21386, 1447, 16, 13, 3070, 785, 422, 1723], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text='This passage from "Harry Potter and the Sorcerer\'s Stone" by J.K. Rowling introduces the reader to the Dursley family and sets the stage for the unfolding story. Here are some key points and themes:\n\n1. **The Durs', generate_token_len=50, input_token_len=4494, finish_reason='length', token_ids=[1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594, 14302, 1, 553, 619, 11352, 13, 95507, 38919, 279, 6604, 311, 279, 422, 1723, 3179, 2997, 323, 7289, 279, 6430, 369, 279, 32731, 3364, 13, 5692, 525, 1045, 1376, 3501, 323, 21386, 1447, 16, 13, 3070, 785, 422, 1723], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text='This passage from "Harry Potter and the Sorcerer\'s Stone" by J.K. Rowling introduces the reader to the Dursley family and sets the stage for the unfolding story. Here are some key points and themes:\n\n1. **The Durs', generate_token_len=50, input_token_len=4494, finish_reason='length', token_ids=[1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594, 14302, 1, 553, 619, 11352, 13, 95507, 38919, 279, 6604, 311, 279, 422, 1723, 3179, 2997, 323, 7289, 279, 6430, 369, 279, 32731, 3364, 13, 5692, 525, 1045, 1376, 3501, 323, 21386, 1447, 16, 13, 3070, 785, 422, 1723], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
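Comparing the token ids of the first two runs pinpoints where greedy decoding diverges. A quick check sketch, with the token lists copied verbatim from the two responses above:

```python
# Token ids copied from the first two Qwen2.5-7B-Instruct responses above.
run_a = [1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594,
         14302, 1, 553, 619, 11352, 13, 95507, 7289, 279, 6430, 369, 279,
         3364, 11, 31918, 279, 422, 1723, 47679, 323, 862, 6234, 911, 862,
         12923, 323, 1059, 2997, 13, 1084, 1083, 30643, 518, 279, 8131,
         23702, 1879, 7797, 279]
run_b = [1986, 21085, 504, 330, 41298, 29327, 323, 279, 29531, 68881, 594,
         14302, 1, 553, 619, 11352, 13, 95507, 38919, 279, 6604, 311, 279,
         422, 1723, 3179, 2997, 323, 7289, 279, 6430, 369, 279, 32731,
         3364, 13, 5692, 525, 1045, 1376, 3501, 323, 21386, 1447, 16, 13,
         3070, 785, 422, 1723]

def first_divergence(a, b):
    """Return the index of the first differing token, or None if identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return None if len(a) == len(b) else min(len(a), len(b))

print(first_divergence(run_a, run_b))  # -> 18
```

The two runs agree on the first 18 tokens and split at position 18 (token 7289 vs 38919), i.e. a single tie-broken greedy step, after which the continuations differ entirely.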

However, if I switch the model to /data/llm/internlm2_5-7b-chat/, all runs produce the same result:

Response(text="The story you've shared is from the first book of the Harry Potter series, written by J.K. Rowling. It introduces us to the Dursleys, a seemingly ordinary family who are deeply prejudiced against anything magical. Their fear of the", generate_token_len=50, input_token_len=5048, finish_reason='length', token_ids=[918, 3528, 629, 3168, 6247, 505, 635, 410, 1300, 2461, 446, 410, 14072, 29625, 4169, 328, 5485, 684, 751, 11504, 281, 10949, 2880, 281, 1226, 38726, 732, 442, 410, 553, 1874, 47062, 328, 395, 22961, 19275, 3161, 1015, 657, 17388, 33890, 7713, 2501, 4271, 24063, 281, 11114, 8813, 446, 410], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text="The story you've shared is from the first book of the Harry Potter series, written by J.K. Rowling. It introduces us to the Dursleys, a seemingly ordinary family who are deeply prejudiced against anything magical. Their fear of the", generate_token_len=50, input_token_len=5048, finish_reason='length', token_ids=[918, 3528, 629, 3168, 6247, 505, 635, 410, 1300, 2461, 446, 410, 14072, 29625, 4169, 328, 5485, 684, 751, 11504, 281, 10949, 2880, 281, 1226, 38726, 732, 442, 410, 553, 1874, 47062, 328, 395, 22961, 19275, 3161, 1015, 657, 17388, 33890, 7713, 2501, 4271, 24063, 281, 11114, 8813, 446, 410], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text="The story you've shared is from the first book of the Harry Potter series, written by J.K. Rowling. It introduces us to the Dursleys, a seemingly ordinary family who are deeply prejudiced against anything magical. Their fear of the", generate_token_len=50, input_token_len=5048, finish_reason='length', token_ids=[918, 3528, 629, 3168, 6247, 505, 635, 410, 1300, 2461, 446, 410, 14072, 29625, 4169, 328, 5485, 684, 751, 11504, 281, 10949, 2880, 281, 1226, 38726, 732, 442, 410, 553, 1874, 47062, 328, 395, 22961, 19275, 3161, 1015, 657, 17388, 33890, 7713, 2501, 4271, 24063, 281, 11114, 8813, 446, 410], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================
Response(text="The story you've shared is from the first book of the Harry Potter series, written by J.K. Rowling. It introduces us to the Dursleys, a seemingly ordinary family who are deeply prejudiced against anything magical. Their fear of the", generate_token_len=50, input_token_len=5048, finish_reason='length', token_ids=[918, 3528, 629, 3168, 6247, 505, 635, 410, 1300, 2461, 446, 410, 14072, 29625, 4169, 328, 5485, 684, 751, 11504, 281, 10949, 2880, 281, 1226, 38726, 732, 442, 410, 553, 1874, 47062, 328, 395, 22961, 19275, 3161, 1015, 657, 17388, 33890, 7713, 2501, 4271, 24063, 281, 11114, 8813, 446, 410], logprobs=None, logits=None, last_hidden_state=None, index=0)
===============================================

Environment

sys.platform: linux
Python: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb  6 2025, 18:56:27) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-SXM4-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.68
GCC: gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
PyTorch: 2.5.1+cu124
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.20.1+cu124
LMDeploy: 0.7.2.post1+
transformers: 4.49.0
gradio: Not Found
fastapi: 0.115.11
pydantic: 2.10.6
triton: 3.1.0
NVIDIA Topology:
        GPU0    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     0-17,72-89      0               N/A
NIC0    SYS      X      PIX
NIC1    SYS     PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

Error traceback


poorpool avatar Mar 23 '25 09:03 poorpool

@poorpool Hi, thanks for your feedback. This is interesting. I tried running with Qwen2.5-7B but cannot reproduce it with the following prompt. Could you share your long prompt as a file if possible? Thanks.

 "Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?\nLet's think step by step\nAnswer:\nAngelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters = 6 hours total.\nFor the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total.\nAngelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.\nHowever, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total hours x 10 minutes = 120 extra minutes for breaks.\nThey also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.\nAnd they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.\nSo Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total.\nThey want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75\nThey will need to plan to study 4 days to allow for all the time they need.\nThe answer is 4\n\nQuestion: Mark's basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws.  Their opponents score double the 2 pointers but half the 3 pointers and free throws.  
What's the total number of points scored by both teams added together?\nLet's think step by step\nAnswer:\nMark's team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.\nHis team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers\nThey scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws.\nAll together his team scored 50+24+10= 84 points\nMark's opponents scored double his team's number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers.\nHis opponents scored half his team's number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers.\nThey also scored half Mark's team's points in free throws, meaning they scored 10/2=5 points in free throws.\nAll together Mark's opponents scored 100+12+5=117 points\nThe total score for the game is both team's scores added together, so it is 84+117=201 points\nThe answer is 201\n\nQuestion: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. 
If she buys 2/5 times more of each item, what would be the total number of the items she will have if she currently has 60 marbles?\nLet's think step by step\nAnswer:\nWhen Bella buys 2/5 times more marbles, she'll have increased the number of marbles by 2/5*60 = 24\nThe total number of marbles she'll have is 60+24 = 84\nIf Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees.\nIf Bella buys 2/5 times more frisbees, she'll have 2/5*30 = 12 more frisbees.\nThe total number of frisbees she'll have will increase to 30+12 = 42\nBella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards\nIf she buys 2/5 times more deck cards, she'll have 2/5*10 = 4 more deck cards.\nThe total number of deck cards she'll have is 10+4 = 14\nTogether, Bella will have a total of 14+42+84 = 140 items\nThe answer is 140\n\nQuestion: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of each fruit in the fourth basket. How many fruits are there?\nLet's think step by step\nAnswer:\nFor the first three baskets, the number of apples and oranges in one basket is 9+15=24\nIn total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets.\nSince there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.\nThe number of apples in the fourth basket is 9-2=7\nThere are also 15-2=13 oranges in the fourth basket\nThe combined number of oranges and apples in the fourth basket is 13+7=20\nThe fourth basket also contains 14-2=12 bananas.\nIn total, the fourth basket has 20+12=32 fruits.\nThe four baskets together have 32+114=146 fruits.\nThe answer is 146\n\nQuestion: Marie ordered one chicken meal that costs $12, 5 packs of milk that costs $3 each, 4 apples that cost $1.50 each, and some boxes of pizza. Marie paid a total of $50. 
How many boxes of pizza did Marie order if each box costs $8.50?\nLet's think step by step\nAnswer:"

RunningLeon avatar Mar 24 '25 11:03 RunningLeon

Thank you! Sure: qwen_tests.zip. This archive includes my test scripts and outputs for your prompt (qwen_angelo.py for the script, qwen_angelo.txt for the output) and for my long prompt (qwen_harry.py for the script, qwen_harry.txt for the output).

However, with prefix caching enabled we can still reproduce the problem with both test scripts in my environment (output A the first time, then output B the second, third, and fourth times). Thank you for the quick reply ^_^

poorpool avatar Mar 24 '25 13:03 poorpool

@RunningLeon Hi, is there any further progress on this bug? Thanks~

poorpool avatar Apr 02 '25 11:04 poorpool

@poorpool Hi, thanks for your feedback and data. I can reproduce it. I will come back when there is a good solution.

RunningLeon avatar Apr 02 '25 11:04 RunningLeon

@poorpool Hi, sorry for the late reply. This subtle difference may be caused by the compute precision of the prefill attention kernel with cache in LMDeploy. This may take a long time to debug and fix. For now, if this difference is not acceptable to you, we suggest disabling prefix caching. Thanks for your understanding.
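Precision sensitivity of this kind is easy to demonstrate in isolation: float16 addition is not associative, so changing the accumulation order (as a cached-prefill kernel may do) can change the result. An illustrative sketch, unrelated to LMDeploy's actual kernels:

```python
import numpy as np

# float16 has a 10-bit significand: between 2048 and 4096 the spacing of
# representable values is 2, so adding 1.0 there can be lost to rounding.
a, b, c = np.float16(2048), np.float16(1), np.float16(1)

left = (a + b) + c   # each +1 rounds back down: stays at 2048.0
right = a + (b + c)  # 1 + 1 = 2 survives, giving 2050.0

print(float(left), float(right))  # -> 2048.0 2050.0
```

When such a tiny logit difference lands on a near-tie between two tokens, greedy decoding picks a different argmax, and the continuation diverges from that point on.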

RunningLeon avatar Apr 03 '25 10:04 RunningLeon

Thanks for your reply, looking forward to LMDeploy getting better!

poorpool avatar Apr 07 '25 02:04 poorpool

@RunningLeon Regarding this issue, I have recently observed some new behavior: the same problem also occurs on Ascend 910B (both single-card and multi-card), and the inconsistency rate is higher with graph mode enabled. On Ascend 910B with tp=2 I ran 200 prompts one at a time (running each prompt twice to check output consistency under prefix caching): in eager mode about 6 of the 200 are inconsistent, and in graph mode about 24 of the 200 are inconsistent.

Has the issue of inconsistent outputs for the same prompt with prefix caching enabled been fixed in a newer version, or is there a plan to fix it? Thanks ☺️

poorpool avatar May 07 '25 07:05 poorpool

@poorpool Sorry for the late reply. Could you try with https://github.com/InternLM/lmdeploy/pull/3494/files? It tests OK with your sample code on this PR.

RunningLeon avatar May 09 '25 12:05 RunningLeon

@RunningLeon Thank you for this information. This PR tests OK for qwen_angelo.py and qwen_harry.py with Qwen2.5-7B-Instruct or InternLM2.5-7B-Chat on my device. :-D However, qwen_harry.py still has the same problem (running the same prompt 4 times outputs A, B, B, B) when using Qwen2.5-0.5B-Instruct.


My environment:

(cyxlmdeploy) cyx@s30:~/lmdeploy-fixstop$ lmdeploy check_env
sys.platform: linux
Python: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb  6 2025, 18:56:27) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA A100-SXM4-40GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.68
GCC: gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
PyTorch: 2.5.1+cu124
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.5.1, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.20.1+cu124
LMDeploy: 0.7.2.post1+fda6a6e
transformers: 4.49.0
gradio: Not Found
fastapi: 0.115.11
pydantic: 2.10.6
triton: 3.1.0
NVIDIA Topology:
        GPU0    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     0-17,72-89      0               N/A
NIC0    SYS      X      PIX
NIC1    SYS     PIX      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

poorpool avatar May 09 '25 16:05 poorpool