
[Bug] QwQ-32B error: argument --tool-call-parser: not allowed with argument --reasoning-parser

Open jingyibo123 opened this issue 8 months ago • 5 comments

Checklist

  • [x] 1. I have searched related issues but cannot get the expected help.
  • [x] 2. The bug has not been fixed in the latest version.
  • [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

For the QwQ-32B model, the reasoning parser and the tool call parser should be able to work simultaneously.

See vLLM's latest documentation.
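
To make the expected behaviour concrete, here is a minimal sketch (hypothetical tool definition; model name taken from the reproduction below; it assumes the server exposes the OpenAI-compatible API at localhost:8000 and a vLLM-style reasoning_content field). With both parsers active, the reasoning text and the tool call should come back in separate fields instead of being left inline in content.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

# Hypothetical tool definition, for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwq-32B-32K",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

msg = resp.choices[0].message
# With both parsers enabled, reasoning and tool calls should be parsed
# into their own fields rather than appearing as raw text in `content`.
print(getattr(msg, "reasoning_content", None))
print(msg.tool_calls)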

Reproduction

lmdeploy serve api_server Qwen/QwQ-32B-AWQ --server-port 8000 --backend turbomind --model-name qwq-32B-32K --session-len 32768 --max-prefill-token-num 8192 --cache-max-entry-count 0.85 --tp 1 --quant-policy 8 --model-format awq --reasoning-parser qwen-qwq --tool-call-parser qwen --log-level INFO --max-log-len 200

Environment

sys.platform: linux
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3: Tesla V100-PCIE-32GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.66
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.3) 9.4.0
PyTorch: 2.4.0+cu121
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.1
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1  (built against CUDA 12.4)
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=12.1, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=pedantic -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=2.4.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,

TorchVision: 0.19.0+cu121
LMDeploy: 0.7.2.post1+
transformers: 4.48.3
gradio: Not Found
fastapi: 0.112.1
pydantic: 2.8.2
triton: 3.0.0
NVIDIA Topology:
        GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PIX     PIX     PIX     0-17,36-53      0               N/A
GPU1    PIX      X      PIX     PIX     0-17,36-53      0               N/A
GPU2    PIX     PIX      X      PIX     0-17,36-53      0               N/A
GPU3    PIX     PIX     PIX      X      0-17,36-53      0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

lmdeploy serve api_server: error: argument --tool-call-parser: not allowed with argument --reasoning-parser
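
This is argparse's standard message for flags registered in a mutually exclusive group, so the restriction most likely lives in the CLI's argument definitions rather than in the parsers themselves. A self-contained sketch reproducing the same error shape (a hypothetical parser, not lmdeploy's actual CLI code):

import argparse

parser = argparse.ArgumentParser(prog="lmdeploy serve api_server")
group = parser.add_mutually_exclusive_group()
group.add_argument("--reasoning-parser")
group.add_argument("--tool-call-parser")

# Supplying both flags makes argparse exit with:
#   error: argument --tool-call-parser: not allowed with argument --reasoning-parser
parser.parse_args(["--reasoning-parser", "qwen-qwq", "--tool-call-parser", "qwen"])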

jingyibo123 avatar Apr 14 '25 11:04 jingyibo123

Seems like the new Qwen3 supports both tool calling and reasoning as well. Kindly asking if there is any plan to support this?

jingyibo123 avatar May 12 '25 11:05 jingyibo123

@jingyibo123 hi, sorry for the late reply. Could you try removing the restriction in the CLI and submitting a PR to us?
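
For reference, removing the restriction would roughly amount to registering the two flags independently instead of inside a mutually exclusive group. A hedged sketch with a hypothetical parser (not lmdeploy's actual cli module):

import argparse

parser = argparse.ArgumentParser(prog="lmdeploy serve api_server")

# Before (restrictive): both flags in one mutually exclusive group.
# group = parser.add_mutually_exclusive_group()
# group.add_argument("--reasoning-parser")
# group.add_argument("--tool-call-parser")

# After (mutually inclusive): plain arguments that can be combined.
parser.add_argument("--reasoning-parser", help="e.g. qwen-qwq")
parser.add_argument("--tool-call-parser", help="e.g. qwen")

args = parser.parse_args(["--reasoning-parser", "qwen-qwq",
                          "--tool-call-parser", "qwen"])
print(args.reasoning_parser, args.tool_call_parser)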

RunningLeon avatar May 13 '25 02:05 RunningLeon

@RunningLeon I added the pull request; unfortunately I cannot test the PR myself, since Docker is unavailable to me (needed to build from source with turbomind, and I had countless bugs running with the pytorch engine). I could test it with a nightly wheel, if you consider the changes minor.

jingyibo123 avatar May 14 '25 08:05 jingyibo123

Thanks for your contribution. The PR will be reviewed soon.

RunningLeon avatar May 14 '25 10:05 RunningLeon

@jingyibo123 Hi, you can use the docker image to test your PR:

# pull image
docker pull openmmlab/lmdeploy:latest

# create container
docker run -it \
  --gpus=all \
  --ipc=host \
  --network host \
  -v /mnt:/mnt \
  openmmlab/lmdeploy:latest

# fetch code in /opt/lmdeploy
git remote add test https://github.com/jingyibo123/lmdeploy.git
git fetch test parsers-mutually-inclusive:parsers-mutually-inclusive
git checkout parsers-mutually-inclusive

# start servers and test
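
Once the server is up, a possible smoke test for the combined parsers (a hedged sketch; it assumes the OpenAI-compatible endpoint at localhost:8000 and a vLLM-style reasoning_content delta field):

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

stream = client.chat.completions.create(
    model="qwq-32B-32K",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # With both parsers enabled, reasoning tokens should stream in their
    # own field instead of being mixed into `content`.
    reasoning = getattr(delta, "reasoning_content", None)
    if reasoning:
        print("[reasoning]", reasoning)
    if delta.content:
        print("[content]", delta.content)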

RunningLeon avatar May 14 '25 10:05 RunningLeon

PR merged, thanks for all your guidance @RunningLeon @CUHKSZzxy

jingyibo123 avatar May 20 '25 13:05 jingyibo123