[Bug] Failed to serve DeepSeek R1 via SGLang, encountering ``sqlite3.OperationalError: no such column: "size" - should this be a string literal in single-quotes?`` error
### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Log (note: the log itself contains a typo, `proces` instead of `process`):
[2025-02-20 10:15:56 TP1] Scheduler hit an exception: Traceback (most recent call last):
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 1816, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 345, in __init__
self.grammar_backend = create_grammar_backend(
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/sglang/srt/constrained/base_grammar_backend.py", line 78, in create_grammar_backend
from sglang.srt.constrained.outlines_backend import OutlinesGrammarBackend
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/sglang/srt/constrained/outlines_backend.py", line 30, in <module>
from sglang.srt.constrained.outlines_jump_forward import OutlinesJumpForwardMap
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/sglang/srt/constrained/outlines_jump_forward.py", line 49, in <module>
def init_state_to_jump_forward(regex_string):
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/outlines/caching.py", line 93, in decorator
memory = get_cache()
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/outlines/caching.py", line 65, in get_cache
memory["__version__"] = outlines_version
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/diskcache/core.py", line 823, in __setitem__
self.set(key, value, retry=True)
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/diskcache/core.py", line 808, in set
self._row_insert(db_key, raw, now, columns)
File "/root/mambaforge/envs/deepseek/lib/python3.10/site-packages/diskcache/core.py", line 857, in _row_insert
sql(
sqlite3.OperationalError: no such column: "size" - should this be a string literal in single-quotes?
[2025-02-20 10:15:56] Received sigquit from a child proces. It usually means the child failed.
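For context, the failing `INSERT` inside diskcache appears to rely on SQLite's legacy double-quoted string (DQS) behavior: a double-quoted token that is not a known identifier silently falls back to a string literal on builds with DQS enabled, but raises exactly this `no such column` error on builds that disable it (as newer conda-forge libsqlite packages reportedly do). A minimal stdlib sketch of the standard-compliant form (the table and column names here are illustrative, not diskcache's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cache (key TEXT, value TEXT)")

# Standard SQL: single quotes denote string literals and work on every
# SQLite build, regardless of the DQS compile-time setting.
conn.execute("INSERT INTO cache (key, value) VALUES ('size', 'x')")
row = conn.execute("SELECT value FROM cache WHERE key = 'size'").fetchone()
print(row[0])  # x

# A double-quoted "size" in the same literal position only works on builds
# that keep the legacy DQS misfeature enabled; on builds without it, SQLite
# raises: no such column: "size" - should this be a string literal in
# single-quotes?
```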
### Reproduction
python3 -m sglang.launch_server --model /root/.cache/huggingface/hub/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B --trust-remote-code --tp 2
### Environment
INFO 02-20 10:20:39 __init__.py:190] Automatically detected platform cuda.
Python: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:16:10) [GCC 13.3.0]
CUDA available: True
GPU 0,1,2,3: NVIDIA A100-SXM4-80GB
GPU 0,1,2,3 Compute Capability: 8.0
CUDA_HOME: :/usr/local/cuda
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.1.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.12
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.23.0
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.63.2
tiktoken: 0.9.0
anthropic: 0.45.2
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 NV12 NV12 NV12 NV12 NV12 NV12 SYS 0-63 0 N/A
GPU1 NV12 X NV12 NV12 NV12 NV12 NV12 NV12 SYS 0-63 0 N/A
GPU2 NV12 NV12 X NV12 NV12 NV12 NV12 NV12 SYS 0-63 0 N/A
GPU3 NV12 NV12 NV12 X NV12 NV12 NV12 NV12 SYS 0-63 0 N/A
GPU4 NV12 NV12 NV12 NV12 X NV12 NV12 NV12 SYS 64-127 1 N/A
GPU5 NV12 NV12 NV12 NV12 NV12 X NV12 NV12 SYS 64-127 1 N/A
GPU6 NV12 NV12 NV12 NV12 NV12 NV12 X NV12 SYS 64-127 1 N/A
GPU7 NV12 NV12 NV12 NV12 NV12 NV12 NV12 X SYS 64-127 1 N/A
NIC0 SYS SYS SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
ulimit soft: 1024
Thanks for reporting, we will check this issue.
I also hit this problem in a conda environment, but it disappears in a venv environment. (NVIDIA Docker image: nvcr.io/nvidia/pytorch:24.11-py3)
I resolved this error by running `conda install libsqlite=3.48.0`.
Reference: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/2669
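To verify which SQLite library a given environment actually loads (and hence whether the downgrade took effect), you can check from Python:

```python
import sqlite3

# Version of the SQLite library the interpreter is linked against;
# after the downgrade this should report 3.48.0 in the conda env.
print(sqlite3.sqlite_version)
```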
> I resolved this error by running `conda install libsqlite=3.48.0`. Reference: lllyasviel/stable-diffusion-webui-forge#2669

@Yonggie @FenghaoZhu Can this method solve your bugs?
Works for me.