sglang
Avoid failed import of aiter in `QuarkW4A4MXFP4` when using vLLM
Motivation
When I compiled and ran sglang on an AMD GPU, forcing it to use the vLLM inference engine, startup still failed during quantization setup because aiter was not installed:
```
$ SGLANG_USE_AITER=false python3 -m sglang.launch_server --model-path Qwen/Qwen3-4B-Instruct-2507 --trust-remote-code --host 0.0.0.0 --port 30000
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/inoki/Projects/Builds/sglang/python/sglang/launch_server.py", line 24, in <module>
    server_args = prepare_server_args(sys.argv[1:])
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/server_args.py", line 4215, in prepare_server_args
    return ServerArgs.from_cli_args(raw_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/server_args.py", line 3813, in from_cli_args
    return cls(**{attr: getattr(args, attr) for attr in attrs})
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<string>", line 287, in __init__
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/server_args.py", line 608, in __post_init__
    self._handle_gpu_memory_settings(gpu_mem)
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/server_args.py", line 859, in _handle_gpu_memory_settings
    model_config = self.get_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/server_args.py", line 3834, in get_model_config
    from sglang.srt.configs.model_config import ModelConfig
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/configs/model_config.py", line 26, in <module>
    from sglang.srt.layers.quantization import QUANTIZATION_METHODS
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/layers/quantization/__init__.py", line 38, in <module>
    from sglang.srt.layers.quantization.quark.quark import QuarkConfig
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/layers/quantization/quark/quark.py", line 17, in <module>
    from sglang.srt.layers.quantization.quark.schemes import (
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/layers/quantization/quark/schemes/__init__.py", line 4, in <module>
    from .quark_w4a4_mxfp4 import QuarkW4A4MXFP4
  File "/home/inoki/Projects/Builds/sglang/python/sglang/srt/layers/quantization/quark/schemes/quark_w4a4_mxfp4.py", line 15, in <module>
    from aiter.ops.triton.gemm_afp4wfp4 import gemm_afp4wfp4
ModuleNotFoundError: No module named 'aiter'
```
Modifications
This PR avoids the failure by guarding the import, similar to what python/sglang/srt/layers/attention/nsa_backend.py already does:

```python
if _is_hip:
    try:
        from aiter import (  # noqa: F401
            flash_attn_varlen_func,
            mha_batch_prefill_func,
            paged_attention_ragged,
        )
        from aiter.mla import mla_decode_fwd, mla_prefill_fwd  # noqa: F401
    except ImportError:
        print(
            "aiter is AMD specific kernel library. Please make sure aiter is installed on your AMD device."
        )
```

but issuing a warning instead of a print.
Accuracy Tests
Not applicable.
Benchmarking and Profiling
Not applicable.
Checklist
- [x] Format your code according to the Format code with pre-commit.
- [ ] Add unit tests according to the Run and add unit tests.
- [ ] Update documentation according to Write documentations.
- [ ] Provide accuracy and speed benchmark results according to Test the accuracy and Benchmark the speed.
- [ ] Follow the SGLang code style guidance.
- [ ] Work with maintainers to merge your PR. See the PR Merge Process
Summary of Changes
Hello @Inokinoki, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical ModuleNotFoundError that occurred when sglang attempted to use QuarkW4A4MXFP4 quantization on AMD GPUs without the aiter library installed. By wrapping the aiter imports in a try-except block, the system can now gracefully handle the absence of aiter, issuing a warning instead of crashing, thus allowing the application to proceed without requiring aiter if its specific functionalities are not being utilized.
Highlights
- **Error Handling:** Implemented a `try-except ImportError` block around `aiter` imports in `quark_w4a4_mxfp4.py` to prevent crashes when the `aiter` module is not found on AMD devices.
- **User Feedback:** When the `aiter` import fails, a warning message is now issued to the user, guiding them to ensure `aiter` is installed if needed for `QuarkW4A4MXFP4`.
- **Compatibility:** This change improves the robustness of `sglang` when running on AMD GPUs with vLLM, especially in scenarios where `aiter` might not be installed or is explicitly disabled.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.