SII-Auraithm
OK, thanks. We've added this feature and it has passed our tests. https://github.com/InternLM/lmdeploy/pull/4057
```
Using cached metadata: SDAR-30B-A3B-Chat/.hfd/repo_metadata.json
Resume from file list: SDAR-30B-A3B-Chat/.hfd/aria2c_urls.txt
Starting download with aria2c to SDAR-30B-A3B-Chat...
No files to download.
Download completed successfully.
Repo directory: ../tools/hfd/SDAR-30B-A3B-Chat
```
So it looks like the download is complete?
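If you want to double-check completeness beyond aria2c's "No files to download" message, a minimal sketch is to compare the local directory against the cached metadata. This assumes `repo_metadata.json` is the raw Hugging Face Hub API response, which lists files under a `siblings` key with an `rfilename` field; verify that against your actual `.hfd/repo_metadata.json` before relying on it.

```python
import os

def missing_files(metadata: dict, root: str) -> list[str]:
    """Return repo files listed in the cached metadata that are absent locally.

    ASSUMPTION: `metadata` follows the Hugging Face Hub API shape, with a
    "siblings" list whose entries carry an "rfilename" (relative path).
    """
    missing = []
    for entry in metadata.get("siblings", []):
        if not os.path.exists(os.path.join(root, entry["rfilename"])):
            missing.append(entry["rfilename"])
    return missing
```

For example, loading `SDAR-30B-A3B-Chat/.hfd/repo_metadata.json` with `json.load` and calling `missing_files(meta, "SDAR-30B-A3B-Chat")` should return an empty list if every listed file is present (note this only checks existence, not file sizes or checksums).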
Any new developments? We will soon release the relevant training framework, which requires the use of this branch
Does that mean v0.12.0 is not supported yet? But I saw that this branch was merged.
So, if I need to run inference with Qwen3-Omni's `use_audio_in_video` feature, how exactly should I proceed? Should I install `vllm==0.11.0`, modify its files, then install `vllm-omni`, and finally run everything through `vllm-omni`...
I pip installed `vllm==0.11.0` and modified `processing.py` and `qwen3_omni_moe_thinker.py`, but I hit this error:
```
  File "/usr/local/lib/python3.12/dist-packages/vllm/multimodal/processing.py", line 28, in
    from vllm.utils.collection_utils import flatten_2d_lists, full_groupby
ModuleNotFoundError: No module named 'vllm.utils.collection_utils'
```
...
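That `ModuleNotFoundError` suggests the patched `processing.py` was taken from a newer vllm branch than the installed 0.11.0 wheel, which does not ship a `vllm.utils.collection_utils` module. A quick way to check what the installed wheel actually provides, before copying files across versions, is a small probe with `importlib` (this is a generic diagnostic sketch, not part of vllm or vllm-omni):

```python
import importlib.util

def module_present(name: str) -> bool:
    """Return True if `name` is importable in the current environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is itself missing.
        return False

# Probe the modules the patched file expects; any "missing" entry means
# the patch targets a different vllm version than the one installed.
for mod in ("vllm", "vllm.utils", "vllm.utils.collection_utils"):
    print(f"{mod}: {'present' if module_present(mod) else 'missing'}")
```

If `vllm.utils.collection_utils` prints `missing`, the patched files and the installed vllm version need to come from the same branch.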
Which specific files need to be modified? I'm a bit confused about this merge, and my attempts to make the changes have failed.