Results 37 comments of lxysl

Please check that the package versions are correct and that all instructions in the readme have been executed correctly. Here are some common issues others have faced that you can refer to: #56...

The cause of this issue is unclear. I recommend checking that the versions of all packages are correct, and ensuring that the files in vllm_tools are...

All the issues you encountered are due to not executing the following operations:

```bash
# Backup a new weight file
cp -rL VITA_ckpt/ demo_VITA_ckpt/
mv demo_VITA_ckpt/config.json demo_VITA_ckpt/origin_config.json
cd ./web_demo/vllm_tools
cp...
```

> > Is there a more detailed workflow for web_demo inference deployment? Following the readme produces one error after another.
> > 1. First, `Repo id must be in the form 'repo_name' or 'namespace/repo_name'...`. I traced it to the readme step "mv demo_VITA_ckpt/config.json demo_VITA_ckpt/origin_config.json", which removes config.json; restoring config.json solved it.
> > 2. Then I hit `limit_mm_per_prompt is only supported for multimodal models.` Commenting out `limit_mm_per_prompt={'image': 256, 'audio': 50}` solved it.
> > 3. Then vllm reported `['VITAQwen2ForCausalLM'] are not supported...`
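The second error above fires because vllm rejects multimodal-only arguments for any model it does not recognize as multimodal. A minimal sketch of that kind of guard (an illustration of the failure mode, not vllm's actual code; the function name is hypothetical):

```python
from typing import Optional


def validate_mm_limits(model_is_multimodal: bool,
                       limit_mm_per_prompt: Optional[dict]) -> dict:
    """Sketch of the check behind the 'limit_mm_per_prompt is only
    supported for multimodal models' error. Not vllm's real code."""
    if limit_mm_per_prompt and not model_is_multimodal:
        raise ValueError(
            "limit_mm_per_prompt is only supported for multimodal models")
    return limit_mm_per_prompt or {}


# An unpatched vllm does not register VITAQwen2ForCausalLM as multimodal,
# so passing the limits raises. Once the model is recognized as multimodal,
# the same arguments are accepted:
validate_mm_limits(True, {"image": 256, "audio": 50})
```

Commenting the argument out only hides the symptom; per the operations above, copying the vllm_tools files is what makes vllm recognize the model in the first place.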

The root cause of your issue is:

```
[rank0]:   File "/usr/local/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 959, in _validate_shape
[rank0]:     raise ValueError(
[rank0]: ValueError: The expected shape of pixel values per image per batch...
```

https://github.com/XinzeZhang/HUST-PhD-Thesis-Latex/pull/33#issue-2881563132 Hello, I have created a PR to support this feature. @wangshengseee @pikachubz @XinzeZhang

> > Compilation fails after modifying the cls and cover.tex

Please make sure that all the changes to the cls have been applied. See: https://github.com/XinzeZhang/HUST-PhD-Thesis-Latex/pull/33/files

I have pushed the latest code to fix deployment across two GPUs. The key change is to load the torch-related packages only after the subprocess has started.
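The pattern described above, deferring the heavy import until the child process is already running, can be sketched roughly as follows. The worker name and the stand-in import are illustrative, not the actual server.py code; in the real server the deferred import would be `import torch`:

```python
import multiprocessing as mp


def gpu_worker(rank, queue):
    # The heavy import happens only here, inside the already-started
    # child process. Importing torch in the parent before launching
    # children can initialize CUDA there and leave the children with an
    # unusable CUDA context. A harmless stand-in keeps this sketch
    # runnable on machines without torch.
    import math  # stand-in for `import torch`
    queue.put((rank, "imported"))


def launch_workers(num_workers=2):
    ctx = mp.get_context("fork")  # the real server may use "spawn" instead
    queue = ctx.Queue()
    procs = [ctx.Process(target=gpu_worker, args=(i, queue))
             for i in range(num_workers)]
    for p in procs:
        p.start()
    results = sorted(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return results


if __name__ == "__main__":
    launch_workers(2)
```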

Modify cuda_devices: https://github.com/VITA-MLLM/VITA/blob/6a26b5cbe1472e9854072d4add674108ae5c6504/web_demo/server.py#L992 https://github.com/VITA-MLLM/VITA/blob/6a26b5cbe1472e9854072d4add674108ae5c6504/web_demo/server.py#L1013
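For reference, pinning a process to particular GPUs generally comes down to setting `CUDA_VISIBLE_DEVICES` before any CUDA initialization in that process. A minimal illustrative helper (not the actual server.py code, and the device string is a hypothetical value):

```python
import os


def set_cuda_devices(devices):
    """Restrict which GPUs this process can see.

    Must run before torch/CUDA is initialized in the process,
    otherwise the setting has no effect.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = devices
    return os.environ["CUDA_VISIBLE_DEVICES"]


set_cuda_devices("0")  # e.g. pin this process to GPU 0 (hypothetical value)
```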

Could a global wake-up hotkey be added? For example, the ChatGPT macOS app can summon its chat box with Option+Space. ![image](https://github.com/user-attachments/assets/d7b9d40c-991a-4fd0-a582-ab8c69d462ef)