VITA
Is there a more detailed web_demo inference deployment walkthrough?
Is there a more detailed web_demo inference deployment walkthrough? Following the readme, I hit error after error:

1. First, `Repo is must be in the form 'repo_name' or namespace/repo_name...`. I traced it to the readme line `mv demo_VITA_ckpt/config.json demo_VITA_ckpt/origin_config.json`, which removes config.json; restoring config.json fixed it.
2. Then `limit_mm_per_prompt is only supported for multimodal models.` Commenting out `limit_mm_per_prompt={'image': 256, 'audio': 50}` fixed it.
3. Then vLLM reported `['VITAQwen2ForCausalLM'] are not supported for now`, at which point I started to suspect my own steps were wrong.

Can anyone help?
All the issues you encountered are due to not executing the following operations:
```shell
# Back up the weights into a new folder
cp -rL VITA_ckpt/ demo_VITA_ckpt/
mv demo_VITA_ckpt/config.json demo_VITA_ckpt/origin_config.json

# Overwrite with the demo's config and the patched vLLM model files
cd ./web_demo/vllm_tools
cp -rf qwen2p5_model_weight_file/* ../../demo_VITA_ckpt/
cp -rf vllm_file/* your_anaconda/envs/vita_demo/lib/python3.10/site-packages/vllm/model_executor/models/
```
The Demo section of our readme has the following structure:
Demo
- readme0
- 📍 Basic Demo
  - readme1
- 📍 Real-Time Interactive Demo
  - readme2
readme0 must be executed first; only after that should readme1 or readme2 be run.
The instructions here may not be clear, and we will revise them later.
@LayKwokMing I ran into the same problem as you, namely `limit_mm_per_prompt is only supported for multimodal models`. After repeated retries, I found I had configured it incorrectly: when executing `cp -rf vllm_file/* your_anaconda/envs/vita_demo/lib/python3.10/site-packages/vllm/model_executor/models/`, I had copied only into `your_anaconda/envs/vita_demo/lib/python3.10/site-packages/vllm/model_executor/`, which broke every subsequent step.
Your demo's usage instructions are worse than garbage; they are utterly hopeless!!!
> Is there a more detailed web_demo inference deployment walkthrough? Following the readme, I hit error after error. [...]
I ran into exactly the same problem as you. Have you solved it yet? After going through the issues many times, my impression is that the demo_VITA_ckpt contents discussed in similar issues are all for VITA1.0, while VITA1.5 runs into these problems, even though I followed the readme's copy operations exactly.
Hi, via this line in the readme, `cp -rf qwen2p5_model_weight_file/* ../../demo_VITA_ckpt/`, the new config.json under the qwen2p5_model_weight_file folder gets copied into demo_VITA_ckpt. All of your problems stem from here: for some reason you did not copy over the config.json from GitHub and instead kept using the config.json from the Hugging Face model.
I really hope this helps those of you who are running into these problems.