AlphaINF

Results: 6 comments of AlphaINF

I feel this is a key point behind the "endless repetition" (复读机) behavior.

Same issue here. I deployed an LLM with the LLaMA architecture, but its vocab_size had been expanded to 130,000.

I found a solution inspired by Gemma. Open the file vllm/vllm/model_executor/models/llama.py and find the definition of LlamaForCausalLM (at line 278); that is where the originally supported LoRA modules are listed ```python...
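For context, here is a minimal sketch of the kind of change I mean, under my reading (not the verbatim file) that the Gemma-inspired fix is to stop exposing the vocabulary modules to LoRA, since Gemma's model definition does not list them and an expanded 130,000-token vocabulary otherwise trips vLLM's LoRA vocab-size limit. The exact attribute names and surrounding class body differ across vLLM versions, so treat this as a pointer rather than a patch:

```python
# Hypothetical, simplified excerpt of the edit in
# vllm/model_executor/models/llama.py (the real class also derives from
# nn.Module and carries many more attributes; this only shows the LoRA list).

class LlamaForCausalLM:
    # Original list (roughly): the vocabulary-related modules "embed_tokens"
    # and "lm_head" are exposed to LoRA, which hits vLLM's LoRA vocab-size
    # limit once the vocabulary is expanded to ~130,000 tokens.
    #
    # supported_lora_modules = [
    #     "qkv_proj", "o_proj", "gate_up_proj", "down_proj",
    #     "embed_tokens", "lm_head",
    # ]

    # Gemma-style workaround: keep only the attention/MLP projections, so the
    # expanded vocabulary never goes through the LoRA embedding/lm_head path.
    supported_lora_modules = [
        "qkv_proj",
        "o_proj",
        "gate_up_proj",
        "down_proj",
    ]
```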

I ran into this problem too. The code was recently updated to add GLM2 support, and after that the old setup produces all -100 values. Try using older code (from at least three days ago).

Hello, a few days ago I got the TigerBot model working with vLLM, mainly by adapting the prefix in the Jinja (chat-template) script. You can read my blog post: https://www.cnblogs.com/alphainf/p/17884055.html
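For illustration, here is a hypothetical sketch of the kind of chat-template prefix adaptation I mean, rendered with jinja2 so it can be tried standalone. The "### Instruction: / ### Response:" prompt format is my assumption about TigerBot's prefix; check the blog post and model card for the exact strings before using it:

```python
# Hypothetical TigerBot-style chat template (the kind of Jinja script vLLM can
# load via --chat-template). The prefix strings below are assumptions.
from jinja2 import Template

TIGERBOT_TEMPLATE = (
    "{% for m in messages %}"
    "{% if m['role'] == 'user' %}"
    "{{ '\\n\\n### Instruction:\\n' + m['content'] }}"
    "{% else %}"
    "{{ '\\n\\n### Response:\\n' + m['content'] }}"
    "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '\\n\\n### Response:\\n' }}{% endif %}"
)

# Render a single-turn conversation to see the resulting prompt prefix.
prompt = Template(TIGERBOT_TEMPLATE).render(
    messages=[{"role": "user", "content": "Hello"}],
    add_generation_prompt=True,
)
print(prompt)  # -> "\n\n### Instruction:\nHello\n\n### Response:\n"
```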

@HwwwwwwwH I also hit the problem of vLLM failing to run on v0.7.1; my environment and launch command are below. It raises `AttributeError: 'MiniCPMOProcessor' object has no attribute 'get_audio_placeholder'`.
Your current environment
INFO 02-06 15:40:56 init.py:186] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False...