Jiang Li
Hello, I also ran into this problem: after compression and conversion, the model's output is garbled. Bot: `cilechas ERR littTH avéessefopacitychasillessefilles Insidehardtées DamcilePortailitto ERR suivanteesteouléesaset ERRPortailcilexesunciVIDVIDпадаDelegèmesCommand av programmeekenèmesuncisef CivilhardtvierunciracссаDeleg suivante Civiléesées ERR Civilèmes...
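For anyone debugging the same thing, this is the minimal generation check I run on a converted checkpoint (the local path is a placeholder, and it assumes the checkpoint loads through the standard transformers causal-LM classes):

```python
# Quick sanity check on a converted checkpoint (path is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./svdllm-compressed-hf"  # hypothetical converted model directory
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("The capital of France is", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```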
> > Just to simplify, we are able to compress and use the svdllm models. However, we are unable to convert them to Hugging Face formats like safetensors or GGUF. All...
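In case it helps, here is a minimal sketch of dumping the compressed weights to a safetensors file. The `.pt` path is a placeholder for however the SVD-LLM script saved the model, and this only writes the tensors: a stock `LlamaForCausalLM` will not accept these keys, because the compressed model uses custom decomposed modules (e.g. `SVD_LlamaAttention`).

```python
# Sketch: export the compressed model's tensors to safetensors.
import torch
from safetensors.torch import save_file

# weights_only=False because the SVD-LLM checkpoint is assumed to be a pickled nn.Module.
model = torch.load("compressed_model.pt", map_location="cpu", weights_only=False)

# safetensors requires contiguous tensors.
state_dict = {k: v.contiguous() for k, v in model.state_dict().items()}
save_file(state_dict, "model.safetensors")
```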
@JeffreyWong20 Same here; I tested it and got perplexities of **14.84** on WikiText2 and **80.84** on C4 for LLaMA-3. When using LLaMA-3, I initially encountered the following issue: >...
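For reference, this is roughly how I compute the WikiText2 number (a rough sketch; the 2048-token non-overlapping windows and the checkpoint path are my assumptions, not the repo's exact evaluation code):

```python
# Rough WikiText2 perplexity loop (seqlen and chunking are assumptions).
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "./svdllm-compressed-hf"  # hypothetical converted checkpoint
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(
    path, torch_dtype=torch.float16, device_map="auto"
).eval()

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids
seqlen, nlls = 2048, []
for i in range(0, ids.size(1) - seqlen, seqlen):
    chunk = ids[:, i : i + seqlen].to(model.device)
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # mean NLL over the window
    nlls.append(loss.float() * seqlen)
ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen))
print(f"WikiText2 perplexity: {ppl.item():.2f}")
```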
> Hi, [@dellixx](https://github.com/dellixx), may I ask how you ran this code on LLaMA3? Because I have upgraded the transformers version and modified the SVD_LlamaAttention class, but I obtained an extremely...
> The QR code has expired. Could you please resend it?
> I ran the following script.
>
> `lm_eval --model hf --model_args pretrained=openai/gpt-oss-20b --tasks hellaswag --device cuda --batch_size 64 --cache_requests true`
>
> However, I got the following result.
> ...
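If it is easier to reproduce in a notebook, the same run can be expressed through the harness's Python API (a sketch; `simple_evaluate` and these argument names follow recent lm_eval 0.4.x releases and may differ in your installed version):

```python
# Python-API equivalent of the CLI call above (lm_eval 0.4.x-style arguments).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=openai/gpt-oss-20b",
    tasks=["hellaswag"],
    batch_size=64,
    device="cuda",
)
print(results["results"]["hellaswag"])
```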