hzgdeerHo

12 issues by hzgdeerHo

### Describe the issue Issue: Command: ``` PASTE THE COMMANDS HERE. ``` Log: ``` PASTE THE LOGS HERE. ``` from llava.model.builder import load_pretrained_model from llava.mm_utils import get_model_name_from_path from llava.eval.run_llava import...
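For context, the imports in this preview come from LLaVA's evaluation utilities; a minimal sketch of how they are typically wired together looks like this (the checkpoint path is an assumed example, not a value from the issue):

```python
# Minimal LLaVA loading sketch; the checkpoint path below is a
# hypothetical example, not taken from the issue report.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "liuhaotian/llava-v1.5-7b"  # assumed example checkpoint
model_name = get_model_name_from_path(model_path)

# load_pretrained_model returns the tokenizer, the model, the image
# processor, and the maximum context length.
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=model_name,
)
```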

Why does this model always fail to recognize this audio correctly? The script is "1.01 的十次方等于多少，1.02 的十次方等于多少……1.09 的十次方等于多少" ("what is 1.01 to the tenth power, what is 1.02 to the tenth power, ..., what is 1.09 to the tenth power"), and the recognition always loses some of it, as shown here: ![f652bdfd5f2762af654db12b6788e4b](https://github.com/k2-fsa/sherpa-ncnn/assets/145191076/4bafbc2d-0863-4957-9424-8a34d74048ef)
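For reference, file decoding with sherpa-ncnn's Python API generally follows the pattern below (model file names are placeholders; feeding trailing silence before reading the result is one common fix when the tail of an utterance goes missing):

```python
import numpy as np
import soundfile as sf
import sherpa_ncnn

# Model file names are placeholders; point them at a downloaded
# sherpa-ncnn model directory.
recognizer = sherpa_ncnn.Recognizer(
    tokens="tokens.txt",
    encoder_param="encoder.ncnn.param",
    encoder_bin="encoder.ncnn.bin",
    decoder_param="decoder.ncnn.param",
    decoder_bin="decoder.ncnn.bin",
    joiner_param="joiner.ncnn.param",
    joiner_bin="joiner.ncnn.bin",
    num_threads=4,
)

samples, sample_rate = sf.read("test.wav", dtype="float32")
recognizer.accept_waveform(sample_rate, samples)

# Pad with ~0.5 s of silence so the last words are flushed out of the
# streaming decoder before the final result is read.
tail = np.zeros(int(0.5 * sample_rate), dtype=np.float32)
recognizer.accept_waveform(sample_rate, tail)
recognizer.input_finished()
print(recognizer.text)
```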

### Reminder - [X] I have read the README and searched the existing issues. ### Reproduction Run a server like this: ``` CUDA_VISIBLE_DEVICES=0 API_PORT=8090 python src/api_demo.py \ --model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \...

pending
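LLaMA-Factory's api_demo.py serves an OpenAI-compatible API, so a quick smoke test against the server above could look like this (port 8090 comes from the command; the model field is a placeholder name):

```python
import requests

# Assumes the api_demo.py server above is listening on localhost:8090.
resp = requests.post(
    "http://127.0.0.1:8090/v1/chat/completions",
    json={
        "model": "Meta-Llama-3-8B-Instruct",  # placeholder model name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```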

### Reminder - [X] I have read the README and searched the existing issues. ### Reproduction CUDA_VISIBLE_DEVICES=0 llamafactory-cli train \ --stage orpo \ --do_train True \ --model_name_or_path /home/ubuntu/LLaMA-Factory/models/llama3-8b-instruct-pro \ --finetuning_type...

### Reminder - [X] I have read the README and searched the existing issues. ### Reproduction Using the LLaMA-Pro example script to finetune the 01-ai/Yi-1.5-9B-Chat model, with a modified expand.sh: python...

Finetuning llama-3-8B-instruct with the same configuration as the code from https://github.com/hiyouga/LLaMA-Factory/tree/3df986c6793a51ec2cb5f31fd1808cd3a9883bc4/examples/extras/llama_pro always leads to an apparent loss of the model's original ability. I only used the "Identity" training dataset. Can you help?...

TheBloke/deepseek-coder-33B-instruct-GGUF (deepseek-coder-33b-instruct.Q6_K.gguf): when I use llama-cpp-python to load the model, it generates endless "GGGG....", but it works normally when the input question is shorter than about 1000-2000...
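Endless repeated tokens once the prompt grows past a threshold often points at the context window; for reference, llama-cpp-python lets you size it explicitly at load time (a minimal sketch; the n_ctx value is an assumption):

```python
from llama_cpp import Llama

# If n_ctx is left at its small default, prompts longer than the window
# are truncated, which can degenerate into repeated tokens like "GGGG".
llm = Llama(
    model_path="deepseek-coder-33b-instruct.Q6_K.gguf",  # file from the issue
    n_ctx=8192,       # assumed value; size it to your longest prompt
    n_gpu_layers=-1,  # offload all layers if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write hello-world in C."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```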

lm_eval --model gguf --tasks arc_challenge --num_fewshot 25 --model_args model=codellama,base_url=http://127.0.0.1:8090 --batch_size 16 --log_samples --output_path ./hzg_llama3_arc_challenge_25shot_f16GGUF --show_config --cache_requests true --use_cache ./hzg_llama3_arc_challenge_25shot_f16GGUF --verbosity DEBUG Checking cached requests: 100%|███████████████████████████| 4687/4687 [00:00

### What happened? ##### CMD which works normally: CUDA_VISIBLE_DEVICES=0 ./llama-server -m /home/ubuntu/.cache/huggingface/hub/models--MaziyarPanahi--Meta-Llama-3.1-8B-Instruct-GGUF/snapshots/1f301d86d760b435a11a56de3863bc0121bfb98f/Meta-Llama-3.1-8B-Instruct.Q8_0.gguf --gpu-layers 33 -cb --ctx-size 16128 --flash-attn --batch-size 512 --chat-template llama3 --port 8866 --host 0.0.0.0 ##### CMD which does NOT work normally:...

bug-unconfirmed
high severity
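llama-server exposes an OpenAI-compatible /v1/chat/completions route, so the working configuration in the last entry can be probed like this (host and port come from the command above; the prompt is arbitrary):

```python
import requests

# Targets the llama-server instance from the entry above (port 8866).
resp = requests.post(
    "http://127.0.0.1:8866/v1/chat/completions",
    json={
        "model": "Meta-Llama-3.1-8B-Instruct",  # placeholder; the server serves one model
        "messages": [{"role": "user", "content": "Say hi."}],
        "max_tokens": 64,
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])
```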