Jingyao Li

4 issues opened by Jingyao Li

After instruction tuning, I get an adapter_config.json and an adapter_model.bin. How can I merge them with the base model?

good first issue
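
The standard way to fold a LoRA adapter like this back into its base model is Hugging Face PEFT's merge_and_unload. Below is a minimal sketch, assuming the checkpoint is an ordinary PEFT LoRA adapter; base_model_path, adapter_dir, and merged_dir are placeholder paths, not paths from the issue:

```python
# Minimal sketch: merge a PEFT LoRA adapter into its base model.
# Assumes a standard PEFT checkpoint (adapter_config.json + adapter_model.bin);
# base_model_path, adapter_dir, and merged_dir are placeholder paths.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_path = "path/to/base_model"   # the model that was fine-tuned
adapter_dir = "path/to/adapter_output"   # holds adapter_config.json / adapter_model.bin
merged_dir = "path/to/merged_model"      # where the fused weights are written

base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype="auto")
model = PeftModel.from_pretrained(base, adapter_dir)

merged = model.merge_and_unload()        # folds LoRA weights into the base layers
merged.save_pretrained(merged_dir)

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.save_pretrained(merged_dir)    # keep the tokenizer next to the merged weights
```

After saving, merged_dir loads like any ordinary Hugging Face checkpoint, with no adapter files required.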

My code:

```python
import torch
from swift.llm import (ModelType, get_default_template_type,
                       get_vllm_engine, get_template)

def load_model(ckpt_dir):
    model_type = ModelType.deepseek_v2_lite_chat
    template_type = get_default_template_type(model_type)
    llm_engine = get_vllm_engine(
        model_type,
        model_id_or_path=ckpt_dir,
        tensor_parallel_size=torch.cuda.device_count(),
        max_model_len=16384,
        gpu_memory_utilization=0.95,
        cache_dir='.cache')
    llm_engine.generation_config.max_new_tokens = 8192
    tokenizer = llm_engine.hf_tokenizer
    template = get_template(template_type, tokenizer)
    return...
```
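
For context, an engine built this way is usually driven through ms-swift's batched vLLM helper. The sketch below is not the issue author's code: it assumes inference_vllm from swift.llm is available, and './checkpoint' and the query string are placeholders:

```python
# Hedged sketch: batched generation with a get_vllm_engine engine via ms-swift's
# inference_vllm helper. './checkpoint' and the query are placeholders.
from swift.llm import (ModelType, get_default_template_type,
                       get_vllm_engine, get_template, inference_vllm)

model_type = ModelType.deepseek_v2_lite_chat
template_type = get_default_template_type(model_type)
llm_engine = get_vllm_engine(model_type, model_id_or_path='./checkpoint')
template = get_template(template_type, llm_engine.hf_tokenizer)

request_list = [{'query': 'Hello, who are you?'}]   # placeholder request
resp_list = inference_vllm(llm_engine, template, request_list)
for request, resp in zip(request_list, resp_list):
    print(request['query'], '->', resp['response'])
```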

```
Traceback (most recent call last):
  File "/dataset-vlm/jingyaoli/LLMInfer/InfLLM/benchmark/pred.py", line 327, in <module>
    preds = get_pred(
  File "/dataset-vlm/jingyaoli/LLMInfer/InfLLM/benchmark/pred.py", line 260, in get_pred
    output = searcher.generate(
  File "/dataset-vlm/jingyaoli/LLMInfer/InfLLM/benchmark/inf_llm/utils/greedy_search.py", line 32, in generate
    result =...
```