
Run failed

Open SunLang115 opened this issue 2 months ago • 8 comments

```
(swift) [sunkaijie@localhost swift]$ CUDA_VISIBLE_DEVICES=0 swift infer --model_type llava1d6-mistral-7b-instruct
run sh: python /home/sunkaijie/project/swift/swift/cli/infer.py --model_type llava1d6-mistral-7b-instruct
2024-04-22 08:11:33,587 - modelscope - INFO - PyTorch version 2.2.2 Found.
2024-04-22 08:11:33,588 - modelscope - INFO - Loading ast index from /home/sunkaijie/.cache/modelscope/ast_indexer
2024-04-22 08:11:33,621 - modelscope - INFO - Loading done! Current index file version is 1.13.3, with md5 2911bfe27de51af351c02ec5ebb936e7 and a total number of 972 components indexed
[INFO:swift] Start time of running main: 2024-04-22 08:11:33.921263
[INFO:swift] ckpt_dir: None
[INFO:swift] Due to ckpt_dir being None, load_args_from_ckpt_dir is set to False.
[INFO:swift] Setting template_type: llava-mistral-instruct
[INFO:swift] Setting self.eval_human: True
[INFO:swift] Setting overwrite_generation_config: False
[INFO:swift] args: InferArguments(model_type='llava1d6-mistral-7b-instruct', model_id_or_path='AI-ModelScope/llava-v1.6-mistral-7b', model_revision='master', sft_type='full', template_type='llava-mistral-instruct', infer_backend='pt', ckpt_dir=None, load_args_from_ckpt_dir=False, load_dataset_config=False, eval_human=True, seed=42, dtype='fp16', dataset=[], dataset_seed=42, dataset_test_ratio=0.01, val_dataset_sample=10, save_result=True, system=None, max_length=None, truncation_strategy='delete', check_dataset_strategy='none', custom_train_dataset_path=[], custom_val_dataset_path=[], quantization_bit=0, bnb_4bit_comp_dtype='fp16', bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, max_new_tokens=2048, do_sample=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams=1, stop_words=None, use_flash_attn=None, ignore_args_error=False, stream=True, merge_lora=False, merge_device_map='cpu', save_safetensors=True, overwrite_generation_config=False, verbose=None, gpu_memory_utilization=0.9, tensor_parallel_size=1, max_model_len=None, vllm_enable_lora=False, vllm_max_lora_rank=16, vllm_lora_modules=[], show_dataset_sample=10, safe_serialization=None, model_cache_dir=None, merge_lora_and_save=None)
[INFO:swift] Global seed set to 42
[INFO:swift] device_count: 1
[INFO:swift] Downloading the model from ModelScope Hub, model_id: AI-ModelScope/llava-v1.6-mistral-7b
Downloading: 100%|...| 1.59k/1.59k [00:00<00:00, 8.77MB/s]
Downloading: 100%|...| 48.0/48.0 [00:00<00:00, 356kB/s]
Downloading: 100%|...| 111/111 [00:00<00:00, 954kB/s]
Downloading: 100%|...| 4.60G/4.60G [02:49<00:00, 29.1MB/s]
Downloading: 100%|...| 4.66G/4.66G [02:49<00:00, 29.5MB/s]
Downloading: 100%|...| 4.59G/4.59G [02:34<00:00, 31.9MB/s]
Downloading: 100%|...| 250M/250M [00:28<00:00, 9.09MB/s]
Downloading: 100%|...| 71.5k/71.5k [00:00<00:00, 1.16MB/s]
Downloading: 100%|...| 1.46k/1.46k [00:00<00:00, 10.2MB/s]
Downloading: 100%|...| 438/438 [00:00<00:00, 1.47MB/s]
Downloading: 100%|...| 1.71M/1.71M [00:00<00:00, 7.36MB/s]
Downloading: 100%|...| 482k/482k [00:00<00:00, 2.92MB/s]
Downloading: 100%|...| 1.43k/1.43k [00:00<00:00, 6.43MB/s]
Downloading: 100%|...| 702k/702k [00:00<00:00, 4.55MB/s]
Downloading: 100%|...| 6.87k/6.87k [00:00<00:00, 40.8MB/s]
[INFO:swift] Run the command: git -C /home/sunkaijie/.cache/modelscope/hub/_github clone https://github.com/haotian-liu/LLaVA.git LLaVA.git
Unknown option: -C
usage: git [--version] [--help] [-c name=value] [--exec-path[=<path>]]
           [--html-path] [--man-path] [--info-path]
           [-p|--paginate|--no-pager] [--no-replace-objects] [--bare]
           [--git-dir=<path>] [--work-tree=<path>] [--namespace=<name>]
           <command> [<args>]
Traceback (most recent call last):
  File "/home/sunkaijie/project/swift/swift/cli/infer.py", line 5, in <module>
    infer_main()
  File "/home/sunkaijie/project/swift/swift/utils/run_utils.py", line 31, in x_main
    result = llm_x(args, **kwargs)
  File "/home/sunkaijie/project/swift/swift/llm/infer.py", line 264, in llm_infer
    model, template = prepare_model_template(args)
  File "/home/sunkaijie/project/swift/swift/llm/infer.py", line 188, in prepare_model_template
    model, tokenizer = get_model_tokenizer(
  File "/home/sunkaijie/project/swift/swift/llm/utils/model.py", line 3625, in get_model_tokenizer
    model, tokenizer = get_function(model_dir, torch_dtype, model_kwargs,
  File "/home/sunkaijie/project/swift/swift/llm/utils/model.py", line 3376, in get_model_tokenizer_llava
    from llava.model import LlavaMistralForCausalLM, LlavaMistralConfig
ModuleNotFoundError: No module named 'llava'
```

This error appears when I run the code, following the steps in llava最佳实践.md.
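A note on the likely root cause, as a sketch (not an official fix from the swift maintainers): the log shows `Unknown option: -C`, and `git -C` was only added in Git 1.8.5, so an older git (e.g. the 1.7.x shipped with older CentOS) cannot run the clone command swift issues. Because the LLaVA repository is never cloned and installed, the later `import llava` fails with `ModuleNotFoundError`. A small hypothetical helper to check whether the installed git is new enough:

```python
import re
import subprocess

def git_supports_dash_c(version_string=None):
    """Return True if the local git supports `git -C <path>` (added in Git 1.8.5).

    If no version string is given, query `git --version` directly.
    """
    if version_string is None:
        version_string = subprocess.run(
            ["git", "--version"], capture_output=True, text=True
        ).stdout
    # Parse "git version X.Y[.Z]" into a comparable tuple.
    m = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", version_string)
    if not m:
        return False
    major, minor, patch = (int(g or 0) for g in m.groups())
    return (major, minor, patch) >= (1, 8, 5)
```

If this returns `False`, upgrading git (or manually cloning https://github.com/haotian-liu/LLaVA.git into the cache directory shown in the log and installing it with `pip install -e`) should let the import succeed; paths and commands here are inferred from the log above, not verified against the swift source.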

SunLang115 avatar Apr 22 '24 12:04 SunLang115