What are the specific parameter settings?
Is there a pre-trained model for fine-tuning? If I want to train on my own dataset, what should I do, and which file should I start from? Thank you for your amazing work!
Following these steps, I wanted to try the toy dataset, but I got an error: `launch.py: error: argument --node_rank: invalid int value: ''`
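The error above is raised by Python's `argparse`: the launcher received an empty string where an integer rank was expected, which usually means a shell variable (e.g. something like `$NODE_RANK`) expanded to nothing in the launch command. A minimal sketch reproducing the behavior — the `--node_rank` flag name is taken from the error message, everything else here is illustrative:

```python
import argparse

# Minimal reproduction of the launcher's argument handling.
# Declaring --node_rank as type=int means that passing '' raises
# "argument --node_rank: invalid int value: ''", the error reported above.
parser = argparse.ArgumentParser(prog="launch.py")
parser.add_argument("--node_rank", type=int, default=0,
                    help="rank of this node among all participating nodes")

# Supplying a concrete integer (0 for a single-node run) parses fine.
args = parser.parse_args(["--node_rank", "0"])
print(args.node_rank)  # 0
```

So the fix is typically to make sure the variable feeding `--node_rank` is actually set (or to pass an explicit integer) before the script is invoked.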
What is this method?
I have already learned the hardware requirements from the issues. I would also like to know how long it takes to fine-tune the full LLM versus fine-tuning with LoRA.
Local chat code:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer

path = "OpenGVLab/InternVL-Chat-Chinese-V1-2"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto').eval()
tokenizer...
```

`internvl_chat/shell/internlm2_20b_dynamic/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh` — how many GPUs are needed for fine-tuning? I noticed that for the 1-2 version: "Note: fine-tuning the full LLM needs 16 A100 80G GPUs, and fine-tuning the LoRA needs 2..."
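For reference, the repo's fine-tuning shells are typically driven by environment variables set on the command line. A hypothetical invocation — the variable names `GPUS` and `PER_DEVICE_BATCH_SIZE` are assumptions modeled on other InternVL shells, not verified against this exact file:

```shell
# Hypothetical invocation; GPUS / PER_DEVICE_BATCH_SIZE are assumed
# variable names -- check the header of the .sh file for the actual ones.
GPUS=16 PER_DEVICE_BATCH_SIZE=4 \
  sh shell/internlm2_20b_dynamic/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh
```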