CLeaf_873981784

14 issues from CLeaf_873981784

What are the specific parameter settings?

Is there any pre-trained model for fine-tuning? If I want to train on my own dataset, what should I do, and which file should I start from? Thank you for your amazing work!

![image](https://user-images.githubusercontent.com/31176427/232438462-dd327141-a02b-4405-9ec7-da675c9fe43f.png) Following this step, I wanted to try the toy dataset, but I got an error: `launch.py: error: argument --node_rank: invalid int value: ''` ![image](https://user-images.githubusercontent.com/31176427/232438523-f338f5e0-3f0b-4304-b2b8-f07ab28db4f1.png)
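For context on the error above: `--node_rank` is parsed as an integer, so passing an empty string (for example, from an unset shell variable such as `$NODE_RANK` expanding to nothing) fails in exactly this way. A minimal sketch, assuming the launcher defines a standard `argparse` integer argument (the parser below is illustrative, not the actual `launch.py`):

```python
import argparse

# Illustrative parser; the real launch.py defines its own arguments.
parser = argparse.ArgumentParser()
parser.add_argument("--node_rank", type=int, default=0)

# Passing a concrete integer works:
args = parser.parse_args(["--node_rank", "0"])
print(args.node_rank)  # 0

# Passing an empty string (e.g. an unset $NODE_RANK) reproduces the error:
try:
    parser.parse_args(["--node_rank", ""])
except SystemExit:
    print("argparse rejected the empty value and exited")
```

On a single machine, setting `NODE_RANK=0` (or passing `--node_rank 0` explicitly) before launching typically avoids this.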

I have already learned the hardware requirements from the issues. ![image](https://github.com/OpenGVLab/InternVL/assets/31176427/59ea78ef-0e2b-4b83-87da-788b7382f186) ![image](https://github.com/OpenGVLab/InternVL/assets/31176427/8f009c79-3d0f-4204-9b07-c1d7e15ffdc9) I would also like to know how long fine-tuning the full LLM and fine-tuning with LoRA each take.

local chat code:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor
from transformers import AutoTokenizer

path = "OpenGVLab/InternVL-Chat-Chinese-V1-2"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
    device_map='auto').eval()
tokenizer...
```
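The snippet above is truncated at the tokenizer setup. As a rough sketch of what typically follows, the input image is preprocessed into `pixel_values` that the chat interface consumes. The processor settings below are assumptions (a real checkpoint ships its own preprocessing config via `CLIPImageProcessor.from_pretrained(path)`); a locally constructed processor is used here so the sketch runs without downloading weights:

```python
from PIL import Image
from transformers import CLIPImageProcessor

# Assumed preprocessing settings (448x448 input); the actual checkpoint
# provides its own config via CLIPImageProcessor.from_pretrained(path).
image_processor = CLIPImageProcessor(
    size={"shortest_edge": 448},
    crop_size={"height": 448, "width": 448},
)

# Placeholder image standing in for a real input file.
image = Image.new("RGB", (640, 480))
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
print(pixel_values.shape)  # torch.Size([1, 3, 448, 448])
```

A tokenizer loaded with `AutoTokenizer.from_pretrained(path, trust_remote_code=True)` and these `pixel_values` (cast to `torch.bfloat16` and moved to the GPU) would then be passed to the model's chat method; the exact call signature depends on the checkpoint's remote code, so check the model card rather than this sketch.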

![image](https://user-images.githubusercontent.com/31176427/199943192-15efb29e-f6b1-4e91-85db-f8b78a5249c5.png)

internvl_chat/shell/internlm2_20b_dynamic/internvl_chat_v1_5_internlm2_20b_dynamic_res_finetune.sh: how many GPUs are needed for fine-tuning? I noticed that for the 1-2 version: Note: fine-tuning the full LLM needs 16 A100 80G GPUs, and fine-tuning with LoRA needs 2...
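A back-of-the-envelope estimate of why full fine-tuning needs so many GPUs. This is a rough sketch assuming mixed-precision Adam-style training at roughly 16 bytes per trainable parameter (bf16 weights and gradients, fp32 master weights, and two fp32 optimizer moments), ignoring activations and the vision encoder:

```python
# Rough memory estimate for full fine-tuning of a ~20B-parameter LLM.
# Assumption: ~16 bytes of training state per trainable parameter
# (2 bf16 weights + 2 bf16 grads + 4 fp32 master + 4 + 4 Adam moments).
params = 20e9
bytes_per_param = 16
total_gb = params * bytes_per_param / 1e9
print(f"~{total_gb:.0f} GB of training state")       # ~320 GB
gpus_80gb = total_gb / 80
print(f"~{gpus_80gb:.0f}x A100 80G just for state")  # ~4
```

Activation memory, sharding overhead, and the vision tower explain the gap between this floor and the 16 GPUs recommended in practice; LoRA trains only small low-rank adapter matrices while keeping the base weights frozen, which is why far fewer GPUs suffice.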