
Help with Finetune

EmailScraper opened this issue 1 year ago · 5 comments

Hi, team. I fine-tuned with 15 <image, Q&A> pairs of one document type (12 pairs for training, 3 for testing). After 500 epochs, both the training loss and the eval loss are nearly zero, as shown below:

{'loss': 0.0, 'grad_norm': 7.342097887885757e-06, 'learning_rate': 1e-06, 'epoch': 500.0}
{'eval_loss': 7.8905128475526e-07, 'eval_runtime': 2.179, 'eval_samples_per_second': 1.377, 'eval_steps_per_second': 0.459, 'epoch': 500.0}  

But even when I pick an image from the training set and ask the same questions, I still get results that differ from the training targets, as shown below:

  • Training Data
    {
      "role": "user",
      "content": "What is the address breakdown city?"
    },
    {
      "role": "assistant",
      "content": "CALWELL"
    },
    {
      "role": "user",
      "content": "What is the address breakdown street?"
    },
    {
      "role": "assistant",
      "content": "65 MARTLEY CCT"
    },
    {
      "role": "user",
      "content": "What is the address?"
    },
    {
      "role": "assistant",
      "content": "65 MARTLEY CCT, CALWELL, ACT, 2905"
    },
  • Model Output
<User>: What is the address breakdown city?
<Assistant>: 65 MARTLEY CCT
<User>: What is the address breakdown street?
<Assistant>: 65 MARTLEY CCT, suburb CALWELL ACT 2905.
<User>: What is the address?
<Assistant>: 65 MARTLEY CCT CALWELL ACT 2905

I used batch_size 8 and model_max_length 1024.

I loaded the model as follows.

import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

device = 'cuda'  # assumption: single-GPU setup
model_path = '/home/paperspace/.../OpenBmb/MiniCPM-V/finetune/output/output_minicpmv2_lora/checkpoint-1000'
model = AutoPeftModelForCausalLM.from_pretrained(model_path, device_map="auto", trust_remote_code=True).to(dtype=torch.float16)
model = model.to(device=device)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model.eval()
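
One inference-side check worth noting here: the repo's demo code calls model.chat with sampling=True, which decodes stochastically, so even a heavily overfit model can return answers that vary between runs and differ from the memorized training targets. Below is a minimal deterministic-inference sketch, assuming the MiniCPM-V 2.0 chat signature from the repo README (the argument names and the tuple return are taken from that example and may differ across model versions; the image path is hypothetical):

from PIL import Image

image = Image.open('train_doc.png').convert('RGB')  # hypothetical: an image from the training set
msgs = [{'role': 'user', 'content': 'What is the address breakdown city?'}]

# sampling=False switches to deterministic decoding, so repeated runs return
# the same answer; the demo's sampling=True draws tokens stochastically.
res, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=False,
)
print(res)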

[TensorBoard screenshot of the loss curves]

Any help getting the fine-tuning to produce correct results would be appreciated.

EmailScraper · Jun 07 '24 10:06

Following this thread.

yoookoso · Jun 07 '24 13:06

(Quoting EmailScraper's original post above.)

Can you give me the code?

WAILMAGHRANE · Jun 07 '24 19:06

(Quoting EmailScraper's original post above.)

@EmailScraper Hello, may I ask whether you trained with images? When I fine-tuned, it kept showing "loading the data". Have you encountered this? Thank you. [screenshot]

JinQiangWang2021 · Jun 09 '24 04:06

I think it may be that your training dataset is too small, and 500 training epochs is far too many, so the model is severely overfitting.
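
For what it's worth, here is a minimal sketch of the more conservative schedule suggested above, assuming the Hugging Face Trainer stack that the finetune scripts wrap (all values are illustrative, not the repo's defaults):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir='output/output_minicpmv2_lora',
    num_train_epochs=10,                 # far fewer passes than 500 over 12 samples
    per_device_train_batch_size=8,
    learning_rate=1e-6,
    evaluation_strategy='epoch',         # evaluate every epoch...
    save_strategy='epoch',               # ...and checkpoint on the same schedule
    load_best_model_at_end=True,         # keep the checkpoint with the lowest eval_loss
    metric_for_best_model='eval_loss',
    greater_is_better=False,
)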

LDLINGLINGLING · Jun 17 '24 08:06

Hi @LDLINGLINGLING, thanks for your reply. By the way, even when I pick an image from the training set and ask the same questions, I still get results that differ from the training targets. Is that possible?

EmailScraper · Jun 18 '24 01:06