InternVL3.5 LoRA finetune code: will you release it?
📚 The doc issue
Will this be available?
Suggest a potential alternative/fix
No response
I have the same question.
Same question
Did you find which script is the full-parameter fine-tuning one?
https://github.com/OpenGVLab/InternVL/tree/main/internvl_chat_gpt_oss/shell/internvl3_5_qwen3
Look for the `sft` scripts there.
Thank you!
Hello guys! I made a clean, reproducible notebook for fine-tuning InternVL-3.5 on MVBench using multiple PEFT strategies (LoRA, QLoRA, and full fine-tuning). You can easily adapt it to your own tasks. Link: https://github.com/Arseny5/InternVL-3.5-QLoRA-Fine-tune
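For anyone who just wants the gist before opening the notebook, a minimal QLoRA-style setup looks roughly like the sketch below. This is not the notebook's exact code: the model ID and the target-module names are assumptions (typical for Qwen-style LLM backbones), so check them against the actual checkpoint.

```python
# Minimal QLoRA sketch; model ID and target modules are assumptions, verify before use.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "OpenGVLab/InternVL3_5-8B"  # hypothetical ID; check the actual HF repo name

# Load the base model in 4-bit NF4 to keep memory low
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModel.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)

# Prepare the quantized model for training (casts norms, enables checkpointing)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters to the LLM attention/MLP projections only
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
model.language_model = get_peft_model(model.language_model, lora_config)
model.language_model.print_trainable_parameters()
```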
[rank0]: AttributeError: 'InternVLChatModel' object has no attribute 'wrap_backbone_lora'
Hi lads, I also ran into this problem: `AttributeError: 'InternVLChatModel' object has no attribute 'wrap_backbone_lora'`.
Here is what I did (with help from Gemini):
1. In `internvl_chat_finetune.py`, add `from peft import LoraConfig, get_peft_model`.
2. Replace this:

```python
if model_args.use_backbone_lora:
    model.wrap_backbone_lora(r=model_args.use_backbone_lora, lora_alpha=2 * model_args.use_backbone_lora)
    model.config.use_backbone_lora = model_args.use_backbone_lora
if model_args.use_llm_lora:
    model.wrap_llm_lora(r=model_args.use_llm_lora, lora_alpha=2 * model_args.use_llm_lora)
    model.config.use_llm_lora = model_args.use_llm_lora
```
with this:

```python
def manual_wrap_lora(model, args):
    """
    Manually wraps the vision backbone and/or LLM with LoRA
    if the model class is missing the built-in methods.
    """
    from peft import LoraConfig, get_peft_model

    # 1. Wrap Vision Backbone
    if args.use_backbone_lora > 0:
        print(f"Applying LoRA to Vision Backbone (r={args.use_backbone_lora})...")
        # Standard target modules for InternViT / ViT based backbones
        vision_target_modules = ['attn.qkv', 'attn.proj', 'mlp.fc1', 'mlp.fc2']
        vision_config = LoraConfig(
            r=args.use_backbone_lora,
            lora_alpha=2 * args.use_backbone_lora,
            target_modules=vision_target_modules,
            lora_dropout=0.05,
            bias='none',
        )
        # Apply PEFT specifically to the vision_model submodule
        model.vision_model = get_peft_model(model.vision_model, vision_config)
        model.vision_model.print_trainable_parameters()
        model.config.use_backbone_lora = args.use_backbone_lora

    # 2. Wrap LLM
    if args.use_llm_lora > 0:
        print(f"Applying LoRA to LLM (r={args.use_llm_lora})...")
        # Standard target modules for Llama/Qwen/InternLM style models
        llm_target_modules = ['q_proj', 'k_proj', 'v_proj', 'o_proj',
                              'gate_proj', 'up_proj', 'down_proj']
        llm_config = LoraConfig(
            r=args.use_llm_lora,
            lora_alpha=2 * args.use_llm_lora,
            target_modules=llm_target_modules,
            lora_dropout=0.05,
            bias='none',
            task_type="CAUSAL_LM"
        )
        # Apply PEFT specifically to the language_model submodule
        model.language_model = get_peft_model(model.language_model, llm_config)
        model.language_model.print_trainable_parameters()
        model.config.use_llm_lora = args.use_llm_lora

    return model

if hasattr(model, 'wrap_backbone_lora'):
    # If the model class supports it natively
    if model_args.use_backbone_lora:
        model.wrap_backbone_lora(r=model_args.use_backbone_lora, lora_alpha=2 * model_args.use_backbone_lora)
        model.config.use_backbone_lora = model_args.use_backbone_lora
    if model_args.use_llm_lora:
        model.wrap_llm_lora(r=model_args.use_llm_lora, lora_alpha=2 * model_args.use_llm_lora)
        model.config.use_llm_lora = model_args.use_llm_lora
else:
    # Fallback to manual wrapper for InternVL 3.5 / mismatched versions
    manual_wrap_lora(model, model_args)
```

3. In `internvl/model/internvl_chat/modeling_internvl_chat.py`, at line 152, add this patch before `for layer in self.language_model.model.layers:`:
```python
llm_part = self.language_model.model
if hasattr(llm_part, 'layers'):
    layers = llm_part.layers
elif hasattr(llm_part, 'model') and hasattr(llm_part.model, 'layers'):
    layers = llm_part.model.layers
else:
    raise AttributeError(f"Could not find layers in {type(llm_part)}")

for layer in layers:
```
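One follow-up note on the manual wrapping in step 2 above: because the adapters are attached to the submodules rather than to the top-level `InternVLChatModel`, you may want to save or merge them per submodule after training. A rough sketch of one way to do that (the output path is hypothetical, and whether merging is appropriate depends on your setup):

```python
# Rough sketch: saving / merging per-submodule LoRA adapters after training.
# Assumes `model` was wrapped by the manual_wrap_lora() patch above, so
# model.language_model (and optionally model.vision_model) are PeftModel instances.
from peft import PeftModel

output_dir = "work_dirs/internvl3_5_lora"  # hypothetical path

# Option A: keep the adapters as separate LoRA checkpoints
if isinstance(model.language_model, PeftModel):
    model.language_model.save_pretrained(f"{output_dir}/llm_lora")
if isinstance(getattr(model, "vision_model", None), PeftModel):
    model.vision_model.save_pretrained(f"{output_dir}/vit_lora")

# Option B: merge the adapters back into the base weights so the checkpoint
# loads like a plain InternVLChatModel (no peft dependency at inference time)
if isinstance(model.language_model, PeftModel):
    model.language_model = model.language_model.merge_and_unload()
if isinstance(getattr(model, "vision_model", None), PeftModel):
    model.vision_model = model.vision_model.merge_and_unload()
model.save_pretrained(output_dir)
```

Treat this as a starting point rather than the official save path; verify the merged checkpoint loads and evaluates correctly before discarding the adapters.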