
How to use finetuned model

zyssyz123 opened this issue 1 year ago · 1 comment

System Info / 系統信息

Everything is normal.

Information / 问题信息

  • [ ] The official example scripts / 官方的示例脚本
  • [ ] My own modified scripts / 我自己修改的脚本和任务

Reproduction / 复现过程

Everything is normal.

Expected behavior / 期待表现

I successfully fine-tuned with SAT and obtained a .pt file, but how do I use this .pt file? The part of the documentation shown in the screenshot doesn't explain this very clearly. Is there a parameter I can use to specify the path to this new .pt file so that it gets used? (screenshot of the documentation attached)

zyssyz123 · Sep 05 '24 01:09

To use the fine-tuned model, you need to modify the load entry in the CogVideo/sat/configs/inference.yaml file. The --base argument should include the same config file you used for fine-tuning, followed by inference.yaml.

run_cmd="$environs python sample_video.py --base /configs/cogvideox_<what_you_want>_lora.yaml /configs/inference.yaml --seed 1024"

Example:

run_cmd="$environs python sample_video.py --base /CogVideo/sat/configs/cogvideox_2b_lora.yaml /CogVideo/sat/configs/inference.yaml"
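
For context, sample_video.py is usually launched through sat/inference.sh, which builds this run_cmd string and then executes it. A minimal sketch of such a wrapper is shown below; the environs value is an assumption for a single-process run, so check your own inference.sh for the actual settings:

environs="WORLD_SIZE=1 RANK=0 LOCAL_RANK=0 LOCAL_WORLD_SIZE=1"  # assumed single-GPU settings

run_cmd="$environs python sample_video.py --base /CogVideo/sat/configs/cogvideox_2b_lora.yaml /CogVideo/sat/configs/inference.yaml --seed 1024"

echo ${run_cmd}
eval ${run_cmd}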

inference.yaml

args:
  latent_channels: 16
  mode: inference
  load: "{your CogVideoX SAT folder}/transformer" # This is for Full model without lora adapter
  # load: "{your lora folder} such as zRzRzRzRzRzRzR/lora-disney-08-20-13-28" # This is for the model fine-tuned with a LoRA adapter

  batch_size: 1
  input_type: txt
  input_file: configs/test.txt
  sampling_num_frames: 13  # Must be 13, 11 or 9
  sampling_fps: 8
  fp16: True # For CogVideoX-2B
#  bf16: True # For CogVideoX-5B
  output_dir: outputs/
  force_inference: True
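
If you fine-tuned with LoRA, load should point at the checkpoint folder that SAT wrote during training, not directly at the .pt file inside it. The layout below is only an illustration of a typical SwissArmyTransformer checkpoint directory; the folder and iteration names are hypothetical and will differ in your run:

ckpts/cogvideox-2b-lora-09-05-12-00/    # experiment folder named in the training config (hypothetical)
├── 1000/                               # iteration subfolder
│   └── mp_rank_00_model_states.pt      # the .pt file produced by fine-tuning
└── latest                              # records which iteration to load

With a layout like this, the corresponding line in inference.yaml would be:

  load: "ckpts/cogvideox-2b-lora-09-05-12-00"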


KihongK · Sep 10 '24 09:09