tianguang
Hello, thanks for sharing the wonderful code! Could you please tell me how to configure the parameters in train.py to reproduce the results of the published model, or...
1. Why is the config in 224_v2_llama2_video_stage_3.yaml set to llama_model: "meta-llama/Meta-Llama-3-8B-Instruct", while in stage_2 it is "meta-llama/Llama-2-7b-chat-hf"?
2. Why is the task "image_text_pretrain" instead of "finetuning" in stage 3?
Thanks.
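For clarity, the mismatch being asked about looks roughly like the sketch below. Only the llama_model and task values are taken from the question; the stage-2 filename and the flat layout are assumptions, not the repo's actual YAML nesting.

```yaml
# Stage 2 config (filename assumed, e.g. 224_v2_llama2_video_stage_2.yaml):
llama_model: "meta-llama/Llama-2-7b-chat-hf"

# Stage 3 config (224_v2_llama2_video_stage_3.yaml, as quoted above):
llama_model: "meta-llama/Meta-Llama-3-8B-Instruct"
task: image_text_pretrain   # rather than a finetuning task
```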
When training reaches save_steps, the above error is reported.
The figure shows the debug output of conductor.py: the predicted experts always contain ***, so only a random selection can be made each time.