MiniGPT-4
The memory requirement for finetuning
Hi guys:
I am trying to reproduce the finetuning (stage 2) following the instructions, using the command: " torchrun --nproc-per-node NUM_GPU train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml "
Although I can run demo.py without OOM, running the finetuning with the official dataset results in OOM on my machine (Colab). Just wondering if anyone has relevant experience.
Thanks!
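For context, a back-of-the-envelope estimate (a sketch, assuming the Vicuna-13B backbone loaded in fp16; exact numbers depend on your config and precision settings) suggests the frozen LLM weights alone already exceed a typical Colab GPU, even before activations or optimizer state:

```python
# Rough memory estimate for MiniGPT-4 finetuning (a sketch, not exact).
# Assumption: Vicuna-13B backbone in fp16 (2 bytes per parameter);
# the trainable projection layer is tiny, so the frozen LLM weights
# dominate GPU memory.

def weights_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate GPU memory (GB) needed for model weights alone."""
    return n_params * bytes_per_param / 1e9

llm = weights_gb(13e9)   # frozen 13B LLM weights in fp16
colab_gpu = 15           # approx. memory of a typical Colab T4, in GB
print(f"weights: {llm:.0f} GB vs. Colab GPU: ~{colab_gpu} GB")
# Weights alone (~26 GB) already exceed a T4 before counting
# activations, gradients, or optimizer state.
```

This is why inference with a quantized or offloaded demo can fit while full-precision finetuning does not; lowering the batch size in the yaml or using a smaller backbone are the usual first things to try.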
Same question here.
Maybe this is helpful: #164