Few-Shot-Patch-Based-Training
Training time issue
Hi, I am trying to train the model with the provided 'Zuzka2_train' dataset on a single V100 GPU. I used the same command as in the README: "python train.py --config "_config/reference_P.yaml" --data_root "data/Zuzka2_train" --log_interval 1000 --log_folder logs_reference_P"
It has been running for over an hour and is still going. I checked the number of epochs in reference_P.yaml and it is set to 50,000,000. Should I keep this value in my environment, or do I need to use more V100 GPUs?
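For context, this is how I inspected the config — just a minimal snippet that loads the YAML and prints its fields so I could find the one set to 50,000,000 (I am not sure which field name the training loop actually reads):

```python
import yaml

# Load the training config shipped with the repo and print its contents,
# so I can find the field that is set to 50,000,000 epochs.
with open("_config/reference_P.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(key, "->", value)
```

My guess is that I should either lower that epoch value and restart, or simply stop the run manually once the results look good and use the latest checkpoint (I assume checkpoints are written periodically, maybe every --log_interval iterations?) — is that the intended workflow?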
+) How long does it take to train with the dataset?
I also have two more issues.