How can I tune the FGVC or VTAB datasets as mentioned in Table 1?
Thanks for the great work. I am trying to rerun some of the experiments you report in Table 1. I have already tuned the FGVC datasets separately; how can I tune FGVC as a whole, as in Table 1? Same question for VTAB-1k.
Plus: I am reproducing some of your results with a command like this (using FGVC Stanford Cars as an example):

```bash
CUDA_VISIBLE_DEVICES=1 PORT=20000 python train.py --config-file /vpt/configs/prompt/cars.yaml MODEL.TRANSFER_TYPE "prompt" MODEL.PROMPT.DEEP "True" MODEL.PROMPT.NUM_TOKENS "10" MODEL.PROMPT.DROPOUT "0.0"
```
Could you provide the relevant config lines, or did I miss something important?
Thanks.
BTW, it seems there is a typo in demo.ipynb / tune*.py: when tuning vtab-caltech101, the config-file is still cub.yaml. Is there a reason for that? I am getting confused about it.
Wait, I think I partially understand: are the FGVC results the average accuracy across multiple datasets (each dataset tuned individually)? Sorry for my misunderstanding. But I still have the question about the typo.
Thanks
Hi @ChengHan111, you are correct: the results in Table 1 are the averaged accuracy across multiple datasets.
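In case it helps, here is a minimal sketch of how you could reproduce that average: run each FGVC config separately and average the final test accuracies yourself. The config file names other than cars.yaml are assumptions, so check configs/prompt/ for the actual ones; also note that the paper selects hyperparameters per dataset via grid search, so the fixed NUM_TOKENS below is just a placeholder.

```bash
# Sketch: run prompt tuning on each FGVC dataset in turn, then average the
# reported test accuracies by hand. Config names besides cars.yaml are
# assumptions -- verify against configs/prompt/ in the repo.
for cfg in cub.yaml cars.yaml dogs.yaml flowers.yaml nabirds.yaml; do
    CUDA_VISIBLE_DEVICES=1 PORT=20000 python train.py \
        --config-file /vpt/configs/prompt/${cfg} \
        MODEL.TRANSFER_TYPE "prompt" \
        MODEL.PROMPT.DEEP "True" \
        MODEL.PROMPT.NUM_TOKENS "10" \
        MODEL.PROMPT.DROPOUT "0.0"
done
```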
As for the typo: we use cub.yaml as a base and overwrite the data information (`DATA.NAME`, `DATA.NUMBER_CLASSES`, `DATA.DATAPATH`) on the command line.
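For example, something along these lines (a sketch only; the vtab-caltech101 values are illustrative guesses, so verify `DATA.NAME` and the class count against your VTAB setup):

```bash
# Reuse cub.yaml as the base config and override the dataset fields on the
# command line. The DATA.* values below are illustrative, not verified.
python train.py \
    --config-file /vpt/configs/prompt/cub.yaml \
    MODEL.TRANSFER_TYPE "prompt" \
    DATA.NAME "vtab-caltech101" \
    DATA.NUMBER_CLASSES "102" \
    DATA.DATAPATH "/path/to/tfds"
```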
Let me know if you have more questions!
Thanks for your response! I still have ~~four~~ TWO doubts; I am wondering if you could help ;)
- ~~As stated in your paper (excellent work btw!), you use grid search to find the best parameters. I am wondering whether the reported numbers are the best combination VPT can currently reach, and whether they come from the last epoch or the best epoch (I think the best epoch). Checked tune_vtab: it uses the best epoch among all epochs. Thanks!~~
- ~~Caltech101 on TFDS seems completely dead; I tried multiple ways to get through the TensorFlow download-and-prepare step and unfortunately failed. I am wondering if you could kindly provide a compiled (tfrecord) version of Resisc45 for reference! The other datasets work perfectly for me :)~~
- (Solved! Config error, my bad XD) ~~Even when I successfully prepare some of the datasets (diabetic_retinopathy_detection, for example), sometimes the program just exits with "Killed" and no other debugging info. I am wondering if it is an OOM problem. Is it possible to get some info on the memory required to run some of the datasets? I am rerunning on A100s (definitely not a GPU limitation).~~
- ~~Can I ask for the Keras and TensorFlow versions of your setup? They do not always seem to match :)~~ All solved. Thanks!
@ChengHan111 I'm facing the OOM problem with the datasets that have large test sets (retino, camelyon, ...). Would you kindly share your config and the exact error you saw? My batch size is 128.
I'm running on a V100; even with RAM set to 64G, it still gives me the OOM error.
Thanks
@tsly123 Hi, you probably need more RAM; I set it to 80G by default and the error rarely happens. I do suggest allocating more RAM, since I still hit OOM occasionally.
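If more RAM isn't available, one workaround worth trying is shrinking the batch size on the command line (a sketch only; I'm assuming `DATA.BATCH_SIZE` is the relevant config key, so double-check it against the repo's config definitions):

```bash
# Workaround sketch for RAM OOM on large test sets: shrink the batch size.
# DATA.BATCH_SIZE is assumed to be the relevant yacs key -- verify it in
# the repo's config definitions before relying on this.
python train.py \
    --config-file /vpt/configs/prompt/cub.yaml \
    MODEL.TRANSFER_TYPE "prompt" \
    DATA.BATCH_SIZE "32"
```

This mainly helps if loader memory scales with the batch size; if the whole test split gets materialized in RAM, more memory may be the only fix, as noted above.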