peterpaniff
> Have you trained on COCO? I followed the author's experiment settings, but AP50 on test-dev is only 31.6%, while the paper reports 40.4%. I am very confused...
Have you trained the model on COCO? Let's chat.
> I have the same problem. I trained on VOC0712 and tested on VOC07, but mAP is only 55.1%. Have you trained the model on COCO? Let's chat.
It contains DDB (depth-wise dense block) subnetworks, similar to DenseNet; the many concatenation operations consume a large amount of GPU memory, which slows down training. A rough sketch of why is shown below.
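Here is a minimal sketch of a DenseNet-style dense block (assuming PyTorch, and using a plain dense block rather than the exact DDB variant, whose layer sizes here are illustrative). Each layer's input is the concatenation of all earlier feature maps, so the channel count, and the activations that must be kept for backprop, grow with depth:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Illustrative dense block: every layer sees the concatenation of all previous outputs."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Concatenate everything produced so far -> the tensor keeps growing,
            # and each intermediate concatenation is held for the backward pass.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=64, growth_rate=32, num_layers=6)
y = block(torch.randn(1, 64, 56, 56))  # output has 64 + 6*32 = 256 channels
```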
> # step 2
>
> ```python
> from openprompt.plms import load_plm
> plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")
> ```
>
> How do I modify this code to load a local model?

Search for the keyword bert-base-cased on [huggingface](https://huggingface.co/), open https://huggingface.co/bert-base-cased/tree/main, and download the files into a directory of your choice, for example /home/123/bert_model/ (adjust this path as needed). Then replace "bert-base-cased" with that path.
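For example (the local directory below is only an illustration; point it at wherever you saved the downloaded files):

```python
from openprompt.plms import load_plm

# Pass the local directory instead of the hub name "bert-base-cased".
# /home/123/bert_model/ is an example path; it should contain the files
# downloaded from the model page (config, vocab, weights).
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "/home/123/bert_model/")
```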