How can I visualize the attention mentioned in BAM and PAM? Which scripts do you use? Thanks a lot!
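In case it helps clarify what I'm after, here is a rough sketch of how I would try it myself with a PyTorch forward hook; `model.pam.softmax` and the attention-map shape are my assumptions about the code, not its actual API:

```python
import torch
import matplotlib.pyplot as plt

attn_maps = {}

def save_attn(name):
    def hook(module, inputs, output):
        attn_maps[name] = output.detach().cpu()
    return hook

# `model` is the trained network; `model.pam.softmax` is a guess at
# where the position-attention map is computed -- adjust to the repo.
handle = model.pam.softmax.register_forward_hook(save_attn("pam"))

with torch.no_grad():
    model(image.unsqueeze(0))  # `image` is a preprocessed (C, H, W) tensor
handle.remove()

# A PAM map is typically (B, HW, HW); take one query position and
# reshape its attention row back to (H, W) for display.
attn = attn_maps["pam"][0]
h = w = int(attn.shape[-1] ** 0.5)  # assumes a square feature map
plt.imshow(attn[h * w // 2].reshape(h, w).numpy(), cmap="jet")
plt.colorbar()
plt.savefig("pam_attention.png")
```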
I notice that the only generator the project uses is LSTMGenerator; I wonder whether a basic Transformer could serve as the generator instead? Hoping for your response, thanks!
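To make the question concrete, here is a minimal sketch of what I mean by a Transformer generator, written to mimic the interface I imagine LSTMGenerator has (the constructor arguments and the per-step log-prob output are my assumptions, not the repo's API):

```python
import torch
import torch.nn as nn

class TransformerGenerator(nn.Module):
    """Sketch of a drop-in Transformer replacement for LSTMGenerator."""
    def __init__(self, vocab_size, d_model=256, nhead=4, num_layers=3, max_len=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (B, T) token ids
        T = tokens.size(1)
        x = self.embed(tokens) + self.pos[:, :T]
        # Causal mask so each position only attends to the past, as
        # required for autoregressive sampling in a GAN generator.
        mask = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=tokens.device), 1
        )
        h = self.decoder(x, mask=mask)
        return torch.log_softmax(self.out(h), dim=-1)  # per-step log-probs
```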
Hello, I'd like to ask: what does avg stand for during model training? Does it mean the avg loss, the avg reward, or the avg penalty? And is a larger avg value better?
Have you read the code for SoftTeacher? That code seems to have nothing to do with ddp_train_gans, yet it still uses multi-GPU training for the UDA model. https://github.com/lhoyer/DAFormer/issues/9
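For context, this is the kind of minimal PyTorch DDP skeleton I mean by multi-GPU training; it is a generic sketch with placeholder `build_model`/`dataset`, not the repo's actual ddp_train_gans:

```python
# Launch with: torchrun --nproc_per_node=NUM_GPUS train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

def main():
    dist.init_process_group("nccl")           # torchrun sets the env vars
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = build_model().cuda(rank)          # `build_model` is a placeholder
    model = DDP(model, device_ids=[rank])

    # DistributedSampler shards the dataset across ranks so each GPU
    # sees a different slice; `dataset` is a placeholder.
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=8, sampler=sampler)

    for epoch in range(num_epochs):           # `num_epochs` is a placeholder
        sampler.set_epoch(epoch)              # reshuffle shards each epoch
        for batch in loader:
            ...  # forward/backward; DDP all-reduces gradients automatically

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```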
Could you share the training data used in /nlp08_huggingface_transformers_albert.ipynb? Thanks a lot; hoping for your response.
### Prerequisite

- [X] I have searched [the existing and past issues](https://github.com/open-mmlab/mmyolo/issues) but cannot get the expected help.
- [X] I have read the [FAQ documentation](https://mmyolo.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
Why does the language encoder use CLIP rather than BERT as in Grounding DINO? My question is, why not implement T-Rex2 along the lines of Grounding DINO?
Dear author, I have another question for you: in the Visual Prompt Encoder, does it stack three deformable cross-attention layers and then attach one self-attention layer and one FFN? Or...
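To make alternative (a) concrete, here is a structural sketch of what I mean; `DeformableCrossAttentionStandIn` is a placeholder (plain cross-attention substituted for real deformable attention) and none of this is T-Rex2's actual code:

```python
import torch.nn as nn

class DeformableCrossAttentionStandIn(nn.Module):
    """Placeholder: real deformable attention samples image features at
    learned offsets; plain cross-attention is used here only to keep
    the sketch self-contained."""
    def __init__(self, d_model, nhead):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)

    def forward(self, queries, feats):
        return self.attn(queries, feats, feats)[0]

class VisualPromptEncoderA(nn.Module):
    """Interpretation (a): three stacked deformable cross-attention
    layers, then one self-attention layer and one FFN."""
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.cross_layers = nn.ModuleList(
            [DeformableCrossAttentionStandIn(d_model, nhead) for _ in range(3)]
        )
        self.self_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.ReLU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, prompt_queries, image_features):
        q = prompt_queries
        for layer in self.cross_layers:     # 3x (deformable) cross-attention
            q = q + layer(q, image_features)
        q = q + self.self_attn(q, q, q)[0]  # one self-attention
        return q + self.ffn(q)              # one FFN
```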