Zhe Chen
> here is my command:
>
> ```
> (vit_adapter) lidexuan@aa-SYS-4029GP-TRT:/data/lidexuan/ViT-Adapter/detection$ sh dist_test.sh configs/htc++/htc++_beit_adapter_large_fpn_3x_coco.py htc++_beit_adapter_large_fpn_3x_coco.pth.tar 8 --eval bbox segm
> dist_test.sh: 8: dist_test.sh: Bad substitution
> ```
>
> any idea?

Hi, you can use `bash` instead of `sh` to run the script. On Ubuntu, `sh` points to `dash`, which does not support the bash-only parameter expansion used in `dist_test.sh`, hence the "Bad substitution" error.
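As a minimal reproduction of why `sh` fails here, assuming `dist_test.sh` slices the positional parameters with `${@:N}` (a common pattern in mmdetection-style launch scripts; the exact line in the script may differ):

```shell
# Bash supports slicing the positional parameters; dash (Ubuntu's /bin/sh)
# rejects it with "Bad substitution".
set -- config.py checkpoint.pth 8 --eval bbox segm
echo "${@:4}"   # in bash, prints everything from the 4th argument onward
```

Running the same two lines under `dash` fails, which is why launching the script with `bash` fixes the error without changing the script.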
Hi, you can set `with_cp=True` (in the backbone and head) to save GPU memory. For example, with a 512x512 input image, BEiT-Adapter-L-Mask2Former requires about 15G of memory with 1 image...
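As a sketch of where the flag goes in an mmdet/mmseg-style config file (the `type` values and surrounding field names below are illustrative placeholders, not copied from the repo's configs):

```python
# Hypothetical config fragment: enable gradient checkpointing (with_cp) to
# trade recomputation for GPU memory in both the backbone and the head.
model = dict(
    backbone=dict(
        type='BEiTAdapter',
        with_cp=True,   # checkpoint the backbone blocks
        # ... other backbone args unchanged
    ),
    decode_head=dict(
        type='Mask2FormerHead',
        with_cp=True,   # checkpoint the head, if it exposes this flag
        # ... other head args unchanged
    ),
)
```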
> @czczup Hi, I would like to save GPU memory, so I changed the crop size from 512 to 256. When training the VOC2007 dataset, another issue occurs: an AssertionError at the evaluation iteration (160000/20000). And I...
> @czczup Thanks for your reply. I used to change the crop size in the config file and the dataset settings. But I cannot find the `img_size` in the backbone of vit_adapter.py. If I need...
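For reference, changing the crop size typically means keeping the data pipeline and the backbone's expected input size consistent. A hedged sketch (field names depend on the specific config, and not every backbone variant exposes an `img_size` argument):

```python
# Hypothetical config fragment: shrink the crop from 512 to 256 in both
# the training pipeline and the backbone's expected input size.
crop_size = (256, 256)
train_pipeline = [
    # ... earlier transforms unchanged
    dict(type='RandomCrop', crop_size=crop_size),
]
model = dict(
    backbone=dict(
        img_size=256,  # only if the backbone accepts this argument
    ),
)
```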
Hi, what is the version of your CUDA? You can run `nvcc -V` to print the CUDA version information.
@ccqlx You could run the [test.py](https://github.com/czczup/ViT-Adapter/blob/main/detection/ops/test.py) to check if deformable attention is installed successfully. Run it like this:

```
cd detection/ops/
python test.py
```
I think the deformable attention was not compiled successfully. You can try this: replace line 11

```
import MultiScaleDeformableAttention as MSDA
```

in the [ms_deform_attn_func.py](https://github.com/czczup/ViT-Adapter/blob/main/detection/ops/functions/ms_deform_attn_func.py) with

```
from mmcv.ops.multi_scale_deform_attn import...
```
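The swap above is an instance of a generic "prefer the compiled extension, fall back to an alternative implementation" import pattern. A small self-contained sketch (the helper name and the commented-out wiring are illustrative, not code from the repo):

```python
import importlib


def load_first_available(module_names):
    """Import and return the first importable module from `module_names`."""
    for name in module_names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {module_names} could be imported")


# Hypothetical wiring for this issue: prefer the locally compiled op,
# otherwise fall back to mmcv's bundled implementation.
# MSDA = load_first_available(
#     ["MultiScaleDeformableAttention", "mmcv.ops.multi_scale_deform_attn"])
```

This keeps the code working on machines where the CUDA extension failed to build, at the cost of using whichever backend happens to be importable.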
I think now you can try to run inference with a checkpoint.
@Akinpzx Hi, thanks for your attention. Could you share the config?
@Akinpzx I will try running this dataset myself and get back to you a bit later.