Kai Han
@Idiom1999 How is the overall mAP? Our COCO setup, using mmdet as the framework with Pyramid ViG-S and RetinaNet as the example, is as follows: (1) replace the backbone's BN with mmdet's SyncBN; (2) img_per_gpu=4, gpus=8, lr=1.2e-4, wd=0.2, drop_path=0.15, k=15; (3) for the pretrained checkpoint, simply use the 224x224 ViG-S trained on ImageNet; for the 1333x800 detection input, just apply bicubic/bilinear interpolation directly to the relative position encodings in the checkpoint.
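Step (3) above can be sketched as follows; this is a minimal illustration, not the repo's actual loading code. The key pattern `"relative_pos"` and the `(1, C, H, W)` table layout are assumptions you should adapt to the real checkpoint:

```python
import torch
import torch.nn.functional as F

def interpolate_relative_pos(state_dict, new_hw):
    """Resize relative-position tables in a checkpoint via bicubic
    interpolation so they match a larger detection input.

    new_hw: maps a parameter name (assumed to contain "relative_pos"
    and be 4-D, shape (1, C, H, W)) to its target (H, W).
    All other entries are passed through unchanged.
    """
    out = dict(state_dict)
    for name, tensor in state_dict.items():
        if "relative_pos" in name and name in new_hw:
            h, w = new_hw[name]
            out[name] = F.interpolate(
                tensor, size=(h, w), mode="bicubic", align_corners=False
            )
    return out
```

For a bilinear variant, swap `mode="bicubic"` for `mode="bilinear"`.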
1. Run inference and get the indices of the k-NN: https://github.com/huawei-noah/Efficient-AI-Backbones/blob/f4ffbe5fa41934ae98768bc7c720faf5a661b0b4/vig_pytorch/gcn_lib/torch_vertex.py#L128
2. Draw the centroid and its k-NN nodes using any tool such as matplotlib, or simply PowerPoint.
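Step 1 boils down to a dense pairwise-distance k-NN over node features. The following is a plain NumPy sketch of that computation (a simplification of what `gcn_lib/torch_vertex.py` does on the GPU, not the repo's code); the returned indices are exactly what you would feed into a matplotlib scatter plot for step 2:

```python
import numpy as np

def knn_indices(x, k):
    """Return, for each node, the indices of its k nearest neighbors.

    x: (N, C) array of node features (e.g. patch embeddings).
    Computes the full (N, N) squared-distance matrix, excludes each
    node itself, and keeps the k closest columns per row.
    """
    d = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # (N, N) squared dists
    np.fill_diagonal(d, np.inf)  # a node is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]  # (N, k) neighbor ids
```

With these indices, drawing the graph is just `plt.scatter` over all node positions plus highlighted markers (and optional connecting lines) for one centroid and its `k` neighbors.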
Could you show more log information?
It seems you are running ResNet-101. Please run as `python -m torch.distributed.launch --nproc_per_node=1 train.py /path/to/imagenet/ --model pvig_s_224_gelu --sched cosine --epochs 300 --opt adamw -j 8 --warmup-lr 1e-6 --mixup .8 --cutmix 1.0...`
Hello, please read the README carefully, thanks! The evaluation code is included in train.py and is invoked as:
```
python train.py /path/to/imagenet/ --model pvig_s_224_gelu -b 256 --pretrain_path /path/to/pretrained/model/ --evaluate
```
We followed the [official pytorch example](https://github.com/pytorch/examples/tree/main/imagenet): move and extract the training and validation images to labeled subfolders, using [the following shell script](https://github.com/pytorch/examples/blob/main/imagenet/extract_ILSVRC.sh)
```
dir/
  train/
    ...
  val/
    n01440764/
      ILSVRC2012_val_00000293.JPEG
      ...
    ...
```
> We followed the [official pytorch example](https://github.com/pytorch/examples/tree/main/imagenet): move and extract the training and validation images to labeled subfolders, using [the following shell script](https://github.com/pytorch/examples/blob/main/imagenet/extract_ILSVRC.sh) Please read this example first. It seems...