YuanLi

38 comments of YuanLi

> @twangnh @yuanli2333:
> Hi, Wang & Li:
> I appreciate you updating the installation part in the README, and when I read the code, I...

> @yuanli2333:
> Thanks for your quick reply, but according to out_size = (n - k + 2p)/s + 1, the default s = stride...
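As a worked example of that formula (with illustrative values n = 224, k = 7, p = 2, s = 4, which are not necessarily the repo defaults): out_size = floor((224 - 7 + 2·2)/4) + 1 = floor(221/4) + 1 = 55 + 1 = 56, i.e. a 56x56 output grid.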

Hi, if you want to train our model with another image size, such as 384x384, please use:

```python
from models.t2t_vit import *
model = T2t_vit_14(img_size=384)
```
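A quick sanity check for the new resolution (a sketch assuming the usual forward signature, i.e. the model above returns classification logits):

```python
import torch

x = torch.randn(1, 3, 384, 384)  # dummy batch at the new resolution
with torch.no_grad():
    out = model(x)
print(out.shape)  # expected: [1, num_classes], e.g. [1, 1000] for ImageNet
```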

If you are trying to open the checkpoint file directly, please don't; you should use `torch.load` to load the checkpoint files.
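A minimal loading sketch (the file name is a placeholder for whichever checkpoint you downloaded):

```python
import torch

# Load the checkpoint onto CPU; the result is typically a dict of tensors
# (or a dict containing a 'state_dict' entry, depending on how it was saved).
checkpoint = torch.load('checkpoint.pth.tar', map_location='cpu')
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))  # inspect what the file actually contains
```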

> Hi, would you mind releasing the training log for T2T-ViT-t-14 trained with 8 GPUs? I tried to rerun the script for training T2T-ViT-t-14 with 8 GPUs. It gained 0.094...

Hi, thanks for noticing. For the three lite variants of T2T-ViT, each Transformer layer is 0.125G MACs. ![image](https://user-images.githubusercontent.com/33076543/116514732-00973080-a8fe-11eb-8d90-ae455bc436a7.png) The MACs of T2T-ViT-7 are 0.125*7 + 0.7, where 0.7 is for...
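Evaluating that expression as stated (just the arithmetic, not a claim about the final paper numbers): 0.125 × 7 + 0.7 = 0.875 + 0.7 ≈ 1.58G MACs.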

Hi, we double-checked the MACs of the T2T module: it should be ~0.25G. We have updated the repo and will update the paper soon.

Hi, I'm not sure why your visualization doesn't look good, but I suggest first debugging with a higher-resolution input.

Yes, you may be right; we could try changing 1e-8 to a larger value. Did you try it?

Hi, you should write a dataloader to load your own data. Or, if you want to use our repo to train, you should reorganize the directory structure of your dataset...
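If you go the custom-dataloader route, here is a minimal PyTorch sketch (the class name, paths, and labels are all placeholders, not from this repo):

```python
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class MyImageDataset(Dataset):
    """Illustrative dataset over a list of (image_path, label) pairs."""

    def __init__(self, samples, transform=None):
        self.samples = samples      # e.g. [('imgs/cat.jpg', 0), ('imgs/dog.jpg', 1)]
        self.transform = transform  # e.g. torchvision transforms

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, label = self.samples[idx]
        img = Image.open(path).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        return img, label

# Wrap it in a DataLoader for batched training.
loader = DataLoader(MyImageDataset(samples=[]), batch_size=64, shuffle=True, num_workers=4)
```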