RongchangLi

Results: 9 issues by RongchangLi

Thanks a lot for sharing this tool. I now have many pictures to process (count: 15000*100; size: 256*256), so can I use the GPU to...

Thanks for sharing the pretrained R(2+1)D model on Kinetics. I tried to fine-tune it on smaller datasets like UCF101 or HMDB51, but it overfits heavily. So, can you share...

Hello, I find it gets slower after using SpatialCorrelationSampler (from 0.647 s to 0.832 s per iteration). Do you know the possible causes? Maybe it is CUDA (11.4) or cuDNN (8.2.4)...
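When comparing per-iteration timings like the 0.647 s vs. 0.832 s above, it helps to first rule out measurement noise with warmup runs and an averaged clock. A minimal, framework-free timing sketch (the workload passed to `time_iteration` here is a placeholder, not the actual training step; for CUDA code one would also call `torch.cuda.synchronize()` before each clock read):

```python
import time

def time_iteration(fn, warmup=3, iters=10):
    """Return the average wall-clock time of one call to `fn`.

    `fn` is a stand-in for a single training iteration. Warmup calls
    are discarded so one-time costs (allocation, JIT, caching) do not
    skew the average.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

# Dummy CPU workload standing in for an iteration of the real model.
avg = time_iteration(lambda: sum(i * i for i in range(10_000)))
print(f"{avg:.6f} s per iteration")
```

Running the same harness with and without the correlation layer isolates whether the slowdown comes from that layer or from elsewhere in the pipeline.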

Hi, my environment is: Python 3.8, PyTorch 1.8.0, cudatoolkit 11.4, torchvision 0.9.0, GCC 9.3.0. I tried the command `pip install spatial-correlation-sampler` and the result is: ``` (torch18) rongchang@BETA:~/extent_tools/Pytorch-Correlation-extension-master$ pip install spatial-correlation-sampler Defaulting...

Thank you for sharing the code. It helps a lot. But I have some confusion about the data loading procedure in **charades_video_tsn.py**. Here is the code: `` n = self.data['datas'][index]['n']...

In the paper, the result of VPT on CIFAR-100 is **78.8**, but I reproduce a worse result: **64.39**. Here is my command: `bash configs/VPT/VTAB/ubuntu_train_vpt_vtab.sh experiments/VPT/ViT-B_prompt_vpt_100.yaml ViT-B_16.npz`; lr is 0.001, weight...

Hi, here is my environment: torch==1.8, cuda==11.1. The error report is: ` File "/data/Disk_A/jiaye/lrc_code/ddp_std/main.py", line 368, in train scaler.scale(loss).backward() File "/data/Disk_A/jiaye/.conda/envs/torch19/lib/python3.9/site-packages/torch/tensor.py", line 245, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)...

Hello, thank you for your code. I downloaded the data, but after loading the NTU-T dataset, the printed info is as follows: **_sample_num in train 2320 _class 80 data_path :/data/Disk_B/action_data/self_ske/ntu120/NTU-T...

Here is the test code: ``` import torch import torch_pruning as tp import torch.nn as nn import clip.clip as clip clip_model = clip.load('ViT-B/32', device='cpu', jit=False, )[0] model = clip_model.visual.transformer.resblocks.cpu() print(model)...