Qi Han

16 comments of Qi Han

I ran into the same problem when using Docker. `ls -al xxx` shows: ``` -rwxrwxrwx 1 root root 389991 Feb 10 16:31 ../../../../data/kinetics400/videos_train/balloon_blowing/maymQ_gxL7w_000102_000112.mp4 ``` and `ffprobe -i xxx` shows: ``` ffprobe...

Hi, I also ran into this question about training accuracy. I trained the model using pretrained ViT weights, but the accuracy is lower than a baseline model (only the ViT backbone, without...

@zmy1116 Thanks for sharing the training configuration. I did the same thing, searching for a good learning rate with warm-up. In my setting, I use lr=0.01 and warm up...
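The warm-up schedule discussed here can be sketched as a simple linear ramp; the step counts and `base_lr=0.01` match the comment, but `warmup_steps` and the function name are my own assumptions, not the repo's actual scheduler:

```python
def warmup_lr(step, warmup_steps=500, base_lr=0.01):
    """Linearly ramp the learning rate from ~0 up to base_lr over
    warmup_steps, then hold it constant (decay omitted for brevity)."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr
```

In practice this value would be written into each `param_group["lr"]` of the optimizer at every step before a decay schedule takes over.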

Thanks for @zmy1116 's sharing. I will share the results on kinetics400 after testing this configuration.

@zmy1116 Hi, as for the three-crop testing you mentioned above: I think the ConvNet eventually also takes 224x224 input in the training phase. So could you forward the transformer with:...
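The three-crop evaluation being discussed can be sketched as follows: resize the short side to 224, then take three 224x224 crops along the longer axis and average the predictions over them. The helper below is a hypothetical illustration, not code from either repo:

```python
import numpy as np

def three_crops(frame, size=224):
    """Take left/center/right (or top/middle/bottom) size x size crops
    along the longer spatial axis of an HxWxC frame whose short side
    already equals `size`."""
    h, w = frame.shape[:2]
    if w >= h:
        starts = [0, (w - size) // 2, w - size]
        return [frame[:, s:s + size] for s in starts]
    starts = [0, (h - size) // 2, h - size]
    return [frame[s:s + size, :] for s in starts]
```

At test time the model logits from the three crops (and any temporal clips) are typically averaged to produce the final prediction.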

Hello, 1. The two are equivalent under a fixed quantization scale; you can refer to the OpenCV implementation of the classical Hough transform for a deeper understanding. Projecting each line onto one quantized point is equivalent to traversing each pixel according to the quantization scale and accumulating it into the corresponding points. 2. The forward pass of DHT is parameter-free; it only projects the features, changing their spatial distribution. Likewise, the backward pass is parameter-free: it simply passes the gradients back for chain-rule differentiation.

@xuecheng990531 The difference from the classical Hough transform is that the classical transform operates on edge-activation maps or other attribute maps, whereas the Deep Hough Transform accumulates features, corresponding to a complete forward/backward module in the network.
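The parameter-free accumulation described in these two replies can be sketched in plain numpy: every pixel's feature value (not a binary edge response) is added into the (theta, rho) bins of all lines passing through it. The bin counts and rho normalization below are illustrative assumptions, not the repo's actual CUDA implementation:

```python
import numpy as np

def dht_forward(feat, n_theta=8, n_rho=16):
    """Deep-Hough-style accumulation over a 2D feature map.

    Parameter-free: no learnable weights, just a re-projection of the
    feature values from (y, x) space into quantized (theta, rho) space.
    The backward pass would route gradients back along the same bins.
    """
    h, w = feat.shape
    out = np.zeros((n_theta, n_rho))
    diag = np.hypot(h, w)                       # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for y in range(h):
        for x in range(w):
            for t, theta in enumerate(thetas):
                rho = x * np.cos(theta) + y * np.sin(theta)
                r = int((rho + diag) / (2 * diag) * n_rho)
                r = min(max(r, 0), n_rho - 1)   # clamp to valid bin
                out[t, r] += feat[y, x]         # accumulate the feature
    return out
```

Because each pixel contributes exactly once per theta bin, the output total equals `n_theta * feat.sum()`, which makes the "projection only changes the spatial distribution" point concrete.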

Hi, currently the code does not support Windows. You can use a clean Ubuntu system with only CUDA and PyTorch installed to reproduce our semantic line detection.

We use CUDA 9/10 with PyTorch 1.3~1.6; CUDA 11 is not required.

I think the trouble is not a version mismatch between `ninja` and `PyTorch`. We use `pytorch>=1.3 and