Huzhen
Thank you very much for your careful reply about LDA and SDA, but I have another question about LDA and SDA for irregular feature maps. In your paper, the LDA and...
Hello, I am very interested in your design of implementing Self-Attention with convolutions and thereby replacing the bottleneck in the CNN backbone, but I have a few questions about the Contextual Transformer block:

1. When attending over the value map V with the contextual attention matrix w to obtain the attended feature map, why do you use LocalConvolution instead of a plain matrix multiplication? What is the reason for this design? Also, after the contextual attention matrix is reshaped into groups and combined with the value map through LocalConvolution, how exactly is this LocalConvolution implemented (a rough sketch of my current understanding follows below)? [reshape](https://github.com/JDAI-CV/CoTNet/blob/master/models/cotnet.py#85) - [LocalConvolution](https://github.com/JDAI-CV/CoTNet/blob/master/models/cotnet.py#88)
2. In the code, after the static key is fused with the contextual dynamic key, why is another Self-Attention-like operation applied? What is the goal of this design? The paper does not seem to cover this detail.
3. Finally, the forward pass of the model contains no position encoding or position bias. Is this because the convolution operation, with its ability to capture local-range information, replaces the previous Self-Attention mechanism, so position...
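Regarding question 1, my current reading (an assumption, since I have not studied the CUDA kernel behind LocalConvolution) is that it aggregates, for every spatial position, the k×k local neighborhood of the value map with that position's attention weights, instead of materializing a dense H×W by H×W attention matrix. A minimal PyTorch-only sketch of that interpretation is below; the function name `local_aggregation`, the per-channel weight layout `(B, C, k*k, H, W)`, and the shapes are hypothetical and not taken from the repo:

```python
import torch
import torch.nn.functional as F

def local_aggregation(v, w, kernel_size=3):
    """Weighted sum over each position's k*k neighborhood of the value map v.

    v: (B, C, H, W)        value map
    w: (B, C, k*k, H, W)   per-position local attention weights (softmax over k*k)
    """
    B, C, H, W = v.shape
    k = kernel_size
    # Unfold v into its k*k local neighborhoods: (B, C*k*k, H*W) -> (B, C, k*k, H, W)
    v_unf = F.unfold(v, kernel_size=k, padding=k // 2).view(B, C, k * k, H, W)
    # Each output position is the attention-weighted sum of its own neighborhood
    return (w * v_unf).sum(dim=2)

# Toy usage: 2 images, 16 channels, 8x8 feature map, 3x3 neighborhood
v = torch.randn(2, 16, 8, 8)
w = torch.softmax(torch.randn(2, 16, 9, 8, 8), dim=2)
out = local_aggregation(v, w)  # (2, 16, 8, 8)
```

If this interpretation is right, the full attention matrix is never built, which would explain preferring a local aggregation over a plain matrix multiplication; please correct me if the actual kernel works differently.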
Hello, I want to use the 'train_net' script under the tools folder to train the yolof-res101-dc5-1x version of the network, but because the first card of my group's server is...