
Create the ECA block in dev-1.x

[Open] wqe123321 opened this issue 2 years ago • 3 comments

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry; just make the pull request and seek help from the maintainers.

Motivation

I want to add an attention module to the Neck part, in order to strengthen the network's focus on informative features.

Modification

The code is as follows:

import torch.nn as nn
import torch.nn.functional as F
from mmcv.cnn import ConvModule
from mmcv.runner import BaseModule, auto_fp16

from mmrotate.models.builder import ROTATED_NECKS

class eca_layer(nn.Module):
    """Efficient Channel Attention (ECA) layer."""

    def __init__(self, channel, k_size=3):
        # Initialize the parent nn.Module first.
        super(eca_layer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.max_pool = nn.AdaptiveMaxPool2d(1)  # defined but unused in forward
        # A 1D convolution learns local cross-channel interaction.
        self.conv = nn.Conv1d(
            1, 1, kernel_size=k_size, padding=(k_size - 1) // 2, bias=False)
        self.sigmoid = nn.Sigmoid()  # activation

    def forward(self, x):
        # x: input features with shape [b, c, h, w]
        b, c, h, w = x.size()  # batch size, channels, height, width
        # Feature descriptor on the global spatial information.
        y = self.avg_pool(x)
        # squeeze()/unsqueeze() reshape between [b, c, 1, 1] and [b, 1, c]
        # so the 1D convolution can run along the channel dimension.
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        # Multi-scale information fusion.
        y = self.sigmoid(y)
        # Re-weight each channel of x by broadcasting y over it.
        return x * y.expand_as(x)
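
For a quick sanity check of the attention layer, here is a minimal sketch (the tensor sizes are illustrative) verifying that eca_layer only re-weights channels and preserves the input shape:

    import torch

    # ECA re-weights channels but keeps the input shape unchanged.
    layer = eca_layer(channel=256, k_size=3)
    x = torch.randn(2, 256, 64, 64)  # [batch, channels, height, width]
    out = layer(x)
    assert out.shape == x.shape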

@ROTATED_NECKS.register_module()
class ECA(BaseModule):
    """FPN-style neck that applies an ECA attention layer to each lateral."""

    def __init__(self,
                 in_channels,
                 out_channels,
                 num_outs,
                 start_level=0,
                 end_level=-1,
                 add_extra_convs=False,
                 relu_before_extra_convs=False,
                 no_norm_on_lateral=False,
                 conv_cfg=None,
                 norm_cfg=None,
                 act_cfg=None,
                 upsample_cfg=dict(mode='nearest'),
                 init_cfg=dict(
                     type='Xavier', layer='Conv2d', distribution='uniform')):
        super(ECA, self).__init__(init_cfg)
        assert isinstance(in_channels, list)
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.num_ins = len(in_channels)
        self.num_outs = num_outs
        self.relu_before_extra_convs = relu_before_extra_convs
        self.no_norm_on_lateral = no_norm_on_lateral
        self.fp16_enabled = False
        self.upsample_cfg = upsample_cfg.copy()

        if end_level == -1 or end_level == self.num_ins - 1:
            self.backbone_end_level = self.num_ins
            assert num_outs >= self.num_ins - start_level
        else:
            # If end_level is not the last level, no extra level is allowed.
            self.backbone_end_level = end_level + 1
            assert end_level < self.num_ins
            assert num_outs == end_level - start_level + 1
        self.start_level = start_level
        self.end_level = end_level
        self.add_extra_convs = add_extra_convs
        assert isinstance(add_extra_convs, (str, bool))
        if isinstance(add_extra_convs, str):
            # Extra_convs_source choices: 'on_input', 'on_lateral', 'on_output'
            assert add_extra_convs in ('on_input', 'on_lateral', 'on_output')
        elif add_extra_convs:  # True
            self.add_extra_convs = 'on_input'

        self.lateral_convs = nn.ModuleList()
        self.fpn_convs = nn.ModuleList()
        # ModuleList can hold many modules and is indexed like an ordinary
        # list, e.g. fpn_att[i].
        self.fpn_att = nn.ModuleList()

        # Iterate over the feature maps to read each level's channel count
        # and build the lateral, FPN and attention modules per level.
        for i in range(self.start_level, self.backbone_end_level):
            l_conv = ConvModule(
                in_channels[i],  # input channels
                out_channels,  # output channels
                1,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg if not self.no_norm_on_lateral else None,
                act_cfg=act_cfg,
                inplace=False)
            fpn_conv = ConvModule(
                out_channels,
                out_channels,
                3,
                padding=1,
                conv_cfg=conv_cfg,
                norm_cfg=norm_cfg,
                act_cfg=act_cfg,
                inplace=False)
            s_layer = eca_layer(out_channels)

            self.lateral_convs.append(l_conv)
            self.fpn_convs.append(fpn_conv)
            # fpn_att stores one attention module per feature level.
            self.fpn_att.append(s_layer)

        # The loop above covers every backbone feature map.
        # Add extra conv layers (e.g., RetinaNet).
        extra_levels = num_outs - self.backbone_end_level + self.start_level
        if self.add_extra_convs and extra_levels >= 1:
            for i in range(extra_levels):
                if i == 0 and self.add_extra_convs == 'on_input':
                    in_channels = self.in_channels[self.backbone_end_level - 1]
                else:
                    in_channels = out_channels
                extra_fpn_conv = ConvModule(
                    in_channels,
                    out_channels,
                    3,
                    stride=2,
                    padding=1,
                    conv_cfg=conv_cfg,
                    norm_cfg=norm_cfg,
                    act_cfg=act_cfg,
                    inplace=False)
                self.fpn_convs.append(extra_fpn_conv)

    @auto_fp16()
    def forward(self, inputs):
        """Forward function."""
        assert len(inputs) == len(self.in_channels)

        # build laterals (the backbone output feature maps)
        laterals = [
            lateral_conv(inputs[i + self.start_level])
            for i, lateral_conv in enumerate(self.lateral_convs)
        ]
        # Pass each lateral through its attention module and overwrite the
        # original feature map with the attended one; this is where the
        # feature maps are actually modified.
        for i, att in enumerate(self.fpn_att):
            laterals[i] = att(laterals[i])

        # build top-down path
        used_backbone_levels = len(laterals)
        for i in range(used_backbone_levels - 1, 0, -1):
            # In some cases, fixing `scale factor` (e.g. 2) is preferred, but
            # it cannot co-exist with `size` in `F.interpolate`.
            if 'scale_factor' in self.upsample_cfg:
                # fix runtime error of "+=" inplace operation in PyTorch 1.10
                laterals[i - 1] = laterals[i - 1] + F.interpolate(
                    laterals[i], **self.upsample_cfg)
            else:
                prev_shape = laterals[i - 1].shape[2:]
                laterals[i - 1] = laterals[i - 1] + F.interpolate(
                    laterals[i], size=prev_shape, **self.upsample_cfg)

        # build outputs
        # part 1: from original levels
        outs = [
            self.fpn_convs[i](laterals[i]) for i in range(used_backbone_levels)
        ]
        # part 2: add extra levels
        if self.num_outs > len(outs):
            # use max pool to get more levels on top of outputs
            # (e.g., Faster R-CNN, Mask R-CNN)
            if not self.add_extra_convs:
                for i in range(self.num_outs - used_backbone_levels):
                    outs.append(F.max_pool2d(outs[-1], 1, stride=2))
            # add conv layers on top of original feature maps (RetinaNet)
            else:
                if self.add_extra_convs == 'on_input':
                    extra_source = inputs[self.backbone_end_level - 1]
                elif self.add_extra_convs == 'on_lateral':
                    extra_source = laterals[-1]
                elif self.add_extra_convs == 'on_output':
                    extra_source = outs[-1]
                else:
                    raise NotImplementedError
                outs.append(self.fpn_convs[used_backbone_levels](extra_source))
                for i in range(used_backbone_levels + 1, self.num_outs):
                    if self.relu_before_extra_convs:
                        outs.append(self.fpn_convs[i](F.relu(outs[-1])))
                    else:
                        outs.append(self.fpn_convs[i](outs[-1]))
        return tuple(outs)
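
Once the module is registered via @ROTATED_NECKS.register_module(), it can be selected by name in a model config. A minimal sketch (the channel numbers are illustrative, assuming a ResNet-50-style backbone; the rest of the model config is omitted):

    # Hypothetical neck config; the values below are illustrative.
    neck = dict(
        type='ECA',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        num_outs=5,
        add_extra_convs='on_input')

Since lateral_convs and fpn_att are built in the same loop, the forward pass applies exactly one eca_layer per lateral, whatever len(in_channels) is.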

wqe123321 avatar Feb 20 '23 14:02 wqe123321

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
18 out of 24 committers have signed the CLA.

:white_check_mark: liuyanyi
:white_check_mark: zytx121
:white_check_mark: RangiLyu
:white_check_mark: yuyi1005
:white_check_mark: yxzhao2022
:white_check_mark: nijkah
:white_check_mark: YanxingLiu
:white_check_mark: zcablii
:white_check_mark: DapengFeng
:white_check_mark: fengshiwest
:white_check_mark: vansin
:white_check_mark: qianlian-mozi
:white_check_mark: kitecats
:white_check_mark: CSberlin
:white_check_mark: Li-Qingyun
:white_check_mark: crazysteeaam
:white_check_mark: JosonChan1998
:white_check_mark: jamiechoi1995
:x: jbwang1997
:x: k-papadakis
:x: RangeKing
:x: austinmw
:x: DonggeunYu
:x: yangxue0827
You have signed the CLA already but the status is still pending? Let us recheck it.

CLAassistant avatar Feb 20 '23 14:02 CLAassistant

Hi @wqe123321, thanks for your kind PR. It seems that your branch is out of date, which produces a large diff. We now require all PRs to be merged into a development branch, i.e., the dev branch for MMRotate 0.x and the dev-1.x branch for MMRotate 1.x. If your modification was not checked out from one of these two branches, it may cause conflicts, and you should re-apply it on top of them. If it was checked out from one of these branches, your branch might simply be out of date; in that case, please rebase your branch onto the target dev or dev-1.x branch. You can do that by modifying your pull behavior:

git config --local --add pull.rebase true  # make rebase the default behavior for pull

Then you can git pull and rebase onto the dev branch:

git pull dev-1.x  # if your modification is for MMRotate 1.x
git pull dev  # if your modification is for MMRotate 0.x

After rebasing, you might need to add the --force option when pushing code, e.g.,

git push [remote name] [your branch] --force
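
Equivalently, assuming the official repository is configured as a remote named upstream (the remote and branch names below are illustrative), the explicit sequence is:

git fetch upstream
git rebase upstream/dev-1.x  # or upstream/dev for MMRotate 0.x
git push origin my-branch --force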

zytx121 avatar Feb 24 '23 00:02 zytx121

Hi @wqe123321! We are grateful for your efforts in helping improve this open-source project during your personal time.

Welcome to join the OpenMMLab Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator, OpenMMLab, on Discord, or briefly share your open-source contributions in the #introductions channel, and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA If you have a WeChat account, you are welcome to join our community on WeChat as well: add our assistant, openmmlabwx, and include "mmsig + GitHub ID" as a remark when adding friends. :)

Thank you again for your contribution❤

OpenMMLab-Assistant001 avatar Apr 13 '23 03:04 OpenMMLab-Assistant001