iumyx2612

14 issues by iumyx2612

**Describe the feature** Be able to visualize the Effective Receptive Field (ERF) of a given layer ![image](https://user-images.githubusercontent.com/69593462/177044182-fc3c3639-d73e-4788-a6a6-1f4ff1ba2463.png) Picture taken from the SegFormer paper: https://arxiv.org/pdf/2105.15203.pdf **Motivation** I want to know why my model...
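A common way to estimate the ERF empirically is to backpropagate a gradient from the center location of a chosen feature map and visualize its magnitude on the input. A minimal PyTorch sketch of that idea (the model and layer choice are illustrative, not an existing MM-series API; pretrained weights would show the trained ERF):

```python
import torch
import torchvision

# Empirical ERF estimation: seed a gradient at the center of a feature map
# and look at its footprint on the input image.
model = torchvision.models.resnet50().eval()

feats = {}
def hook(module, inp, out):
    feats['out'] = out

# layer choice is illustrative; hook whichever layer you want to inspect
model.layer3.register_forward_hook(hook)

x = torch.randn(1, 3, 512, 512, requires_grad=True)
model(x)

out = feats['out']
# seed the gradient only at the spatial center of the feature map
grad_seed = torch.zeros_like(out)
grad_seed[:, :, out.shape[2] // 2, out.shape[3] // 2] = 1.0
out.backward(grad_seed)

# gradient magnitude on the input approximates the effective receptive field
erf = x.grad.abs().sum(dim=1).squeeze(0)  # (H, W)
erf = erf / erf.max()  # normalize for visualization, e.g. plt.imshow(erf)
```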

**Describe the feature** Add Cross-Iteration Batch Normalization from https://arxiv.org/abs/2002.05712, and gradient accumulation for training as in https://github.com/WongKinYiu/ScaledYOLOv4/blob/yolov4-large/train.py#L77. Cross-Iteration BN helps models trained with small batch sizes achieve better results, and gradient accumulation helps...
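Gradient accumulation itself is a few lines in a standard PyTorch training loop; a minimal self-contained sketch (model and data are dummies for illustration):

```python
import torch
import torch.nn as nn

# Step the optimizer every `accumulate` mini-batches so the effective
# batch size is accumulate * per-step batch size.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
accumulate = 4

# dummy data for illustration
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(16)]

optimizer.zero_grad()
for i, (x, y) in enumerate(data):
    loss = criterion(model(x), y)
    # scale so the accumulated gradient averages over the large effective batch
    (loss / accumulate).backward()
    if (i + 1) % accumulate == 0:
        optimizer.step()
        optimizer.zero_grad()
```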

**Describe the feature** **Motivation** I want to know why my experiments fail. **Related resources** pytorch-grad-cam: https://github.com/jacobgil/pytorch-grad-cam MMDetection...
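For reference, pytorch-grad-cam usage looks roughly like this (sketched from the repo's README; argument names have changed across versions, so treat this as an approximation rather than the exact API):

```python
import torch
import torchvision
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = torchvision.models.resnet50(pretrained=True).eval()
target_layers = [model.layer4[-1]]  # last block is a common choice for CNNs

input_tensor = torch.randn(1, 3, 224, 224)  # placeholder; normally a preprocessed image
cam = GradCAM(model=model, target_layers=target_layers)
targets = [ClassifierOutputTarget(281)]  # class index to explain

grayscale_cam = cam(input_tensor=input_tensor, targets=targets)[0]  # (H, W) in [0, 1]
# overlay on the original float RGB image in [0, 1]:
# from pytorch_grad_cam.utils.image import show_cam_on_image
# visualization = show_cam_on_image(rgb_img, grayscale_cam, use_rgb=True)
```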


Why is GIoULoss still used in GFL? To my understanding, GFL consists of QFL and DFL: QFL handles joint classification and IoU scoring, while DFL handles bbox regression. ![image](https://user-images.githubusercontent.com/69593462/160515239-56b9d991-c808-4fe5-8bb8-67fb9d4aab6b.png) So DFL is...
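For reference, MMDetection's GFL head does register all three losses side by side; a config sketch paraphrased from memory of `configs/gfl/gfl_r50_fpn_1x_coco.py` (exact values may differ by version):

```python
# bbox head portion of an MMDetection GFL config (paraphrased; check your
# mmdet version's configs/gfl/gfl_r50_fpn_1x_coco.py for the exact values)
bbox_head = dict(
    type='GFLHead',
    num_classes=80,
    in_channels=256,
    loss_cls=dict(type='QualityFocalLoss', use_sigmoid=True, beta=2.0, loss_weight=1.0),
    loss_dfl=dict(type='DistributionFocalLoss', loss_weight=0.25),
    # GIoULoss supervises the decoded boxes, while DFL supervises the
    # distribution over the discretized offsets, so they are complementary
    loss_bbox=dict(type='GIoULoss', loss_weight=2.0))
```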

I trained AutoAlbument for 11 epochs on my dataset and I think that's enough (the Average Parameter Change was oscillating within a small range). The output config has an augmentation with really...

How do I resume training?

Hello, I'm building a multi-task classifier: my model contains 1 feature extractor and 2 classification heads, so when implementing `__getitem__()` it needs to return 1 image with 2 labels...
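A minimal sketch of such a dataset (the data here is dummy; labels would normally come from your annotation files):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class MultiTaskDataset(Dataset):
    """Returns one image with two labels, one per classification head."""

    def __init__(self, images, labels_a, labels_b):
        # images: tensor (N, C, H, W); labels_a / labels_b: tensors of shape (N,)
        self.images = images
        self.labels_a = labels_a
        self.labels_b = labels_b

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # a tuple (or dict) of labels collates cleanly with the default collate_fn
        return self.images[idx], (self.labels_a[idx], self.labels_b[idx])

# dummy data for illustration
ds = MultiTaskDataset(torch.randn(100, 3, 224, 224),
                      torch.randint(0, 5, (100,)),
                      torch.randint(0, 3, (100,)))
loader = DataLoader(ds, batch_size=8, shuffle=True)
images, (y_a, y_b) = next(iter(loader))  # y_a: (8,), y_b: (8,)
```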

`ATSSHead` inherits from `AnchorHead`, which uses an `anchor_generator` of type `AnchorGenerator`. However, `FCOSHead` inherits from `AnchorFreeHead`, which uses an `anchor_generator` of type `MlvlPointGenerator`, the anchor-free counterpart. There...
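Conceptually, the difference is just in what each generator emits per feature-map location; a plain-PyTorch illustration for a single level (this is not the MMDetection API, only the idea behind it):

```python
import torch

# One feature-map level: an anchor generator emits boxes per location,
# a point generator emits center points per location.
def grid_centers(feat_h, feat_w, stride):
    ys, xs = torch.meshgrid(torch.arange(feat_h), torch.arange(feat_w), indexing='ij')
    # center of each cell in image coordinates
    return torch.stack([(xs + 0.5) * stride, (ys + 0.5) * stride], dim=-1).reshape(-1, 2)

def grid_anchors(feat_h, feat_w, stride, scale=8.0):
    centers = grid_centers(feat_h, feat_w, stride)
    half = 0.5 * scale * stride  # one square anchor per location, for simplicity
    return torch.cat([centers - half, centers + half], dim=-1)  # (N, 4) as x1, y1, x2, y2

points = grid_centers(4, 4, stride=8)   # (16, 2): what MlvlPointGenerator-style code yields
anchors = grid_anchors(4, 4, stride=8)  # (16, 4): what AnchorGenerator-style code yields
```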

SPPCSP is more computation-heavy than a plain SPP; is it better in terms of AP?
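For context, a simplified sketch of both blocks (structure paraphrased from memory of ScaledYOLOv4's implementation, not the exact code; the original uses more convs and a different activation):

```python
import torch
import torch.nn as nn

def conv(c1, c2, k=1):
    # Conv-BN-SiLU block; the original uses Mish/LeakyReLU depending on the variant
    return nn.Sequential(nn.Conv2d(c1, c2, k, 1, k // 2, bias=False),
                         nn.BatchNorm2d(c2), nn.SiLU())

class SPP(nn.Module):
    """Plain SPP: one bottleneck conv, parallel max-pools, one fusing conv."""
    def __init__(self, c1, c2, ks=(5, 9, 13)):
        super().__init__()
        c_ = c1 // 2
        self.cv1 = conv(c1, c_)
        self.pools = nn.ModuleList(nn.MaxPool2d(k, 1, k // 2) for k in ks)
        self.cv2 = conv(c_ * (len(ks) + 1), c2)

    def forward(self, x):
        x = self.cv1(x)
        return self.cv2(torch.cat([x] + [p(x) for p in self.pools], dim=1))

class SPPCSP(nn.Module):
    """CSP-style SPP: the input splits into a dense path (convs + SPP pools)
    and a shortcut path, then the two are fused; the extra 3x3 convs on the
    dense path are what make it heavier than plain SPP."""
    def __init__(self, c1, c2, ks=(5, 9, 13), e=0.5):
        super().__init__()
        c_ = int(2 * c2 * e)
        self.cv1 = conv(c1, c_)      # dense path entry
        self.cv2 = conv(c1, c_)      # shortcut path
        self.cv3 = conv(c_, c_, 3)
        self.pools = nn.ModuleList(nn.MaxPool2d(k, 1, k // 2) for k in ks)
        self.cv4 = conv(c_ * (len(ks) + 1), c_)
        self.cv5 = conv(c_, c_, 3)
        self.cv6 = conv(2 * c_, c2)  # fuse the two paths

    def forward(self, x):
        y1 = self.cv3(self.cv1(x))
        y1 = self.cv5(self.cv4(torch.cat([y1] + [p(y1) for p in self.pools], dim=1)))
        return self.cv6(torch.cat([y1, self.cv2(x)], dim=1))
```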

Why did you choose Layer Attention instead of normal Channel Attention? The task-interactive features are concatenated after **N** consecutive Conv layers, so using Channel Attention could further separate each channel to...
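For comparison, a minimal sketch of the two mechanisms (shapes and names are illustrative; the layer attention here paraphrases the TOOD-style idea, not the authors' exact code):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: one weight per channel, so N*C weights
    in total when features from N layers of C channels are concatenated."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (B, N*C, H, W)
        w = self.fc(x)                    # (B, N*C)
        return x * w[:, :, None, None]

class LayerAttention(nn.Module):
    """Layer attention: one weight per *layer*, so every channel coming
    from the same conv layer is scaled by the same scalar."""
    def __init__(self, channels, num_layers, reduction=4):
        super().__init__()
        self.num_layers = num_layers
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, num_layers), nn.Sigmoid())

    def forward(self, x):                 # x: (B, N*C, H, W)
        b, nc, h, w = x.shape
        weights = self.fc(x)              # (B, N), one scalar per layer
        x = x.reshape(b, self.num_layers, nc // self.num_layers, h, w)
        x = x * weights[:, :, None, None, None]
        return x.reshape(b, nc, h, w)

feats = torch.randn(2, 6 * 64, 32, 32)   # N=6 layers of 64 channels each
out = LayerAttention(6 * 64, num_layers=6)(feats)
```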