
Implementation of "Structural Knowledge Distillation for Object Detection"

Open · minhhotboy9x opened this issue 1 year ago · 2 comments

I read the paper "Structural Knowledge Distillation for Object Detection" and implemented KD with YOLOv8, but the results were very bad. I suspect the problem is the way I scaled the student and teacher feature maps: I combined both feature maps and scaled them jointly to the range [0, 1]. Has anybody used the method from the paper and gotten a better result than the original model? I just want to see how the KD loss is implemented.
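For reference, the joint [0, 1] scaling described above can be sketched as below. This is a hypothetical reconstruction of that idea, not the paper's actual loss; the function name `feature_kd_loss` and the MSE imitation term are my own choices for illustration.

```python
import torch
import torch.nn.functional as F

def feature_kd_loss(f_student, f_teacher):
    """Hypothetical feature-imitation KD loss: jointly min-max scale
    both feature maps to [0, 1], then take the MSE between them.
    Assumes the two maps have the same channel count; with yolov8n vs
    yolov8x an extra 1x1 adapter conv would be needed to match channels."""
    # Match spatial size if the two maps differ.
    if f_student.shape[-2:] != f_teacher.shape[-2:]:
        f_student = F.interpolate(f_student, size=f_teacher.shape[-2:])
    # Joint min/max over BOTH maps, so they share one scale.
    combined = torch.cat([f_student.flatten(), f_teacher.flatten()])
    lo, hi = combined.min(), combined.max()
    scale = (hi - lo).clamp(min=1e-8)  # avoid division by zero
    s = (f_student - lo) / scale
    t = (f_teacher - lo) / scale
    # Teacher is detached so gradients only flow into the student.
    return F.mse_loss(s, t.detach())
```

One caveat with this joint scaling: the min/max are batch-dependent, so a single outlier activation in either map rescales everything, which may explain unstable results.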

minhhotboy9x avatar Oct 10 '23 14:10 minhhotboy9x

Hi @minhhotboy9x! I am trying to distill yolov8x into yolov8n too, but I don't understand how to extract feature maps during training. Could you be kind enough to tell me how you extracted them? Thanks!

mzikkhan avatar Oct 18 '23 04:10 mzikkhan

@mzikkhan I'm using an old version of Ultralytics YOLOv8, so if you want to extract them you may have to find the corresponding file in the latest version. In my work, I modified ultralytics/nn/tasks.py: in the _forward_once function of the BaseModel class, I added a parameter mask_id holding the indexes of the feature maps I want to extract. In the original inference loop, a variable m iterates over each module in the model; I simply collect the output of every module whose index m.i is in my mask_id. You can see the code below for easier understanding.

 class BaseModel(nn.Module):
    """
    The BaseModel class serves as a base class for all the models in the Ultralytics YOLO family.
    """

    def forward(self, x, profile=False, visualize=False, mask_id=()):
        """
        Forward pass of the model on a single scale.
        Wrapper for `_forward_once` method.

        Args:
            x (torch.Tensor): The input image tensor
            profile (bool): Whether to profile the model, defaults to False
            visualize (bool): Whether to return the intermediate feature maps, defaults to False
            mask_id (Sequence[int]): Indexes of the modules whose feature maps should be returned

        Returns:
            (torch.Tensor): The output of the network.
        """
        return self._forward_once(x, profile, visualize, mask_id)

    def _forward_once(self, x, profile=False, visualize=False, mask_id=()):
        """
        Perform a forward pass through the network.

        Args:
            x (torch.Tensor): The input tensor to the model
            profile (bool): Print the computation time of each layer if True, defaults to False
            visualize (bool): Save the feature maps of the model if True, defaults to False
            mask_id (Sequence[int]): Indexes of the modules whose feature maps should be collected

        Returns:
            (torch.Tensor): The last output of the model, or a tuple
                `(output, feature_mask)` when `mask_id` is non-empty.
        """
        y, dt = [], []  # outputs
        feature_mask = []  # outputs of the modules listed in mask_id
        for m in self.model:
            if m.f != -1:  # if not from previous layer
                x = y[m.f] if isinstance(m.f, int) else [x if j == -1 else y[j] for j in m.f]  # from earlier layers
            if profile:
                self._profile_one_layer(m, x, dt)
            x = m(x)  # run
            y.append(x if m.i in self.save else None)  # save output
            if visualize:
                LOGGER.info('visualize feature not yet supported')
                # TODO: feature_visualization(x, m.type, m.i, save_dir=visualize)
            if m.i in mask_id:
                feature_mask.append(x)
        if feature_mask:
            return x, feature_mask
        return x
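To show how the extraction pattern above is used, here is a minimal stand-alone sketch. `TinyModel` and its layer sizes are invented stand-ins for the modified BaseModel (so this runs without Ultralytics); it mirrors the `feature_mask` logic from `_forward_once` and then computes a simple MSE imitation loss between student and teacher features, which is an assumption on my part, not the paper's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyModel(nn.Module):
    """Hypothetical stand-in for the modified BaseModel: a plain
    stack of convs whose forward collects the outputs of the layers
    whose indexes appear in mask_id."""
    def __init__(self):
        super().__init__()
        self.model = nn.ModuleList([
            nn.Conv2d(3, 8, 3, padding=1),
            nn.Conv2d(8, 16, 3, padding=1),
            nn.Conv2d(16, 16, 3, padding=1),
        ])

    def forward(self, x, mask_id=()):
        feature_mask = []
        for i, m in enumerate(self.model):
            x = m(x)
            if i in mask_id:           # same check as `m.i in mask_id`
                feature_mask.append(x)
        if feature_mask:
            return x, feature_mask
        return x

student, teacher = TinyModel(), TinyModel()
img = torch.randn(1, 3, 32, 32)
out_s, feats_s = student(img, mask_id=[1, 2])
with torch.no_grad():                  # teacher gives targets only
    out_t, feats_t = teacher(img, mask_id=[1, 2])
kd_loss = sum(F.mse_loss(s, t) for s, t in zip(feats_s, feats_t))
```

In the real setup the same `mask_id` list would be passed to both the student and teacher YOLOv8 models, and `kd_loss` would be added to the detection loss during training.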
 

minhhotboy9x avatar Oct 20 '23 08:10 minhhotboy9x