
Splitting model into Encoder and Decoder

Open vietanhdev opened this issue 1 year ago • 10 comments

Hello! I really like this project. Do you plan to support splitting this model into an Encoder and a Decoder like the original SAM? That way, the Decoder part can run very fast, and we can apply it to applications like AnyLabeling. I'd love to help integrate it into AnyLabeling if we can find a way to split the model. Thank you very much!

vietanhdev avatar Jun 22 '23 16:06 vietanhdev

We can also convert the model to ONNX to remove the dependence on PyTorch, which is much larger than ONNX Runtime. My code for the original SAM is here.

vietanhdev avatar Jun 22 '23 17:06 vietanhdev

Hello! I really like this project. Do you plan to support splitting this model into an Encoder and a Decoder like the original SAM? That way, the Decoder part can run very fast, and we can apply it to applications like AnyLabeling. I'd love to help integrate it into AnyLabeling if we can find a way to split the model. Thank you very much!

Our approach is divided into two stages: All-instance Segmentation and Prompt-guided Selection. The first stage can be considered the encoder and the second stage the decoder. Integration into AnyLabeling is also feasible. Thank you for your suggestion. We will package it as a SAM-like Encoder and Decoder in the near future.
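
Read as a pipeline, the two stages map onto the encoder/decoder split roughly as follows (a minimal sketch; both function names are hypothetical, not the repository's API):

all_masks = all_instance_segment(image)         # stage 1: heavy, run once per image ("encoder")
mask = prompt_guided_select(all_masks, prompt)  # stage 2: light, rerun per prompt ("decoder")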

an-yongqi avatar Jun 23 '23 02:06 an-yongqi

We can also convert the model to ONNX to remove the dependence on PyTorch, which is much larger than ONNX Runtime. My code for the original SAM is here.

We referred to the YOLOv8-to-ONNX tutorial for the ONNX conversion, and code for running inference directly with ONNX is planned for release in the near future.
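
For reference, the YOLOv8 route usually goes through the ultralytics export API; here is a minimal sketch, assuming the FastSAM checkpoint loads through that interface (an assumption, since FastSAM ships its own model class):

from ultralytics import YOLO

# Load the YOLOv8-based checkpoint and export it to ONNX.
# Whether FastSAM.pt loads directly via YOLO() is an assumption here.
model = YOLO('./weights/FastSAM.pt')
model.export(format='onnx', imgsz=1024)  # writes an .onnx file next to the weights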

an-yongqi avatar Jun 23 '23 02:06 an-yongqi

Hello! I really like this project. Do you plan to support splitting this model into an Encoder and a Decoder like the original SAM? That way, the Decoder part can run very fast, and we can apply it to applications like AnyLabeling. I'd love to help integrate it into AnyLabeling if we can find a way to split the model. Thank you very much!

Our approach is divided into two stages: All-instance Segmentation and Prompt-guided Selection. The first stage can be considered the encoder and the second stage the decoder. Integration into AnyLabeling is also feasible. Thank you for your suggestion. We will package it as a SAM-like Encoder and Decoder in the near future.

Thanks. I'm waiting for it too.

asizdzbest avatar Jun 23 '23 03:06 asizdzbest

We have created a new branch to integrate with AnyLabeling. We have divided the functionality into three functions: point_prompt, box_prompt, and text_prompt, which can be seen as decoders for the model. Could you provide a more detailed description of the specific functionality we need to encapsulate?😊

YinglongDu avatar Jun 25 '23 08:06 YinglongDu

@YinglongDu

  • It's best if you can divide the model into two small models: an Encoder and a Decoder. The Encoder part (your backbone) can be computed separately. In this way, we can precompute the encoder output once; each time the user adds some boxes or points, we only have to run the decoder again, with the cached encoder result and the prompt. This interaction can be really fast.
  • Convert the model to ONNX format to reduce the dependencies. The binary distribution of AnyLabeling is ~130MB (you can see it here). That means users only have to download ~130MB to run the model. If we use a .pth model, we need PyTorch, which is about 700MB to 1GB+; that would cost much more time and space to download the program. (See the ONNX Runtime sketch after this list.)
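
For context, running an exported model needs only the onnxruntime package; here is a minimal sketch, assuming the encoder has been exported to encoder.onnx (a hypothetical path) with a single image input:

import numpy as np
import onnxruntime as ort

# Load the exported encoder; onnxruntime is a much smaller dependency than PyTorch.
session = ort.InferenceSession("encoder.onnx")  # hypothetical export path

# Input name and shape depend on how the model was exported.
input_name = session.get_inputs()[0].name
dummy_image = np.zeros((1, 3, 1024, 1024), dtype=np.float32)
outputs = session.run(None, {input_name: dummy_image})
image_embedding = outputs[0]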

We can calculate the encoder as follows:

image_embedding = run_encoder(image)

And calculate the decoder after that:

masks = run_decoder(
      image_embedding,
      prompt,
)
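
As a usage sketch (the prompt list is hypothetical): the heavy encoder runs once per image, while the light decoder reruns for each prompt the user adds:

image_embedding = run_encoder(image)               # slow: run once per image
for prompt in user_prompts:                        # e.g., points/boxes added interactively
    masks = run_decoder(image_embedding, prompt)   # fast: rerun per prompt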

vietanhdev avatar Jun 25 '23 12:06 vietanhdev

We have released the API for FastSAM. Please see fastsam/decoder.py for details.

# import model
from fastsam import FastSAM, FastSAMDecoder

DEVICE = 'cuda'  # or 'cpu'
model = FastSAM('./weights/FastSAM.pt')
fastsam = FastSAMDecoder(model, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

# Encoder
image_embedding = fastsam.run_encoder(image)

# Decoder
ann = fastsam.run_decoder(image_embedding, point_prompt=[[506, 340]], point_label=[1])
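
Since the embedding is cached, further prompts can reuse it and only the lightweight decoder reruns; for example, with a second (hypothetical) click:

# Reuse the cached embedding for another point; only the decoder runs again.
ann2 = fastsam.run_decoder(image_embedding, point_prompt=[[200, 150]], point_label=[1])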

YinglongDu avatar Jun 27 '23 08:06 YinglongDu

Is there any update on the ONNX export for both the encoder and decoder parts?

mario-dg avatar Jun 28 '23 18:06 mario-dg

any update?

morestart avatar Jul 03 '23 08:07 morestart

We have released the API for FastSAM. Please see fastsam/decoder.py for details.

# import model
from fastsam import FastSAM, FastSAMDecoder

DEVICE = 'cuda'  # or 'cpu'
model = FastSAM('./weights/FastSAM.pt')
fastsam = FastSAMDecoder(model, device=DEVICE, retina_masks=True, imgsz=1024, conf=0.4, iou=0.9)

# Encoder
image_embedding = fastsam.run_encoder(image)

# Decoder
ann = fastsam.run_decoder(image_embedding, point_prompt=[[506, 340]], point_label=[1])

Thanks for the updates. However, it seems that the 'image_embeddings' are actually wrapped results including masks and boxes. Is it possible to get intermediate results such as the encoded feature maps, like those in the original SAM repository? The encoded features would be more valuable for adapting to downstream tasks.

Looking forward to your reply.

ggsDing avatar Jul 04 '23 07:07 ggsDing