
Poll: Which combination of backbone and architecture provides you with the best results?

Open JordanMakesMaps opened this issue 6 years ago • 5 comments

As indicated in the title, I'm just curious to see which combination of backbones and architecture gives the best results for the dataset you're working with.

I was using ResNet18 with U-Net on 1024×1024 images and recently switched to efficientnetb3, and the results were substantially better.

Anyone else have some winning combinations they'd recommend?

JordanMakesMaps avatar Sep 14 '19 18:09 JordanMakesMaps

I had the best results with inceptionresnetv2 + unet.

My tests at the time included some other backbones not supported by this project, but didn't include the EfficientNet family if I recall correctly, so I'd be interested in your experience if you compare it with inceptionresnetv2.

theroggy avatar Oct 01 '19 07:10 theroggy


Hi, how can I calculate dice, recall, and precision for my model?

sfarahaninia73 avatar Oct 30 '19 07:10 sfarahaninia73

@sfarahaninia73 there are already metric functions, created by @qubvel, that will compute these for you during the training process:

```python
import segmentation_models as sm
from segmentation_models.losses import categorical_focal_dice_loss
from segmentation_models.metrics import precision, recall, iou_score, f1_score, f2_score
from keras.optimizers import Adam
import matplotlib.pyplot as plt

BACKBONE = 'efficientnetb3'
preprocess_input = sm.get_preprocessing(BACKBONE)

# size = input image dimension, nb_classes = number of classes in your dataset
model = sm.Unet(backbone_name=BACKBONE,
                input_shape=(size, size, 3),
                encoder_weights='imagenet',
                activation='softmax',
                classes=nb_classes,
                encoder_freeze=True)

model.compile(optimizer=Adam(lr=0.001),
              loss=categorical_focal_dice_loss,
              metrics=[precision, recall, iou_score, f1_score, f2_score])

model.summary()

# ... train with model.fit(...), which returns the `history` object

# to view a metric across all epochs, plot it
plt.figure(figsize=(10, 5))
plt.plot(history.history["precision"], label="precision")
plt.plot(history.history["val_precision"], label="val_precision")
plt.title("Training Precision")
plt.xlabel("Epoch #")
plt.ylabel("Precision")
plt.legend(loc="upper right")
plt.show()

plt.figure(figsize=(10, 5))
plt.plot(history.history["recall"], label="recall")
plt.plot(history.history["val_recall"], label="val_recall")
plt.title("Training Recall")
plt.xlabel("Epoch #")
plt.ylabel("Recall")
plt.legend(loc="upper right")
plt.show()
```

JordanMakesMaps avatar Oct 30 '19 18:10 JordanMakesMaps

I want to calculate these parameters between the predicted mask and the original (ground-truth) mask, but I think this code cannot do that.


sfarahaninia73 avatar Oct 31 '19 05:10 sfarahaninia73
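For computing dice, precision, and recall directly between a predicted mask and a ground-truth mask (outside of training), the pixel counts can be tallied with plain NumPy. This is a minimal sketch; the `mask_metrics` helper is hypothetical, not part of segmentation_models:

```python
import numpy as np

def mask_metrics(y_true, y_pred, eps=1e-7):
    """Compute dice, precision, and recall between two binary masks."""
    y_true = y_true.astype(bool)
    y_pred = y_pred.astype(bool)
    tp = np.logical_and(y_true, y_pred).sum()   # true positives
    fp = np.logical_and(~y_true, y_pred).sum()  # false positives
    fn = np.logical_and(y_true, ~y_pred).sum()  # false negatives
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return dice, precision, recall

# Example: the prediction covers half of the ground-truth foreground
gt   = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 0, 0]])
dice, prec, rec = mask_metrics(gt, pred)
```

For multi-class masks, the same counts can be accumulated per class and averaged.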

My best results were with U-Net and a ResNet50 encoder, using total_loss = dice_loss + (1 * focal_loss), for multi-class segmentation of remote sensing images.

tonyboston-au avatar Jun 09 '21 03:06 tonyboston-au
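In segmentation_models, a combined loss like the one above can be composed with arithmetic, e.g. `total_loss = sm.losses.DiceLoss() + (1 * sm.losses.CategoricalFocalLoss())`. As a framework-free illustration of what that sum computes, here is a NumPy sketch; `dice_loss`, `focal_loss`, and `total_loss` are hypothetical helpers, and gamma=2.0 is an assumed focal-loss default:

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # Soft dice loss over probability maps: 1 - 2*intersection / (sum of areas)
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    # Focal loss down-weights easy pixels via the (1 - p)^gamma factor
    p = np.clip(y_pred, eps, 1.0 - eps)
    return float(np.mean(-y_true * (1.0 - p) ** gamma * np.log(p)))

def total_loss(y_true, y_pred):
    # dice_loss + (1 * focal_loss), as in the comment above
    return dice_loss(y_true, y_pred) + 1.0 * focal_loss(y_true, y_pred)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
y_pred = np.array([0.9, 0.8, 0.1, 0.2])
loss = total_loss(y_true, y_pred)
```

The dice term penalizes poor region overlap while the focal term concentrates the cross-entropy on hard pixels, which is why the combination often works well for imbalanced segmentation classes.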