segmentation_models
Poll: Which combination of backbone and architecture provides you with the best results?
As indicated in the title, I'm just curious to see which combination of backbones and architecture gives the best results for the dataset you're working with.
I was using ResNet18 with U-Net on 1024×1024 images and recently switched to efficientnetb3; the results were substantially better.
Anyone else have some winning combinations they'd recommend?
I had the best results with inceptionresnetv2 + unet.
My tests at the time included some other backbones not supported by this project, but didn't include efficientnetb if I recall correctly, so I'd be interested to hear how it compares with inceptionresnetv2 in your experience.
Hi, how can I calculate Dice, recall, and precision for my model?
@sfarahaninia73 there are already functions that were created by @qubvel that will do this for you during the training process:
import segmentation_models as sm
from segmentation_models.losses import categorical_focal_dice_loss
from segmentation_models.metrics import precision, recall, iou_score, f1_score, f2_score
from keras.optimizers import Adam

BACKBONE = 'efficientnetb3'
preprocess_input = sm.get_preprocessing(BACKBONE)

model = sm.Unet(input_shape=(size, size, 3),
                backbone_name=BACKBONE,
                encoder_weights='imagenet',
                activation='softmax',
                classes=nb_classes,
                encoder_freeze=True)

model.compile(optimizer=Adam(lr=0.001),
              loss=[categorical_focal_dice_loss],
              metrics=[precision, recall, iou_score, f1_score, f2_score])
model.summary()
...
# to view it across all epochs, plot it
plt.figure(figsize=(10, 5))
plt.plot(history.history["precision"], label="precision")
plt.plot(history.history["val_precision"], label="val_precision")
plt.title("Training Precision")
plt.xlabel("Epoch #")
plt.ylabel("Precision")
plt.legend(loc="upper right")
plt.show()
plt.figure(figsize=(10, 5))
plt.plot(history.history["recall"], label="recall")
plt.plot(history.history["val_recall"], label="val_recall")
plt.title("Training Recall")
plt.xlabel("Epoch #")
plt.ylabel("Recall")
plt.legend(loc="upper right")
plt.show()
I want to calculate these metrics between a predicted mask and the original (ground-truth) mask, but I don't think this code can do that — it only reports them during training.
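If you want the metrics between a predicted mask and a ground-truth mask after training, you can compute them directly from the two arrays. Here's a minimal NumPy sketch for the binary case (the function name mask_metrics and the example masks are just illustrative, not part of this library):

```python
import numpy as np

def mask_metrics(pred, true):
    """Precision, recall, and Dice between two binary masks of the same shape."""
    pred = pred.astype(bool)
    true = true.astype(bool)
    tp = np.logical_and(pred, true).sum()   # predicted 1, actually 1
    fp = np.logical_and(pred, ~true).sum()  # predicted 1, actually 0
    fn = np.logical_and(~pred, true).sum()  # predicted 0, actually 1
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    return precision, recall, dice

# example: a 2x2 predicted mask vs. the ground truth
pred = np.array([[1, 1], [0, 0]])
true = np.array([[1, 0], [1, 0]])
p, r, d = mask_metrics(pred, true)  # one true positive, one FP, one FN
```

For multi-class masks you would apply this per class (one-hot or argmax'd channel at a time) and average.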
My best results were with U-Net and a ResNet50 encoder (pretrained encoder weights), using total_loss = dice_loss + (1 * focal_loss), for multi-class segmentation of remote sensing images.
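If I recall correctly, that combination can be built directly from the library's loss objects, which support + and * (e.g. sm.losses.dice_loss + (1 * sm.losses.categorical_focal_loss)). As a rough NumPy sketch of what the combined objective computes in the binary case (these functions are illustrative, not the library's implementation):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    # soft Dice loss: 1 - 2*intersection / (sum of both masks)
    inter = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    # binary focal loss: down-weights easy pixels by (1 - p_t)^gamma
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    pt = np.where(y_true == 1, y_pred, 1.0 - y_pred)
    return np.mean(-((1.0 - pt) ** gamma) * np.log(pt))

def total_loss(y_true, y_pred, focal_weight=1.0):
    # dice_loss + (1 * focal_loss), as in the combination above
    return dice_loss(y_true, y_pred) + focal_weight * focal_loss(y_true, y_pred)

# toy example: confident, mostly-correct predictions give a small loss
y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.1, 0.8, 0.2])
loss = total_loss(y_true, y_pred)
```

The Dice term drives region overlap while the focal term keeps hard pixels from being drowned out, which is likely why the sum works well on imbalanced remote sensing classes.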