Tony Boston
I have the same error @lewisdonovan @yoyoshuang
Great software @jcjohnson. Do you have any suggestions on how to fix this issue?
@lewisdonovan @yoyoshuang @jcjohnson Solved the problem by following comments here: https://github.com/jcjohnson/torch-rnn/issues/58 :)
Thanks Jordan. The existing code looks like this: `BACKBONE = 'resnet50'` `model = sm.Unet(BACKBONE, encoder_weights='imagenet', classes=n_classes, activation=activation, input_shape=(None, None, N))`, which downloads encoder weights from: https://github.com/qubvel/classification_models/releases/download/0.0.1/resnet50_imagenet_1000_no_top.h5
How do you load weights...
Thanks Jordan - I get the idea but the implementation is more difficult! For the segmentation_models unet model: `BACKBONE = 'resnet50'` `model = sm.Unet(BACKBONE, encoder_weights='imagenet', classes=n_classes, activation=activation, input_shape=(None, None, N))`...
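For anyone following along, a minimal sketch of one possible approach (assuming Keras's `load_weights(..., by_name=True)`; the weights path, `N`, `n_classes` and `activation` values below are placeholders, not my actual settings):

```python
import segmentation_models as sm

BACKBONE = 'resnet50'
N = 3                   # number of input channels (placeholder)
n_classes = 2           # placeholder
activation = 'softmax'  # placeholder

# Build the same Unet but without downloading the qubvel ImageNet weights
model = sm.Unet(BACKBONE,
                encoder_weights=None,
                classes=n_classes,
                activation=activation,
                input_shape=(None, None, N))

# Load alternative pretrained ResNet50 weights from a local .h5 file.
# by_name=True copies weights only into layers whose names match;
# skip_mismatch=True skips layers whose shapes differ (e.g. the first conv when N != 3).
model.load_weights('path/to/alternative_resnet50_weights.h5',
                   by_name=True, skip_mismatch=True)
```

This only works if the layer names in the .h5 file line up with the layer names in the Unet encoder, which is exactly the sticking point below.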
Can't work out how the layer names correspond as they look so different. Unet layer names are neat and tidy but ResNet50 names are messy... Unet layer names: `data bn_data...
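One rough way to work out the correspondence is to dump layer names together with their weight shapes for both models and match on shape rather than name. A sketch of that idea (untested, assuming tf.keras and segmentation_models are installed):

```python
import segmentation_models as sm
from tensorflow.keras.applications import ResNet50

unet = sm.Unet('resnet50', encoder_weights=None, input_shape=(None, None, 3))
resnet = ResNet50(weights=None, include_top=False)

def describe(model):
    # (layer name, list of weight shapes) for every layer that has weights
    return [(layer.name, [tuple(w.shape) for w in layer.get_weights()])
            for layer in model.layers if layer.get_weights()]

# Print the first few entries of each to line the two models up by weight shape
for name, shapes in describe(unet)[:20]:
    print(name, shapes)
print('---')
for name, shapes in describe(resnet)[:20]:
    print(name, shapes)
```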
Thanks for your help @JordanMakesMaps and @pluniak. @pluniak - these sound like more doable options. I have moved on to other things for now but may come back to...
Thanks @adolfogc. I've moved on from this as the differences in performance using weights from different sources appear to be minor, but thanks for your suggestions.
You could try pycm: https://pypi.org/project/pycm/ Supports calculation of F1 scores by class and many other statistics... "PyCM is a multi-class confusion matrix library written in Python that supports both input...
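A minimal example of the sort of thing I mean (made-up label vectors; attribute names as I recall them from the pycm docs):

```python
from pycm import ConfusionMatrix

actual = [0, 1, 2, 2, 1, 0, 2]
predicted = [0, 2, 2, 2, 0, 0, 1]

cm = ConfusionMatrix(actual_vector=actual, predict_vector=predicted)

print(cm.F1)           # per-class F1 scores (dict keyed by class)
print(cm.Overall_ACC)  # overall accuracy
print(cm)              # full confusion matrix plus class and overall statistics
```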
Hi @elliestath I have noticed this as well and don't know why the differences occur. Sometimes pycm gives better stats (e.g. overall accuracy and mean F1) than sm and sometimes...