Rustem Galiullin
@sushmanthreddy just use my fork in the meantime: `pip install git+https://github.com/Rusteam/segmentation_models.pytorch.git@sam`
> @Rusteam Do you have any reference for learning curves with another backbone on your task, e.g. resnet18/34 for comparison?

This one includes the `mit_b1` backbone with Unet.
> @Rusteam Do you have any reference for learning curves with another backbone on your task, e.g. resnet18/34 for comparison?

Ok, I will do it shortly.
@qubvel removed the SAM decoder. I'll see if I get time to finish its implementation in the near future and add it in a new PR if I do.
> Btw, how many output tensors does the encoder have? Is it more than the usual 5? Is it 12 outputs?

It's actually just one output tensor at the moment....
> @Rusteam
>
> ```python
> model = smp.SAM(
>     encoder_name="sam-vit_b",
>     encoder_weights="sa-1b",
>     weights=None,
>     image_size=64,
>     decoder_multimask_output=decoder_multiclass_output,
>     classes=n_classes,
> )
> ```
>
> what should we add in the weights variable?

We've decided to remove `smp.SAM` for the moment...
@hxgqh you need to install the [annlite](https://pypi.org/project/annlite/) package. I had the same issue and resolved it that way.
Check this file, `flash/core/utilities/imports.py`:

```python
_IMAGE_AVAILABLE = all(
    [
        _TORCHVISION_AVAILABLE,
        _TIMM_AVAILABLE,
        _PIL_AVAILABLE,
        _ALBUMENTATIONS_AVAILABLE,
        _PYSTICHE_AVAILABLE,
        _SEGMENTATION_MODELS_AVAILABLE,
    ]
)
```

If any of the above is False, then it throws this error....
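To find out which of those flags is False, a quick standalone sketch is to probe each underlying module with `importlib.util.find_spec`. Note the flag-to-module mapping below is my assumption based on the flag names, not something taken from the flash source:

```python
import importlib.util

# Assumed mapping from flash availability flags to the modules they check.
DEPS = {
    "_TORCHVISION_AVAILABLE": "torchvision",
    "_TIMM_AVAILABLE": "timm",
    "_PIL_AVAILABLE": "PIL",
    "_ALBUMENTATIONS_AVAILABLE": "albumentations",
    "_PYSTICHE_AVAILABLE": "pystiche",
    "_SEGMENTATION_MODELS_AVAILABLE": "segmentation_models_pytorch",
}

def check_image_deps() -> dict:
    """Return {flag_name: bool} showing which dependency is importable."""
    return {
        flag: importlib.util.find_spec(module) is not None
        for flag, module in DEPS.items()
    }

if __name__ == "__main__":
    for flag, available in check_image_deps().items():
        print(f"{flag}: {available}")
```

Any flag printed as `False` points at the missing package to `pip install`.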
> I wonder if you could also add a HOTA metric to this
>
> https://github.com/cheind/py-motmetrics/blob/8c25d76c03ad77e0c8a180717be75c53618a5541/motmetrics/tests/test_metrics.py#L422
>
> test?

I guess we could derive the necessary GT-values by running the...
> you mean to add it to `mm.metrics.motchallenge_metrics`?

I believe one could make a separate list of metrics for HOTA, such that we can write

```...