pretrained-models.pytorch
vggm not usable
Hi,
currently vggm is not a usable architecture even though it is already implemented. Could you please fix it?
Thank you!
@noovd I could not make vggm work in PyTorch, sorry. That is why it is not included in the API. It uses old torch7 layers ported to PyTorch (which do not work with CUDA).
I encourage you to evaluate it on CPU (and maybe port the old torch7 layer). As far as I remember, the scores are similar to the Caffe implementation.
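To evaluate on CPU, here is a minimal sketch. It assumes the `vggm` constructor in `pretrainedmodels/models/vggm.py` takes `num_classes` and `pretrained` arguments like the other models in this repository; check that file for the exact signature.

```python
import torch
from pretrainedmodels.models import vggm

# Assumption: vggm.vggm(...) builds the architecture and loads the ported
# torch7 weights; see pretrainedmodels/models/vggm.py for the exact signature.
model = vggm.vggm(num_classes=1000, pretrained='imagenet')
model.eval()

# Dummy input at the 221x221 size the model expects, kept on CPU.
x = torch.randn(1, 3, 221, 221)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # expected: torch.Size([1, 1000])
```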
Could you be more precise about what is broken? I am actually able to use your 'vggm' implementation with the given ImageNet pretrained weights on GPU, but the results I get are unexpectedly bad for several evaluation tasks, even though I took normalization and RGB -> BGR conversion into account. I was wondering if you could help me out there? Is this the part which is not working?
@noovd Did you ensure that the input image is in the [0, 255] range before you apply normalization?
I use the following code:

```python
transformations = transforms.Compose([
    transforms.Scale(224),
    transforms.ToTensor(),
    transforms.Lambda(lambda x: x.mul(255))
])
```

and then subtract the mean value.
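For reference, a minimal sketch of what the mean-subtraction step could look like. The per-channel values below are the common Caffe/ImageNet means and are an assumption on my part; the exact values used for the released vggm weights should be read from `pretrainedmodels/models/vggm.py`.

```python
import torch

# Assumed per-channel means (RGB order), the common Caffe/ImageNet values;
# check pretrainedmodels/models/vggm.py for the values used by these weights.
MEAN = torch.tensor([123.68, 116.779, 103.939]).view(3, 1, 1)

def subtract_mean(x):
    """x: float tensor of shape (3, H, W), already scaled to [0, 255]."""
    return x - MEAN

# Usage after the transforms above: img = subtract_mean(transformations(pil_image))
```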
In my case I am able to fine-tune the model without any issue. Hope this helps solve your problem.
Do we need to convert the images from RGB -> BGR when using this VGGM model?
@ntuyt Yes you do: https://github.com/Cadene/pretrained-models.pytorch/blob/master/pretrainedmodels/models/vggm.py#L14
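A minimal sketch of the RGB -> BGR flip on a (3, H, W) tensor, e.g. applied after the transforms above; the helper name is mine, not part of the library.

```python
import torch

def rgb_to_bgr(x):
    """Reverse the channel dimension of a (3, H, W) tensor: RGB -> BGR."""
    return x[[2, 1, 0], :, :]

# Example with a dummy 221x221 image already scaled to [0, 255]:
img = torch.rand(3, 221, 221) * 255
img_bgr = rgb_to_bgr(img)
```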
Hi, I have another question. The input size of the original VGGM is 224x224, but in your code the input size is 221x221. Does this affect performance? Also, I wonder which model the provided pretrained weights were converted from?