Cyrill Küttel
> > Loading model...
> > Traceback (most recent call last):
> >   File "/Users/le/Downloads/MobileSAM-cmarschner-convert/./scripts/convert_pytorch_mobile.py", line 40, in <module>
> >     embedding_model_ts = torch.jit.script(
> >                          ^^^^^^^^^^^^^^^^^
> > .......
> >   File "/usr/local/lib/python3.11/site-packages/torch/jit/frontend.py", line 359, in build_param_list
> >     raise NotSupportedError(ctx_range, _vararg_kwarg_err)
> > ...
I was able to implement it in C++. I decided to share my project with the community: [Libtorch-MobileSAM-Example](https://github.com/cyrillkuettel/Libtorch-MobileSAM-Example/tree/master).
> > does this mean, if we have a different `orig_im_size`, we would have to re-export the model?
> >
> > ```python
> > "orig_im_size": torch.tensor([1500, 2250], dtype=torch.float),
> > ```
> > ...
I'm glad you find it useful. I went through a lot of pain creating these 😅. Link to the models: [example-app/models/](https://github.com/cyrillkuettel/Libtorch-MobileSAM-Example/tree/master/example-app/models/)
Here is the solution for exporting to TorchScript: https://github.com/ChaoningZhang/MobileSAM/pull/112/files
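For context, a minimal sketch of the general approach (not the exact code from that PR): tracing the MobileSAM image encoder with `torch.jit.trace` sidesteps the `NotSupportedError` that `torch.jit.script` raises on `*args`/`**kwargs` signatures. The `"vit_t"` key, checkpoint path, and input resolution below are assumptions based on the MobileSAM repo; adjust them for your setup.

```python
import torch
from mobile_sam import sam_model_registry  # assumes the MobileSAM package is installed

# Load the TinyViT-based MobileSAM model (checkpoint path is a placeholder).
model = sam_model_registry["vit_t"](checkpoint="./weights/mobile_sam.pt")
model.eval()

# Trace only the image encoder with a dummy input at SAM's expected resolution.
example_input = torch.randn(1, 3, 1024, 1024)
with torch.no_grad():
    traced_encoder = torch.jit.trace(model.image_encoder, example_input)

# Save the TorchScript module for use from LibTorch / PyTorch Mobile.
traced_encoder.save("mobile_sam_image_encoder.pt")
```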
I have written https://github.com/cyrillkuettel/Libtorch-MobileSAM-Example/ based on PyTorch Mobile (TorchScript). It is a minimal example, though I might extend it in the future if there is interest.
I think you have to go the manual route: find the minimum and maximum x and y coordinates of the non-zero points in the mask, as in the sketch below.
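A minimal sketch of that manual route, assuming the mask is a 2D NumPy array (boolean or 0/1) of shape `(H, W)`:

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray):
    """Return (x_min, y_min, x_max, y_max) of the non-zero region of a 2D mask,
    or None if the mask is empty."""
    ys, xs = np.nonzero(mask)  # row (y) and column (x) indices of non-zero pixels
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The same idea works on a torch tensor via `torch.nonzero` followed by `min()`/`max()` per column.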
@JisuHann This might be helpful https://github.com/facebookresearch/segment-anything/issues/16#issuecomment-1500016126
Can you share the `.ptl` file for the PyTorch Lite Interpreter? I want to try this model out in my app.
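For reference, a `.ptl` file is typically produced from an existing TorchScript module roughly like this (a sketch; the file names are placeholders, not files from this repo):

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# "model.pt" stands in for an already-exported TorchScript module.
scripted = torch.jit.load("model.pt")
scripted.eval()

# Optimize for mobile and save in the Lite Interpreter format (.ptl).
optimized = optimize_for_mobile(scripted)
optimized._save_for_lite_interpreter("model.ptl")
```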
After hours of searching, this fixed it for me (I still have no idea why):

```
pip install --upgrade mypy && pip install --upgrade --force-reinstall mypy_zope
```

Currently I'm using...