
Loading model from torch.hub does not work

Open batrlatom opened this issue 4 years ago • 3 comments

Hi, could you please check this error? It could be a problem with the hub server.

    model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet50', pretrained=True)
  File "/opt/conda/lib/python3.8/site-packages/torch/hub.py", line 362, in load
    repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose)
  File "/opt/conda/lib/python3.8/site-packages/torch/hub.py", line 162, in _get_cache_or_reload
    _validate_not_a_forked_repo(repo_owner, repo_name, branch)
  File "/opt/conda/lib/python3.8/site-packages/torch/hub.py", line 124, in _validate_not_a_forked_repo
    with urlopen(url) as r:
  File "/opt/conda/lib/python3.8/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/opt/conda/lib/python3.8/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/opt/conda/lib/python3.8/urllib/request.py", line 640, in http_response
    response = self.parent.error(
  File "/opt/conda/lib/python3.8/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/opt/conda/lib/python3.8/urllib/request.py", line 502, in _call_chain
    result = func(*args)
  File "/opt/conda/lib/python3.8/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: rate limit exceeded
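For context: the 403 is raised inside torch.hub's `_validate_not_a_forked_repo`, which queries the GitHub API before downloading the repo; unauthenticated GitHub API requests are rate-limited per IP, so the call can fail even though the model weights themselves are hosted elsewhere. As a minimal sketch (not part of the original report), a retry with exponential backoff around the hub call can ride out a transient rate limit; `retry_on_rate_limit` is a hypothetical helper, and the commented usage assumes torch is installed.

```python
import time
from urllib.error import HTTPError

def retry_on_rate_limit(fn, attempts=4, base_delay=2.0):
    """Call fn(); on HTTP 403 (rate limit), wait and retry with backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except HTTPError as err:
            # Re-raise anything that is not a rate-limit response,
            # and give up after the final attempt.
            if err.code != 403 or attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Hypothetical usage, wrapping the failing call from the traceback:
# model = retry_on_rate_limit(
#     lambda: torch.hub.load('pytorch/vision:v0.9.0',
#                            'deeplabv3_resnet50', pretrained=True))
```

This does not avoid the rate limit, it only waits it out; if the quota is exhausted for the hour, every retry will still fail.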

The code I use is:

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torch.hub.load('pytorch/vision:v0.9.0', 'deeplabv3_resnet50', pretrained=True)
model.eval()

scripted_module = torch.jit.script(model)
optimized_scripted_module = optimize_for_mobile(scripted_module)

# Export full jit version model (not compatible with lite interpreter)
scripted_module.save("deeplabv3_scripted.pt")
# Export lite interpreter version model (compatible with lite interpreter)
scripted_module._save_for_lite_interpreter("deeplabv3_scripted.ptl")
# Using the optimized lite interpreter model makes inference about 60% faster than
# the non-optimized lite interpreter model, which is about 6% faster than the
# non-optimized full JIT model
optimized_scripted_module._save_for_lite_interpreter("deeplabv3_scripted_optimized.ptl")

batrlatom avatar Jul 03 '21 12:07 batrlatom

I have the same issue...

Josonlchui avatar Jul 07 '21 04:07 Josonlchui

I just tested the script above with torch 1.9.0 and torchvision 0.10.0 installed on a Mac, and it works. What are your installed versions of torch and torchvision, and which OS are you on? Do they look the same as:

pip list|grep torch
torch                              1.9.0
torchvision                        0.10.0

jeffxtang avatar Jul 09 '21 20:07 jeffxtang

I used the Docker image nvcr.io/nvidia/pytorch:21.06-py3 for this. The versions are:

torch                              1.9.0a0+c3d40fd
torchtext                          0.10.0a0
torchvision                        0.10.0a0

batrlatom avatar Jul 12 '21 10:07 batrlatom