Resnet50-v1.5 pytorch model

Open corehalt opened this issue 4 years ago • 6 comments

Hi, we can see that there are ONNX (.onnx) and TensorFlow (.pb) versions of the Resnet50-v1.5 model, but a PyTorch version (.pt) seems to be missing from the supported models table, even though the PyTorch framework is listed in that table.

Will there be a PyTorch version of the Resnet50-v1.5 model (int8 and fp32) available soon? Thank you.

@christ1ne

corehalt avatar Feb 10 '21 02:02 corehalt

@christ1ne Also a related question: can we assume that any provided PyTorch fp32 model will already be quantizable, in the sense of being compatible with PyTorch post-training quantization?
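
To make the question concrete, by post-training quantization I mean roughly the eager-mode flow below (a minimal sketch using torchvision's quantizable ResNet-50; the random tensor is only a stand-in for real calibration batches):

```python
import torch
from torchvision.models.quantization import resnet50

# quantize=False returns the float model, but with quantization-friendly
# skip connections (FloatFunctional) so eager-mode PTQ can handle them
model = resnet50(pretrained=True, quantize=False).eval()
model.fuse_model()  # fuse Conv+BN+ReLU blocks before quantization

model.qconfig = torch.quantization.get_default_qconfig("fbgemm")  # x86 backend
torch.quantization.prepare(model, inplace=True)

# calibration: run representative batches through the prepared model
with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))  # stand-in for real ImageNet samples

torch.quantization.convert(model, inplace=True)  # produces the INT8 model
```

A plain torchvision resnet50 does not convert cleanly this way, because its skip connections use tensor addition, which the quantized CPU backend rejects at runtime; that is the kind of incompatibility I am asking about.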

corehalt avatar Feb 10 '21 03:02 corehalt

WG: PyTorch no longer uses ONNX as its internal graph format. We need an updated PyTorch model in this case.

christ1ne avatar Feb 16 '21 17:02 christ1ne

WG: we may add a PyTorch model next week. Please watch the mailing list for updates.

christ1ne avatar Feb 23 '21 17:02 christ1ne

@christ1ne @guschmue Please find here attached the links for the Resnet-50 v1.5 PyTorch models:

Resnet-50 v1.5 FP32 model: https://zenodo.org/record/4588417/
Resnet-50 v1.5 quantized INT8 and calibrated model: https://zenodo.org/record/4589637/

More details in the descriptions of each record uploaded to Zenodo.

corehalt avatar Mar 09 '21 03:03 corehalt

Can you add it to the readme?

guschmue avatar Mar 09 '21 15:03 guschmue

Hi,

I am getting the following error when using the linked Resnet-50 PyTorch model:

./run_local.sh pytorch resnet50 cpu
INFO:main:Namespace(accuracy=False, audit_conf='audit.config', backend='pytorch', cache=0, count=None, data_format=None, dataset='imagenet', dataset_list=None, dataset_path='/mlperf/data/imagenet/', debug=False, find_peak_performance=False, inputs=['image'], max_batchsize=32, max_latency=None, mlperf_conf='../../mlperf.conf', model='mlperf/models/resnet50-19c8e357.pth', model_name='resnet50', output='/mlperf/inference/vision/classification_and_detection/output/pytorch-cpu/resnet50', outputs=['ArgMax:0'], performance_sample_count=None, profile='resnet50-pytorch', qps=None, samples_per_query=None, scenario='SingleStream', threads=96, time=None, user_conf='user.conf')
INFO:root:Failed to import cuda module: No module named 'caffe2.python.caffe2_pybind11_state_gpu'
INFO:root:Failed to import AMD hip module: No module named 'caffe2.python.caffe2_pybind11_state_hip'
WARNING:root:This caffe2 python run does not have GPU support. Will run in CPU only mode.
Traceback (most recent call last):
  File "python/main.py", line 571, in <module>
    main()
  File "python/main.py", line 424, in main
    backend = get_backend(args.backend)
  File "python/main.py", line 253, in get_backend
    from backend_pytorch import BackendPytorch
  File "/mlperf/inference/vision/classification_and_detection/python/backend_pytorch.py", line 10, in <module>
    import caffe2.python.onnx.backend
  File "/miniconda3/envs/mlperf/lib/python3.7/site-packages/caffe2/python/onnx/backend.py", line 37, in <module>
    import onnx.optimizer
ModuleNotFoundError: No module named 'onnx.optimizer'

The optimizer module was removed from ONNX in version 1.9.0 (https://github.com/onnx/onnx/pull/3288).
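
For what it's worth, the removed module lives on as the standalone onnxoptimizer package, so a compatibility import along these lines is sometimes used on newer ONNX releases; this is only a sketch, not something the reference implementation currently does:

```python
# onnx.optimizer was split out of the onnx package in 1.9.0; the standalone
# onnxoptimizer package (pip install onnxoptimizer) replaces it
try:
    from onnx import optimizer  # onnx < 1.9
except ImportError:
    import onnxoptimizer as optimizer  # onnx >= 1.9
```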

I tried using the last ONNX version that still included the optimizers (1.8.1), but got this error:

  File "python/main.py", line 571, in <module>
    main()
  File "python/main.py", line 446, in main
    model = backend.load(args.model, inputs=args.inputs, outputs=args.outputs)
  File "/mlperf/inference/vision/classification_and_detection/python/backend_pytorch.py", line 34, in load
    self.model = onnx.load(model_path)
  File "/miniconda3/envs/mlperf/lib/python3.7/site-packages/onnx/__init__.py", line 119, in load_model
    model = load_model_from_string(s, format=format)
  File "/miniconda3/envs/mlperf/lib/python3.7/site-packages/onnx/__init__.py", line 156, in load_model_from_string
    return _deserialize(s, ModelProto())
  File "/miniconda3/envs/mlperf/lib/python3.7/site-packages/onnx/__init__.py", line 97, in _deserialize
    decoded = cast(Optional[int], proto.ParseFromString(s))
google.protobuf.message.DecodeError: Error parsing message with type 'onnx.ModelProto'
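
The DecodeError is consistent with the file itself: model='mlperf/models/resnet50-19c8e357.pth' in the log above is torchvision's pickled ResNet-50 checkpoint, not an ONNX protobuf, so onnx.load() cannot parse it regardless of the ONNX version. A .pth state dict loads with torch instead (a minimal sketch):

```python
import torch
from torchvision.models import resnet50

# resnet50-19c8e357.pth is a pickled state dict, not an ONNX ModelProto,
# which is why onnx.load() fails on it with a protobuf DecodeError
state_dict = torch.load("resnet50-19c8e357.pth", map_location="cpu")
model = resnet50()
model.load_state_dict(state_dict)
model.eval()
```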

Deschain avatar Jan 10 '22 11:01 Deschain

I have the same question @Deschain. Have you solved the problem?

handicraftsmanthk avatar Oct 27 '22 08:10 handicraftsmanthk

I suppose the reference implementation is not compatible with the resnet50 PyTorch model (fp32). @pgmpablo157321, can you please confirm?

arjunsuresh avatar Nov 19 '22 10:11 arjunsuresh

outdated

mrasquinha-g avatar May 23 '23 10:05 mrasquinha-g