
🐛 [Bug] RuntimeError 'bad optional access' on quantization notebook

Open · ivan94fi opened this issue 3 years ago · 0 comments

Bug Description

There is an error when running the official quantization notebook from this repository inside the official NVIDIA PyTorch Docker image from NGC, release 22.05.

To Reproduce

Steps to reproduce the behavior:

  1. Install the missing components in the container (tensorboard, tqdm, torchvision, pytorch-quantization).
  2. Start the Jupyter notebook and execute its cells, including the training step.
  3. Try to compile the scripted model to TensorRT with the following code (this code is already part of the notebook, I did not add it; the full cell, with imports, is shown right after the snippet):
compile_spec = {"inputs": [torch_tensorrt.Input([16, 3, 32, 32])],
                "enabled_precisions": torch.int8,
                }
trt_mod = torch_tensorrt.compile(qat_model, **compile_spec)
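
For completeness, this is the full failing cell including the preceding load of the scripted QAT model; the imports are mine, the rest matches the traceback below:

import torch
import torch_tensorrt

# Load the TorchScript VGG16 QAT model exported by the earlier notebook cells
qat_model = torch.jit.load("trained_vgg16_qat.jit.pt").eval()

compile_spec = {"inputs": [torch_tensorrt.Input([16, 3, 32, 32])],
                "enabled_precisions": torch.int8,
                }
trt_mod = torch_tensorrt.compile(qat_model, **compile_spec)  # <-- fails here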

The torch_tensorrt.compile call fails with this stacktrace:

RuntimeError                              Traceback (most recent call last)
Input In [18], in <cell line: 6>()
      1 qat_model = torch.jit.load("trained_vgg16_qat.jit.pt").eval()
      3 compile_spec = {"inputs": [torch_tensorrt.Input([16, 3, 32, 32])],
      4                 "enabled_precisions": torch.int8,
      5                 }
----> 6 trt_mod = torch_tensorrt.compile(qat_model, **compile_spec)
      8 test_loss, test_acc = test(trt_mod, testing_dataloader, crit, 0)
      9 print("VGG QAT accuracy using TensorRT: {:.2f}%".format(100 * test_acc))

File /usr/local/lib/python3.8/dist-packages/torch_tensorrt/_compile.py:109, in compile(module, ir, inputs, enabled_precisions, **kwargs)
    104         logging.log(
    105             logging.Level.Info,
    106             "Module was provided as a torch.nn.Module, trying to script the module with torch.jit.script. In the event of a failure please preconvert your module to TorchScript"
    107         )
    108         ts_mod = torch.jit.script(module)
--> 109     return torch_tensorrt.ts.compile(ts_mod, inputs=inputs, enabled_precisions=enabled_precisions, **kwargs)
    110 elif target_ir == _IRType.fx:
    111     from torch_tensorrt.fx.tracer.acc_tracer import acc_tracer

File /usr/local/lib/python3.8/dist-packages/torch_tensorrt/ts/_compiler.py:113, in compile(module, inputs, device, disable_tf32, sparse_weights, enabled_precisions, refit, debug, capability, num_min_timing_iters, num_avg_timing_iters, workspace_size, calibrator, truncate_long_and_double, require_full_compilation, min_block_size, torch_executed_ops, torch_executed_modules)
     87     raise ValueError(
     88         "require_full_compilation is enabled however the list of modules and ops to run in torch is not empty. Found: torch_executed_ops: "
     89         + torch_executed_ops + ", torch_executed_modules: " + torch_executed_modules)
     91 spec = {
     92     "inputs": inputs,
     93     "device": device,
   (...)
    110     }
    111 }
--> 113 compiled_cpp_mod = _C.compile_graph(module._c, _parse_compile_spec(spec))
    114 compiled_module = torch.jit._recursive.wrap_cpp_module(compiled_cpp_mod)
    115 return compiled_module

RuntimeError: bad optional access
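
In case it helps narrow this down, I could also try a more explicit compile spec, with the precisions given as a set and the device spelled out. This is only a sketch on my side (not part of the notebook), and I have not verified whether it avoids the error:

import torch
import torch_tensorrt

# Hypothetical variant of the notebook's spec: explicit input dtype,
# enabled_precisions as a set, and an explicit target device.
compile_spec = {
    "inputs": [torch_tensorrt.Input([16, 3, 32, 32], dtype=torch.float32)],
    "enabled_precisions": {torch.float32, torch.int8},
    "device": torch_tensorrt.Device(gpu_id=0),
}
trt_mod = torch_tensorrt.compile(qat_model, **compile_spec)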

Expected behavior

The VGG16 scripted model should be converted to TensorRT as per the notebook instructions.
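
Concretely, after a successful compile I would expect a batch matching the declared input shape to run through the TensorRT module, roughly like this (a sketch; it assumes CUDA is available and that the notebook's VGG16 head has 10 outputs, which is my reading of the CIFAR10 setup):

import torch

# Expected behavior after a successful conversion
x = torch.randn(16, 3, 32, 32, device="cuda")
out = trt_mod(x)
print(out.shape)  # expected something like torch.Size([16, 10])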

Environment

Build information about Torch-TensorRT can be found by turning on debug messages
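
For reference, this is how I would raise the log level and dump the build information (a sketch; set_reportable_log_level and get_build_info are the APIs I believe torch_tensorrt exposes for this):

import torch_tensorrt

# Raise verbosity so build details are printed while compiling
torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Debug)

# Print the Torch-TensorRT build information directly
print(torch_tensorrt.get_build_info())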

NOTE: torch_tensorrt and PyTorch at the versions below come preinstalled in the Docker image

  • Torch-TensorRT Version (e.g. 1.0.0): 1.2.0a0+666a2637
  • PyTorch Version (e.g. 1.0): 1.11.0+cu102
  • CPU Architecture: amd64
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, libtorch, source): preinstalled in the Nvidia Docker container from NGC
  • Build command you used (if compiling from source): --
  • Are you using local sources or building from archives: --
  • Python version: 3.8.10
  • CUDA version: 11.0
  • GPU models and configuration: GeForce RTX 2080 Ti
  • Any other relevant information:

ivan94fi · Jun 20 '22 07:06