29 comments by dav-ell

Thanks very much for your work on this @FidgetySo. Does this PR work yet? How much more is left to go?

Any progress on this? If exporting quantized PyTorch models to ONNX is not supported, is there a preferred route, e.g. TensorRT?
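For reference, this is a minimal sketch of the kind of export I'm asking about. The toy model, shapes, and opset are made up for illustration; the open question is whether the `torch.onnx.export` call on a quantized model succeeds at all.

```python
import io

import torch
import torch.nn as nn

# Hypothetical toy model standing in for the real network.
model = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()

# Dynamic quantization: Linear weights become int8, activations stay float.
qmodel = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Attempt the ONNX export; whether this works depends on the torch
# version and opset, which is exactly what's unclear here.
try:
    buf = io.BytesIO()
    torch.onnx.export(qmodel, torch.randn(1, 16), buf, opset_version=13)
    print("export succeeded")
except Exception as e:
    print(f"export failed: {e}")
```

The quantized model itself still runs in eager mode either way, so the quantization step can be verified independently of the export.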

Upsample is causing issues for me too; documenting here: https://github.com/NVIDIA/Torch-TensorRT/issues/961

> I've successfully convert it to TensorRT version for inference. Kindly check it here: https://github.com/kongyanye/EfficientDet-TensorRT

Amazing work! Thanks very much for sharing. I noticed, though, that you didn't include `fold_constants=True`...

@zylo117 are you planning on making further commits to this repo, or should someone fork this and apply updates elsewhere?

I'm also looking for a solution to this problem in this repo; so far, small objects are completely ignored for me.

Attempting to make a repro script but stuck on a different issue now...

```python
import torch_tensorrt

torch_tensorrt.logging.set_reportable_log_level(torch_tensorrt.logging.Level.Graph)
print(torch_tensorrt.__version__)

import torch
import torch.nn as nn
from torch.nn import functional as F
...
```

Apologies, I forgot `model.eval()`. After doing that, it just hangs...
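For anyone hitting the same thing: a toy sketch of why forgetting `model.eval()` matters before tracing or compiling (the model here is made up; the point is that dropout and similar layers are stochastic in train mode, so the traced graph doesn't match inference behavior):

```python
import torch
import torch.nn as nn

# Hypothetical toy model; Dropout is stochastic in train mode.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
x = torch.randn(1, 8)

model.train()
y1, y2 = model(x), model(x)
print(torch.equal(y1, y2))  # usually False: each call samples a new dropout mask

model.eval()
y1, y2 = model(x), model(x)
print(torch.equal(y1, y2))  # True: dropout is a no-op in eval mode
```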

Correction: it's a segmentation fault. See attached logs; the repro code is in temp.py.txt. [logs_graph.txt](https://github.com/NVIDIA/Torch-TensorRT/files/8420282/logs_graph.txt) [logs_debug.txt](https://github.com/NVIDIA/Torch-TensorRT/files/8420283/logs_debug.txt) [temp.py.txt](https://github.com/NVIDIA/Torch-TensorRT/files/8420284/temp.py.txt)