pranavm-nvidia

29 comments by pranavm-nvidia

Could you explain why you need to modify the ONNX file? You should be able to do, e.g., `--trt-outputs ...` and similarly for `--onnx-outputs`.
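
For instance, a hedged sketch (with `model.onnx` as a placeholder) of a layer-wise comparison that marks every tensor as an output without editing the ONNX file:

```bash
# Compare ONNX-Runtime and TensorRT layer by layer by marking every
# tensor as an output on both sides, without touching the ONNX file itself.
polygraphy run model.onnx --onnxrt --trt \
    --onnx-outputs mark all \
    --trt-outputs mark all
```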

It allows you to resume from wherever `polygraphy debug` left off. For example, you can do `polygraphy debug reduce --load-debug-replay polygraphy_debug_replay.json ...` and it will skip ahead to the last iteration...

That's a little tricky to do because the intended usage of `debug reduce` was to iteratively fix bugs. That is, you would run `debug reduce` to create a minimal reproducer,...
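
A hedged sketch of that first step, assuming the default intermediate file name `polygraphy_debug.onnx` and using a TensorRT run as the `--check` command (both are placeholders for whatever reproduces your failure):

```bash
# Iteratively bisect the ONNX model, checking each candidate subgraph with
# TensorRT, and write out the smallest subgraph that still fails.
polygraphy debug reduce model.onnx -o reduced.onnx \
    --check polygraphy run polygraphy_debug.onnx --trt
```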

The outputs of the engine can't be changed once it's built, so you'd need to mark whichever outputs you need while building. After that, you could do:
```
polygraphy run ...
```
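
A hedged sketch of the full flow, assuming you want every tensor available for comparison (file names are placeholders):

```bash
# Build the engine with every tensor marked as an output, and save it.
polygraphy run model.onnx --trt --trt-outputs mark all --save-engine model.engine

# Later, run the prebuilt engine; only the outputs that were marked at
# build time can be retrieved and compared.
polygraphy run model.engine --trt --model-type engine
```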

Yes, exactly

You'd need to mark the outputs you want to compare when you build the engine. As I mentioned, there's no way to retrieve values for tensors which weren't marked as...

You can load it like so:
```py
from polygraphy.json import load_json

inputs = load_json("onnx_inputs.json")
```
Then, `inputs["array"]` should be a NumPy array. Another option is to use:
```bash
polygraphy inspect ...
```
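
The `inspect` command above is cut off; assuming the file was written by Polygraphy, the relevant subtool is presumably `inspect data`, roughly:

```bash
# Print a summary (names, shapes, dtypes) of the arrays stored in the
# Polygraphy-saved JSON file.
polygraphy inspect data onnx_inputs.json
```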

@azad96 Here's a short Python-based example I had written a while ago. The same idea should apply with the C++ API:
```py
#!/usr/bin/env python3
# Generation Command: polygraphy template trt-network ...
```
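
For context, the truncated generation command in that header is the `template trt-network` subtool; a hedged sketch of how it is typically invoked (the output file name is a placeholder):

```bash
# Generate a skeleton Python script that loads/defines a TensorRT network,
# starting from an existing ONNX model, for manual editing.
polygraphy template trt-network model.onnx -o define_network.py
```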

Which version of TensorRT is this? Is it possible you have a mismatch between the headers and libraries? The first error in your screenshot suggests that TRT doesn't recognize the...

Could be an issue with CUDA/cuDNN installation in WSL. Do you see the same behavior if you install the Windows packages instead?