Andrea Bonvini

Results: 15 comments by Andrea Bonvini

The answer is: I can't install Torch-TensorRT because Windows isn't supported. However, I can't even optimize my model by running the NGC Docker image from WSL2 (as explained in #856)...

Hi @narendasan, thanks for the response. Yes, exactly: I'd like to build the Docker image and optimize a PyTorch model on a Windows 10 notebook with either a laptop NVIDIA...

Unfortunately, I'm facing the same problem even using TensorRT 8.2.5.1.

Ok, I tried changing my `CMakeLists.txt` file to:

```cmake
cmake_minimum_required(VERSION 3.8)
project(example-app)
find_package(Torch REQUIRED)
find_package(torchtrt REQUIRED)
add_executable(example-app create_trt_module.cpp)
target_link_libraries(example-app PRIVATE torch "-Wl,--no-as-needed" torchtrt_runtime "-Wl,--no.as-needed")
```

And it fails...
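One thing that stands out in the snippet above: the second linker flag is written `--no.as-needed` (a dot instead of a hyphen), which GNU ld would not recognize. A minimal corrected sketch, assuming Torch-TensorRT's install tree is discoverable via `CMAKE_PREFIX_PATH`, might look like:

```cmake
cmake_minimum_required(VERSION 3.8)
project(example-app)

find_package(Torch REQUIRED)
find_package(torchtrt REQUIRED)

add_executable(example-app create_trt_module.cpp)

# --no-as-needed forces the linker to keep torchtrt_runtime even though no
# symbol from it is referenced directly (its registration happens via static
# initializers). Both occurrences must be spelled with hyphens.
target_link_libraries(example-app PRIVATE
  torch
  "-Wl,--no-as-needed" torchtrt_runtime "-Wl,--no-as-needed")
```

Note that `-Wl,` flags apply to GNU-style linkers; an MSVC build would need a different mechanism.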

Oh right, in Visual Studio I added the install option as a build command argument in the CMakeSettings for x64-Release, and it built an install directory with everything needed by CMake...

As an additional note, I was able to optimize the model on the same PC by using WSL. Then I tried to create a super trivial executable that just loaded...

Update: by catching the exception as a `torch::jit::ErrorReport`:

```cpp
try {
  model = torch::jit::load(model_path, device);
  model.eval();
  std::cout
```

Update: By manually loading the DLLs, the script does work! This of course isn't a proper solution and should be properly addressed in #1058 (maybe it would be beneficial to...

Hi @noman-anjum-retro, can you share your CMakeLists.txt, your code, and the linking errors you're receiving?