Fschoeller
I'm using TensorRT 7.1.3 on the Jetson Xavier AGX. Would you like the models as ONNX files?
Did you find a solution to this? Mine doesn't even start training within 24 hours.
Unfortunately, that is not the issue; the model should be the deployment name, e.g. `interpreter --model azure/`. It works fine as long as it does not have to run anything. Stuff...
Not sure if it has to do with function-calling support. I tried with GPT-4 Turbo and got the same result.
If you install from GitHub instead of pip, it seems to work.
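For reference, installing a project's latest code from GitHub rather than the PyPI release usually looks like the sketch below. The package name and repository path are assumptions based on the `interpreter` CLI mentioned above (the Open Interpreter project); adjust them if your project differs:

```shell
# Remove the PyPI release first so the GitHub build takes its place
# (assumed package name: open-interpreter)
pip uninstall -y open-interpreter

# Install the current development code directly from GitHub
# (repository path is an assumption; point it at the actual repo)
pip install git+https://github.com/OpenInterpreter/open-interpreter.git
```

This pulls whatever is on the default branch, so it can include fixes that have not yet been published to PyPI, which would explain the difference in behavior.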