Tal Wald

Results: 4 comments by Tal Wald

@AngledLuffa We saw that if you run the models directly on ONNX Runtime or TensorRT, you get a 2x-4x improvement in inference runtime - which is critical for...
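
A minimal sketch of what "running the models directly on ONNX Runtime" can look like, assuming an already-exported model file (the `model.onnx` path, input shape, and provider choice below are illustrative, not from the original project):

```python
import numpy as np
import onnxruntime as ort

# Load the exported model into an ONNX Runtime session.
# Providers control the backend (CPU here; CUDA/TensorRT providers exist too).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Build a dummy batch matching the model's expected input (shape assumed here).
input_name = session.get_inputs()[0].name
dummy_batch = np.random.rand(1, 128).astype(np.float32)

# Run inference; the result is a list of numpy arrays, one per model output.
outputs = session.run(None, {input_name: dummy_batch})
print(outputs[0].shape)
```

Timing this loop against the original framework's forward pass is one way to measure the kind of speedup described above.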

@michaelfeil I agree that the issue you mentioned is a blocker - however, by removing the dependency on torch, you would enable people to choose which framework to use without...
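
One common way to remove a hard torch dependency while still supporting torch users is to make the import optional. This is only a hedged sketch of that pattern; the function name `to_numpy` is illustrative and not taken from any specific library:

```python
from typing import Any
import numpy as np

try:
    import torch  # optional: only needed if callers pass torch tensors
except ImportError:
    torch = None

def to_numpy(x: Any) -> np.ndarray:
    """Convert a framework-specific tensor (or array-like) to a plain numpy array."""
    if torch is not None and isinstance(x, torch.Tensor):
        return x.detach().cpu().numpy()
    return np.asarray(x)
```

With this kind of boundary in place, the core library can operate on numpy arrays only, and torch (or any other framework) becomes a choice made by the caller.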

I'll add that I can't find any official Docker files for use on Docker Hub ☹