thomas-beznik
Hello! I'm having a similar issue... I followed your suggestion and was able to add the metaheader, and thus passed this check in the `readFile` function. Unfortunately, it crashes at...
Thank you for your answer!

> I don't think the device used by the PyTorch export has any relevance to how ORT runs the model, so I wouldn't worry about...
> Since you have a Conv-heavy fp16 model and a card that supports tensor core operations, can you try this simple one-line update to your script:
>
> https://onnxruntime.ai/docs/performance/tune-performance.html#convolution-heavy-models-and-the-cuda-ep....
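For context, the suggestion on the linked tuning page amounts to passing a cuDNN provider option when creating the inference session. A minimal sketch, assuming `onnxruntime-gpu` is installed; `model.onnx` is a placeholder path, and the session construction is left commented so the snippet stands alone:

```python
# Hedged sketch: enable cuDNN's exhaustive convolution-algorithm search,
# which the ORT performance-tuning docs recommend for conv-heavy models.
cuda_options = {"cudnn_conv_use_max_workspace": "1"}
providers = [
    ("CUDAExecutionProvider", cuda_options),
    "CPUExecutionProvider",  # fallback for ops the CUDA EP can't run
]

# With onnxruntime-gpu installed and a real model file, you would create
# the session like this ("model.onnx" is a placeholder):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=providers)
```

The option lets cuDNN use as much scratch memory as it needs when benchmarking convolution algorithms, which typically matters most for fp16 models on tensor-core GPUs.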
> > I get the attached graph as output when running the optimisations. The weird thing is that the optimised model is even slower: I go from 350ms to 690ms...
Hello @wschin ! Thanks for all your suggestions, I'll try them as soon as I can, as I've been getting some errors due to the installation of `ORTModule` (similar to...
> @thomas-beznik, sure thing. If #9754 is the blocker, you probably need to [build ORT from source](https://tomwildenhain-microsoft.github.io/onnxruntime/docs/build/training.html). Note that you need a clean machine to avoid dependency interference for a...
> Could you run `nsys profile` with your model with/without onnxruntime? It was easy for me to identify which part is the performance bottleneck when I have a profiling result. For...
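A minimal sketch of such a profiling run, assuming Nsight Systems is installed; `run_model.py` is a placeholder name for the inference script:

```shell
# Capture a timeline of one inference run (with ORT, then again without,
# to compare); "ort_report" is a placeholder output name.
nsys profile -o ort_report python run_model.py

# Summarize kernel and API timings from the captured report.
nsys stats ort_report.nsys-rep
```

Comparing the per-kernel summaries from the two runs usually makes the bottleneck obvious.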
Hello, thank you for your answer! Can you confirm that it is indeed possible to use this class with itk-js? Could you also give me a lead on how this...
Thank you for this explanation! I will experiment with this and come back to you.
I was able to include the measurements in the screenshot by changing the `generateImage()` function from the ScreenshotDialog to this:

```js
function generateImage() {
  const img = new Image();
  const...
```