jatinchowdhury18
@fcaspe Awesome! I'm going to go ahead and merge this PR.
Yeah, thanks for bringing this up; I hadn't considered the inter-operation of the scratch-memory and thread-pool extensions. I think the simplest solution would be to have the plugin request the...
> The scratch is a thread local pointer, so if you retrieve it from the process call, you'll get a pointer that you can share with all the jobs. If...
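Concretely, that sharing pattern can be sketched roughly as below. The host-side names (`hostAccessScratch()`, `runJobsOnThreadPool()`) are stand-ins rather than the literal CLAP extension symbols; the point is only the ordering: fetch the scratch pointer once on the audio thread inside `process()`, then hand that one pointer to every job instead of looking it up from each worker thread.

```cpp
// Minimal sketch of sharing scratch memory across thread-pool jobs.
// hostAccessScratch() and runJobsOnThreadPool() are assumed stand-ins for the
// host-provided scratch-memory and thread-pool facilities, not real CLAP symbols.
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// ---- stand-ins for host-provided facilities (assumptions) ----
static std::vector<std::byte> hostScratchStorage (64 * 1024);
static void* hostAccessScratch() { return hostScratchStorage.data(); }

static void runJobsOnThreadPool (uint32_t numJobs, void (*job) (uint32_t))
{
    std::vector<std::thread> workers;
    for (uint32_t i = 0; i < numJobs; ++i)
        workers.emplace_back (job, i);
    for (auto& w : workers)
        w.join();
}

// ---- plugin side ----
static std::byte* sharedScratch = nullptr; // pointer captured on the audio thread
static size_t bytesPerJob = 4096;          // assumed per-job slice size

static void jobCallback (uint32_t jobIndex)
{
    // Each job works in its own, non-overlapping slice of the shared scratch block.
    std::byte* slice = sharedScratch + jobIndex * bytesPerJob;
    // ... per-voice / per-channel DSP would use `slice` as working memory ...
    (void) slice;
}

void process (uint32_t numJobs)
{
    // Retrieve the scratch pointer once, from the process call; because the host
    // hands out a thread-local pointer, looking it up inside each job would give
    // every worker a different (possibly null) pointer.
    sharedScratch = static_cast<std::byte*> (hostAccessScratch());
    runJobsOnThreadPool (numJobs, jobCallback);
}

int main()
{
    process (4); // e.g. four parallel jobs sharing the one scratch block
}
```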
Thanks for making this PR! The Python changes look good... I'm debating whether to merge this PR before or after we add some tests for it. I'll have a look...
@victorzappi I've gone ahead and added tests for the output of the PyTorch exporter (see [here](https://github.com/jatinchowdhury18/RTNeural/commit/0fdb589609a562c50432cb4348028ac9a8576d9c)). It's working well for the most part, but would it be possible to add...
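For reference, a check of the exporter's output can be sketched roughly as below (this is not the test code from the linked commit; the file names and tolerance are assumptions): load the JSON produced by the PyTorch exporter into RTNeural, run the same input sequence through it, and compare against outputs saved from the PyTorch model.

```cpp
// Rough sketch of verifying exporter output against PyTorch reference data.
// File names and tolerance are hypothetical; the RTNeural calls (parseJson,
// reset, forward) are the library's dynamic-model API.
#include <RTNeural/RTNeural.h>
#include <cassert>
#include <cmath>
#include <fstream>

int main()
{
    // Model exported from PyTorch by the Python utilities (hypothetical file name).
    std::ifstream jsonStream ("exported_model.json", std::ifstream::binary);
    auto model = RTNeural::json_parser::parseJson<float> (jsonStream);
    model->reset();

    // Inputs and reference outputs saved from the PyTorch side (hypothetical files),
    // one value per line for a 1-in / 1-out model.
    std::ifstream inputFile ("test_inputs.txt");
    std::ifstream refFile ("pytorch_outputs.txt");

    float input = 0.0f, expected = 0.0f;
    while (inputFile >> input && refFile >> expected)
    {
        const float actual = model->forward (&input);
        assert (std::abs (actual - expected) < 1.0e-5f); // assumed tolerance
    }

    return 0;
}
```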
That's correct; at the moment RTNeural does not have support for that layer type. The reasoning is more just that I haven't yet had a need for it, and haven't...
The [`RTNeural-compare`](https://github.com/jatinchowdhury18/RTNeural-compare) repository has some performance comparisons between RTNeural, libtorch, TensorFlow Lite, and the ONNX runtime, mostly broken down by layer type and size. Comparisons with additional inference engines would...