Wojtek Turowicz
I don't mind as long as it's easy to set up
I meant the other way round. Load up a TRT model in ML.NET and infer on data.
Model format: TensorRT by NVIDIA
Load it in C# and run inference. This requires C# externs for the TensorRT C runtime.
@hwvs when I do your approach I get an error in C#: `TensorFlow.TFException: Unparseable ConfigProto`. This is how I generate my array:
```
import binascii
import tensorflow as tf
config...
```
OK my problem was that `gpuConfig` was automatically declared as `int[]` instead of `byte[]`.
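For anyone hitting the same `Unparseable ConfigProto` error: a minimal sketch of turning a serialized `ConfigProto` into a C# `byte[]` literal, so the array is explicitly declared as `byte[]` rather than inferred as `int[]`. The placeholder bytes below stand in for the real output of `ConfigProto.SerializeToString()`; substitute your own serialized config.

```python
import binascii

# Placeholder for the real serialized proto, e.g.:
#   config = tf.compat.v1.ConfigProto()
#   config.gpu_options.per_process_gpu_memory_fraction = 0.5
#   config_serialized = config.SerializeToString()
config_serialized = b"\x32\x02\x20\x01"  # hypothetical example bytes

hex_str = binascii.hexlify(config_serialized).decode()

# Emit an explicit byte[] literal; an int[] here is what causes
# TensorFlow to reject the config as unparseable.
body = ", ".join(f"0x{hex_str[i:i+2]}" for i in range(0, len(hex_str), 2))
csharp = f"byte[] gpuConfig = new byte[] {{ {body} }};"
print(csharp)
```

Paste the printed literal into your C# code; the key point is the explicit `byte[]` type on the declaration.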
@rbgreenway this is great!
I can confirm @rbgreenway's solution works well. I wrapped it in a `SetGpuRatio` method.
Here is an on-prem provider I wrote that uses stateful grains as queues: https://github.com/Surveily/Orleans.Streaming.Grains
I just installed kernel 6.2 and the microphone started working.