kinference

Running ONNX models in vanilla Kotlin

Results: 7 kinference issues

Hello - I am trying to load a MobileNet model using `KIEngine.loadModel(model)`. I get an Unsupported Operation error, and I can see that QuantizeLinear is not supported yet by KIOperatorFactory. May...
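For context on the unsupported operator: ONNX QuantizeLinear has simple reference semantics, y = saturate(round(x / scale) + zero_point), saturating to uint8 by default. A minimal numpy sketch of what the spec defines (the helper name is ours, not a kinference API):

```python
import numpy as np

def quantize_linear(x, y_scale, y_zero_point=np.uint8(0)):
    """Reference semantics of ONNX QuantizeLinear (uint8 case):
    y = saturate(round(x / y_scale) + y_zero_point)."""
    # np.rint rounds half to even, matching the ONNX spec
    q = np.rint(x / y_scale) + np.int32(y_zero_point)
    return np.clip(q, 0, 255).astype(np.uint8)

x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
print(quantize_linear(x, np.float32(0.01), np.uint8(128)))
```

With scale 0.01 and zero point 128, the value 2.0 maps to round(200) + 128 = 328, which saturates to 255.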

Needs docs around how to use, features, performance, limitations, and comparisons with other products [such as Tribuo from Oracle]

help wanted

Thanks for the initiative! It seems to look for a specific **patched** version of onnxruntime: `Could not find com.microsoft.onnxruntime:onnxruntime:1.13.1.patched`. Specifying an explicit version will find it, but I'm not sure...
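One way to apply the workaround the reporter describes is to force a published version of the artifact in the build script. A sketch in Gradle Kotlin DSL; the coordinates come from the error message above, and the workaround itself is the reporter's suggestion, not a confirmed fix:

```kotlin
configurations.all {
    resolutionStrategy {
        // Force the published onnxruntime artifact in place of the
        // unresolved "1.13.1.patched" version requested by default
        force("com.microsoft.onnxruntime:onnxruntime:1.13.1")
    }
}
```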

I have been studying the Python demo code for llama.onnx, found here: https://github.com/tpoisonooo/llama.onnx/blob/main/demo_llama.py#L184 I have looked through all the examples we currently have for kinference, but none of them does tokenisation...

Per https://github.com/onnx/onnx/blob/main/docs/Changelog.md#Softmax-13 Context: these changes allow me to run OpenCLIP models with kinference. Potentially unaccounted differences: * I am low-confidence, but I think the difference in the Axis attribute...
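To make the Softmax-13 change concrete: before opset 13, the input was coerced to 2D at `axis` and softmax ran over the flattened trailing dimensions; from opset 13 on, softmax normalizes along the single given axis. A numpy sketch of both behaviours (helper names are ours, for illustration only):

```python
import numpy as np

def softmax_opset13(x, axis=-1):
    # Opset 13: normalize along one axis only
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def softmax_opset11(x, axis=1):
    # Opset <= 11: coerce to 2D at `axis`, then softmax over the
    # flattened trailing dimensions jointly
    shape = x.shape
    x2 = x.reshape(int(np.prod(shape[:axis], dtype=np.int64)), -1)
    return softmax_opset13(x2, axis=-1).reshape(shape)

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
a = softmax_opset13(x, axis=1)  # each length-3 slice sums to 1
b = softmax_opset11(x, axis=1)  # each 3*4 = 12-element block sums to 1
print(np.allclose(a, b))
```

For rank-2 inputs the two agree; for higher ranks (as here) the results differ, which is exactly the kind of discrepancy the issue describes.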

Primary use case: inference of ML models on end-user devices, for example on hardware with neural coprocessors like the one in Apple's M1.