
Example which uses a tokeniser?

Open · hakanai opened this issue · 3 comments

I have been studying the Python demo code for llama.onnx, found here: https://github.com/tpoisonooo/llama.onnx/blob/main/demo_llama.py#L184

I have looked through all the examples we currently have for kinference, but nothing is doing tokenisation yet. You would sort of expect an example like POSTagger to be doing tokenisation, but it seems to skip the hard part and load the end result directly in as the input. (Unless I'm misreading the code?)

How do I go from a string prompt, into an ONNXData object that would be accepted by this model?

— hakanai, Dec 12 '23

You are correct: KInference expects you to do all the input data preprocessing (e.g. tokenization) yourself, as it is an inference-only library. So in order to get ONNXData, you have to implement your own tokenizer that converts the input string to an NDArray, and then call .asTensor(name) on it. Unfortunately, we don't have any plans to add built-in tokenization yet.
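For illustration, a minimal sketch of that flow might look like the snippet below. The whitespace tokenizer and the vocabulary map are purely hypothetical stand-ins for a real tokenizer, and the NDArray factory, the package paths, and the exact asTensor signature are assumptions based on typical KInference usage, so check them against the version you depend on:

```kotlin
import io.kinference.ndarray.arrays.LongNDArray
import io.kinference.core.data.tensor.asTensor

// Hypothetical toy vocabulary; a real model needs the tokenizer and vocab it was trained with.
val vocab = mapOf("hello" to 1L, "world" to 2L)
const val UNK_ID = 0L

// Naive whitespace tokenizer, just to show the shape of the data flow.
fun tokenize(prompt: String): LongArray =
    prompt.lowercase()
        .split(Regex("\\s+"))
        .map { vocab[it] ?: UNK_ID }
        .toLongArray()

// Turn a prompt string into ONNX tensor data under the model's input name.
fun promptToTensor(prompt: String, inputName: String) = run {
    val ids = tokenize(prompt)
    // Shape [1, seqLen]: a batch with a single sequence of token ids.
    // The NDArray constructor signature is assumed; check the factories in your KInference version.
    val array = LongNDArray(shape = intArrayOf(1, ids.size)) { i -> ids[i] }
    // asTensor(name) wraps the NDArray as ONNX data under the model's declared input name.
    array.asTensor(inputName)
}
```

For LLM-style models the token-id input is typically an int64 tensor of shape [batch, sequence_length], which is why the sketch uses LongNDArray with a leading batch dimension of 1.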

— AnastasiaTuchina, Dec 13 '23

Tokenisation is something I can do; my biggest stumbling block is not knowing the structure of the data I have to provide. If I provide nothing, it throws an error saying that "input" is missing, so that's currently the best hint I have to work with. It would be super nice if it threw a detailed error about the shape of the data it expected to be fed.

— hakanai, Dec 13 '23

We don't have shape analyzers in KInference, but I can suggest using the Netron app. It shows the correct input names and shapes when possible.
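Building on that, and assuming Netron reports a single int64 input named "input" with shape [batch, sequence], a hedged sketch of loading the model and feeding it the tensor from the earlier snippet could look like this. KIEngine.loadModel and predict follow the general pattern from the KInference README, but the exact signatures and return types are assumptions that may differ across versions and engines:

```kotlin
import io.kinference.core.KIEngine
import java.io.File

suspend fun runPrompt(modelPath: String, prompt: String) {
    // Load the ONNX model with the core (JVM) engine; the loadModel overload taking
    // raw bytes is assumed, so adjust to the engine artifact you actually use.
    val model = KIEngine.loadModel(File(modelPath).readBytes())

    // "input" is whatever name Netron (or the missing-input error) reports for the graph input.
    val inputTensor = promptToTensor(prompt, inputName = "input")

    // predict takes the list of named input tensors and returns the outputs keyed by name.
    val outputs = model.predict(listOf(inputTensor))
    println(outputs.keys)
}
```

The key point is that the tensor's name must match the graph input name exactly, which is precisely the piece of information Netron gives you.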

— AnastasiaTuchina, Dec 13 '23