Todd
Still looking for an update. Something is definitely wonky with the LSTM converter. I had to go back to the multi-backend version of Keras with TensorFlow 1.14 to get...
@gustavla > Are you prioritizing low latency, high accuracy, short training time, or a small model size? In your experience, why is SSD or Faster R-CNN the answer to this?...
Any update on this? It's basically a showstopper at the moment, and I can't find a workaround.
Thanks, will try this out in the next few days.
Sorry, I haven't had a chance to try this out yet. I've been swamped, but I'm working my way to it. @rogday that must be an issue with the ONNX converter, then. The model only...
I might be interested, but I probably won't be free for a few weeks to a month due to other priorities.
Any update on this? I am running into the same issue. LoRA runs correctly with transformers, but when I convert to llama.cpp it gives me nonsense output.