Pedro Cuenca

331 comments by Pedro Cuenca

cc @tomaarsen, not sure if this refers to the blog or the documentation.

Would be curious to hear your thoughts @ZachNagengast, and ideas on how to move forward.

I think this was finally handled by @Vaibhavs10 in #1434 (sorry, I had forgotten about this PR when the other one was opened).

Hello, this is Pedro from Hugging Face. Today I've been trying to verify the tool-calling template in use for the Llama 3.2 models. My approach was to...
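The comment above is truncated, so the exact approach isn't shown; as a hedged sketch, one way to inspect how a Llama 3.2 tokenizer renders tool definitions is to pass a tool to `apply_chat_template` in `transformers` and print the resulting prompt. The model id, the example tool, and the prompt below are illustrative assumptions, and the checkpoint is gated, so access may be required.

```python
# Illustrative sketch: render the chat template with a tool definition to see
# how the tool schema is serialized into the prompt.
from transformers import AutoTokenizer

def get_current_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: The name of the city.
    """
    return "sunny"

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B-Instruct")
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_current_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # shows how the template formats the tool definition
```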

Happy to submit the PR with the workaround above or dig deeper.

My instinct would be to remove the lru cache, since that reduces overall complexity. For additional context, I'm still not sure _why_ this problem is affecting `gguf-my-repo`. I think...
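As a minimal, hypothetical illustration of the trade-off (not the actual `gguf-my-repo` code), an `lru_cache` keeps every cached return value alive until it is evicted, which adds hidden state and can hold on to memory:

```python
# Hypothetical example: cached results stay referenced by the cache itself.
from functools import lru_cache

@lru_cache(maxsize=8)
def load_big_blob(path: str) -> bytes:
    # Each result remains in memory until the cache evicts or clears it.
    with open(path, "rb") as f:
        return f.read()

# Removing the decorator drops that hidden state entirely; alternatively,
# load_big_blob.cache_clear() releases the cached entries explicitly.
```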

I did a quick test; this is what I saw:
* The memory profile is much better behaved on macOS. This is what it looks like running Stable Diffusion: The...
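For reference, a hedged sketch of how one might watch process memory while running a Stable Diffusion pipeline on macOS (MPS); the model id, prompt, and use of `psutil` are assumptions, not the exact setup from the comment:

```python
# Illustrative sketch: print resident memory before/after loading and running
# a Stable Diffusion pipeline on the MPS backend.
import psutil
import torch
from diffusers import DiffusionPipeline

def rss_mb() -> float:
    # Resident set size of the current process, in megabytes.
    return psutil.Process().memory_info().rss / 1e6

print(f"before load: {rss_mb():.0f} MB")
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("mps")
print(f"after load: {rss_mb():.0f} MB")

image = pipe("a photo of an astronaut riding a horse").images[0]
print(f"after generation: {rss_mb():.0f} MB")
```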

Feel free to merge then, @sergiopaniego!

API looks good. A couple of things I saw while testing some of the stuff I do during releases with the Python CLI:
- The repo had to exist for...
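The list above is cut off, so this is an assumption about where it was going; if the point is that the repo had to exist beforehand, `huggingface_hub` can create it up front so later uploads don't fail. The repo id and file names below are placeholders.

```python
# Illustrative sketch: ensure a repo exists before uploading release artifacts.
from huggingface_hub import HfApi

api = HfApi()
repo_id = "my-org/my-model"  # placeholder

# exist_ok=True makes this a no-op if the repo was already created.
api.create_repo(repo_id, repo_type="model", exist_ok=True)
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id=repo_id,
)
```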