llama-stack-apps
Getting the proper template for function calling
What is the definitive answer on how to properly format prompts for function calling with Llama 3.1?
I'm seeing a lot of conflicting information across Twitter, the official Meta docs, the Hugging Face docs, and the docs from various inference providers.
I'm running a vLLM server and want to build a lightweight prompt library from scratch so I can be sure there are no formatting errors. Thank you!
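For context, here's the minimal template I've pieced together so far. The special tokens (`<|begin_of_text|>`, `<|start_header_id|>`, `<|eot_id|>`) are from Meta's published Llama 3.1 prompt format; the helper function itself is just my own sketch, and the tool description in the system prompt is a placeholder. Is this the right shape for the basic case, and how should the tool-call and tool-response turns differ from it?

```python
# Sketch of assembling a raw Llama 3.1 chat prompt by hand for vLLM's
# completions endpoint. Special tokens follow Meta's documented format;
# the function name and message structure are my own.

def format_llama31_prompt(messages):
    """Render a list of {role, content} dicts into a raw prompt string."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        # Each turn: header with the role, blank line, content, end-of-turn.
        parts.append(f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n")
        parts.append(msg["content"].strip())
        parts.append("<|eot_id|>")
    # Open an assistant header so the model generates the next turn.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = format_llama31_prompt([
    {"role": "system",
     "content": "You have access to a get_weather(city) tool."},
    {"role": "user", "content": "What's the weather in Paris?"},
])
print(prompt)
```

My main points of confusion are whether the tool definitions belong in the system turn or the first user turn, and which role name the tool results should be sent back under.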