Rick Lamers
Would be nice! Could you PR?
This is amazing! This will definitely benefit the project, especially because new contributors will find it easier to reason about the implementation without having too much historical context (the inevitable...
For Groq I'd recommend using function calls, here's a modified example that seems to work well:

```python
from pydantic import BaseModel
from devtools import debug
from groq import Groq

client...
```
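(The snippet above is truncated. As a rough sketch of what a function-call request to Groq looks like: the SDK accepts the OpenAI-compatible `tools` parameter, where each tool is a JSON-schema function definition. The tool name, schema, and model ID below are illustrative assumptions, not part of the original comment.)

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format
# that Groq's chat.completions.create accepts.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Request payload as it would be passed to client.chat.completions.create(**payload).
payload = {
    "model": "llama-3.1-8b-instant",  # illustrative Groq model ID
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": [get_weather_tool],
    "tool_choice": "auto",
}
print(json.dumps(payload, indent=2))
```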
Awesome, thanks!
@ashwinb FYI in HF's chat template yet another prompt is used: https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/blob/8c22764a7e3675c50d4c7c9a4edb474456022b16/tokenizer_config.json#L2053 Is that wrong? Should it follow the one in this repo?
No worries, as long as we know the correct system prompt (this repo) we can all adjust to converge to the same correct version. Any updates on parallel calls?
I've put out a note for them https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct/discussions/90
Would the data.run output automatically be added to the messages, so that the LLM knows about the outcome of the tool call?