Guido Enrique
Hey guys, I am trying to run the Mistral 7B model using the [guide](https://docs.mistral.ai/self-deployment/vllm/) on that page. I am running: ```bash docker run --gpus all \ -e HF_TOKEN=$HF_TOKEN -p 8000:8000...
Model: `llama3` Langchaingo version: `v0.1.10` I was trying to use `llms.WithMaxLength` and `llms.WithMinLength` to set some output limits, but the model doesn't seem to respect these options. ```go callOptions =...
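One thing worth knowing: langchaingo's call options use Go's functional-options pattern, and an option only takes effect if the specific backend actually reads that field; otherwise it is set and silently ignored. A self-contained sketch of that pattern (all names here are illustrative, not langchaingo's real internals):

```go
package main

import "fmt"

// callOptions collects settings that a backend may or may not honor.
// These names are illustrative; langchaingo's real structs differ.
type callOptions struct {
	MaxLength int
	MinLength int
	MaxTokens int
}

// CallOption is the functional-option type: a function that mutates
// the options struct.
type CallOption func(*callOptions)

func WithMaxLength(n int) CallOption { return func(o *callOptions) { o.MaxLength = n } }
func WithMaxTokens(n int) CallOption { return func(o *callOptions) { o.MaxTokens = n } }

// generate applies every option, but -- like a backend that ignores
// fields it does not support -- it only ever consults MaxTokens.
func generate(opts ...CallOption) string {
	o := &callOptions{}
	for _, opt := range opts {
		opt(o)
	}
	if o.MaxTokens > 0 {
		return fmt.Sprintf("capped at %d tokens", o.MaxTokens)
	}
	return "no cap applied" // MaxLength was set but never consulted
}

func main() {
	fmt.Println(generate(WithMaxLength(100))) // silently ignored
	fmt.Println(generate(WithMaxTokens(100)))
}
```

So if `WithMaxLength`/`WithMinLength` aren't wired into the backend you're calling, trying `llms.WithMaxTokens` instead may be worth a shot, since that is the option most OpenAI-style backends map to a token limit.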
I have noticed that the use of "**functions**" is now deprecated and has been replaced by "**tools**". It would be really nice to update the examples dir in the repo, adding examples...