
Gorilla: Training and Evaluating LLMs for Function Calls (Tool Calls)

Results 169 gorilla issues

### Discussed in https://github.com/ShishirPatil/gorilla/discussions/401 Originally posted by **bp117** May 1, 2024 Hi, @ShishirPatil I ran raft.py on the PDF document using the gpt-3.5-turbo model and it generated the output...

Hi, I am trying to use Gorilla with the snippets below (adapted from the example in the Gorilla Colab, after migrating to OpenAI version 1.16.2): # Import Chat completion template and...

hosted-gorilla

The OG Gorilla paper describes multiple steps, listed below:
1. API Collection
2. Hand-generate Instruction-API pairs
3. Choose Instruction-API pairs
4. Generate Instruction-API pairs corpus
5. Convert Instruction-API pairs...
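The steps above can be sketched as a simple pipeline. This is only an illustrative outline under stated assumptions: every function and data shape here is a placeholder invented for this sketch, not code from the Gorilla repository, and the LLM-driven expansion in step 4 is stubbed out.

```python
# Illustrative sketch of the data-generation steps listed in the issue.
# All names below are hypothetical placeholders, not Gorilla's actual code.

def collect_apis(docs):                      # 1. API Collection
    return [d["api"] for d in docs]

def hand_write_pairs(apis):                  # 2. Hand-generate Instruction-API pairs
    return [("example instruction for " + a, a) for a in apis]

def choose_pairs(pairs):                     # 3. Choose Instruction-API pairs
    return pairs[:3]  # e.g. keep a few high-quality seed pairs

def generate_corpus(seed_pairs):             # 4. Generate Instruction-API pairs corpus
    # In the paper an LLM expands the seeds into a larger corpus;
    # here we simply pass the seeds through.
    return list(seed_pairs)

def convert_to_training_format(corpus):      # 5. Convert Instruction-API pairs
    return [{"instruction": instr, "api_call": api} for instr, api in corpus]

docs = [{"api": "torch.hub.load"}, {"api": "transformers.pipeline"}]
corpus = convert_to_training_format(
    generate_corpus(choose_pairs(hand_write_pairs(collect_apis(docs))))
)
print(len(corpus))  # 2
```

The point of the sketch is just the ordering: hand-written seed pairs are filtered, expanded into a corpus, and only then converted into the final training format.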

Fixes #393

**Changes**
- OpenAI and AzureOpenAI client env vars can now be overridden with `COMPLETION_`-prefixed env vars
- OpenAI and AzureOpenAI Langchain embedding client env vars can now...

The format of the fine-tuned model's answers makes them hard to read with an out-of-the-box LLM dialog system: Question: how many states in the...

enhancement

#381 added support for building Azure or OpenAI clients based on environment variables, and can therefore be used to configure any OpenAI v1-compatible model, except in the case of...

enhancement

Follow-up to #382 to add support for running the `eval.py` script using Azure OpenAI deployments.

enhancement

This PR adds support for a GitHub Codespaces Dev Container for the `raft` sub-project: - Dev Container config `RAFT` with direnv shell hooks - Adds `Open in GitHub Codespaces`...

**Describe the bug** How can such an API be deployed locally: http://luigi.millennium.berkeley.edu:8000/v1? vLLM can be deployed, but it does not support function calls.

hosted-openfunctions-v2

This PR provides a web application for fetching API calls from API documentation URLs into a specified format (Option 1), to easily generate training points for training a fine-tuned...