
Add support for Gemini API

Open AmgadHasan opened this issue 6 months ago • 15 comments

The new Gemini API introduced support for function calling. You define a set of functions with their expected arguments and pass them in the tools argument.

Can we add Gemini support to instructor so it can be used with Gemini Pro models and, later, Gemini Ultra?

AmgadHasan avatar Feb 17 '24 14:02 AmgadHasan

can you link docs and a more complete example?

jxnl avatar Feb 17 '24 21:02 jxnl

can you link docs and a more complete example?

@jxnl Sure. Just as a heads up, there are two versions of the Gemini API:

  • The Google Cloud version: this one is a service offered on GCP and requires creating a project and a service account. It's similar to Azure OpenAI.
  • The Google AI dev version: this is targeted at individual developers. You sign up, create an API key and use it when calling the API. It's similar to the normal OpenAI API. This is the one that I used (a minimal sketch is below).
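
For illustration, a minimal sketch of the key-only flavour (module and model names are from the google-generativeai SDK; treat them as assumptions if your version differs):

# The Google AI dev flavour only needs an API key, no GCP project or service account.
import google.generativeai as genai

genai.configure(api_key="YOUR_GOOGLE_AI_API_KEY")  # key created at ai.google.dev
model = genai.GenerativeModel("gemini-pro")
print(model.generate_content("Hello!").text)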

Here are the docs for function calling using the Google AI dev API:

https://ai.google.dev/tutorials/function_calling_python_quickstart

A quick example:

# Requires the google-generativeai package; assumes genai.configure(api_key=...)
# has been called, as in the sketch above.
import google.generativeai as genai
import google.ai.generativelanguage as glm

calculator = glm.Tool(
    function_declarations=[
      glm.FunctionDeclaration(
        name='add',
        description="Returns the sum of two numbers.",
        parameters=glm.Schema(
            type=glm.Type.OBJECT,
            properties={
                'a': glm.Schema(type=glm.Type.NUMBER),
                'b': glm.Schema(type=glm.Type.NUMBER)
            },
            required=['a', 'b']
        )
      ),
      glm.FunctionDeclaration(
        name='multiply',
        description="Returns the product of two numbers.",
        parameters=glm.Schema(
            type=glm.Type.OBJECT,
            properties={
                'a': glm.Schema(type=glm.Type.NUMBER),
                'b': glm.Schema(type=glm.Type.NUMBER)
            },
            required=['a', 'b']
        )
      )
    ])

# Tools are bound to the model object, not to the individual request.
model = genai.GenerativeModel('gemini-pro', tools=[calculator])
chat = model.start_chat()

a, b = 2312371, 234234
response = chat.send_message(
    f"What's {a} X {b} ?",
)
response.candidates
[index: 0
content {
  parts {
    function_call {
      name: "multiply"
      args {
        fields {
          key: "b"
          value {
            number_value: 234234
          }
        }
        fields {
          key: "a"
          value {
            number_value: 2312371
          }
        }
      }
    }
  }
  role: "model"
}
finish_reason: STOP
] 
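
To relate this back to instructor: here is a rough sketch (not instructor's actual implementation, and the to_dict call is an assumption based on the proto-plus message API) of turning that function_call into a validated Pydantic model, which is essentially what instructor does with OpenAI tool calls:

from pydantic import BaseModel

class Multiply(BaseModel):
    a: float
    b: float

# `response` is the chat reply shown above.
fc = response.candidates[0].content.parts[0].function_call
args = type(fc).to_dict(fc)["args"]  # proto message -> plain dict of arguments
result = Multiply(**args)            # Multiply(a=2312371.0, b=234234.0)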

AmgadHasan avatar Feb 18 '24 20:02 AmgadHasan

lol WHY WOULD THEY DO THIS

jxnl avatar Feb 18 '24 20:02 jxnl

lol WHY WOULD THEY DO THIS

I think they want indie developers to hack with the Google AI dev API while also offering the GCP version to their enterprise customers.

Just like OpenAI and Azure.

AmgadHasan avatar Feb 18 '24 20:02 AmgadHasan

@jxnl

So is there any hope for this?

AmgadHasan avatar Feb 19 '24 16:02 AmgadHasan

Def hope!

The nuance is that the API is slightly different: .start_chat() and .send_message()

jxnl avatar Feb 19 '24 16:02 jxnl

So I want to think about how to give a good matching experience.

jxnl avatar Feb 19 '24 16:02 jxnl

Yes, it's definitely annoying. You have to specify the tools when you create the model object, not when you actually send the prompt.

Like why even do this? LLMs are fucking stateless; no point in defining the functions when creating the model object. Unless they're using something completely different from OpenAI.

AmgadHasan avatar Feb 19 '24 16:02 AmgadHasan

Okay, so after taking a second deep dive into their not-so-great docs, I think there's a different way of doing this:

# Create a client (GOOGLE_API_KEY is the key from ai.google.dev)
from google.ai import generativelanguage as glm

client = glm.GenerativeServiceClient(
    client_options={'api_key': GOOGLE_API_KEY})

# Create function tools
my_tool = glm.Tool(
    function_declarations=[
      glm.FunctionDeclaration(
        name='add',
        description="Returns the sum of two numbers.",
        parameters=glm.Schema(
            type=glm.Type.OBJECT,
            properties={
                'a': glm.Schema(type=glm.Type.NUMBER),
                'b': glm.Schema(type=glm.Type.NUMBER)
            },
            required=['a', 'b']
        )
      ),
      glm.FunctionDeclaration(
        name='multiply',
        description="Returns the product of two numbers.",
        parameters=glm.Schema(
            type=glm.Type.OBJECT,
            properties={
                'a': glm.Schema(type=glm.Type.NUMBER),
                'b': glm.Schema(type=glm.Type.NUMBER)
            },
            required=['a', 'b']
        )
      )
    ])

# Build the request and call the service directly, passing tools per request
request = {
    "model": 'models/gemini-1.0-pro-001',
    "contents": [{"parts": [{"text": "Send an email to my friend Oliver wishing them a happy birthday"}], "role": "user"}],
    "tools": [my_tool],
}
response = client.generate_content(request=request)

This is somewhat similar to the conventional way of doing function calling using OpenAI's client. @jxnl
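
For comparison, the rough OpenAI equivalent of passing a tool per request (shape only; model name is illustrative and only the multiply declaration is shown):

from openai import OpenAI

openai_client = OpenAI()
openai_response = openai_client.chat.completions.create(
    model="gpt-4-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "What's 2312371 x 234234?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "multiply",
            "description": "Returns the product of two numbers.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        },
    }],
)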

AmgadHasan avatar Feb 19 '24 17:02 AmgadHasan

@AmgadHasan I maintain https://github.com/braintrustdata/braintrust-proxy which allows you to access gemini models through the OpenAI format. We haven't yet translated the gemini tool call syntax over, but based on your code snippets, my guess is that it is just sending json-schema and should be easy to do.

Want to collaborate on that? Then, you could just set OPENAI_BASE_URL to "https://braintrustproxy.com/v1" and it'll work out of the box with instructor
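
If that pans out, usage would presumably look something like this (a hedged sketch; the base_url, the model name exposed by the proxy, and the auth setup are assumptions based on the comment above, not something I've tested):

import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

# Point the OpenAI client at the braintrust proxy and patch it with instructor
client = instructor.patch(OpenAI(
    base_url="https://braintrustproxy.com/v1",
    api_key="<your key>",
))

user = client.chat.completions.create(
    model="gemini-pro",  # assumed model name routed by the proxy
    response_model=UserInfo,
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)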

ankrgyl avatar Feb 19 '24 18:02 ankrgyl

what is the current state on this? happy to contribute!

davhin avatar Mar 25 '24 16:03 davhin

No work done yet, would love a contrib.

jxnl avatar Mar 25 '24 23:03 jxnl

Hi!

What's the update on this? :)

oliverbj avatar Apr 10 '24 11:04 oliverbj

Hi!

What's the update on this? :)

I won't be working on this anytime soon, so it'll likely have to come from another contributor, unless you go through litellm or the braintrust proxy.

jxnl avatar Apr 10 '24 13:04 jxnl