Create OpenAI adapter
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
Summary of Changes
Hello @babasha, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly extends the Google ADK-Go framework by integrating an OpenAI-compatible adapter. This enhancement allows developers to build AI agents that can seamlessly utilize a wide range of Large Language Models, from local setups like LM Studio and Ollama to the official OpenAI API. The adapter supports advanced agent capabilities such as multi-turn conversations with tool use, real-time streaming, and robust error handling, making ADK-Go more versatile and accessible for modern AI applications.
Highlights
- OpenAI Adapter Integration: Introduces a new OpenAI-compatible adapter for the Google ADK-Go framework, enabling interaction with OpenAI API, LM Studio, Ollama, and other compatible endpoints.
- Multi-turn Tool Calling: Implements full conversation flow with tool execution and conversation history management, including session TTL cleanup.
- Streaming Responses: Adds support for Server-Sent Events (SSE) for real-time output from LLMs.
- Robust Error Handling: Incorporates exponential backoff, rate limiting, and retry logic for API calls, along with JSON argument sanitization and iteration guards to prevent infinite loops.
- Comprehensive Example: Provides a `weather_agent` example demonstrating the adapter's usage with a mock tool, configurable for various LLM providers.
- Repository Setup Guidance: Includes detailed instructions for GitHub repository configuration, such as description, topics, and release notes for an initial v0.1.0 release.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with :thumbsup: and :thumbsdown: on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Fixed / Addressed in latest commit