crewAI
Add support for Gemini 2.5 Pro Experimental model
Fixes #2511
This PR adds support for the new Gemini 2.5 Pro Experimental model (gemini-2.5-pro-exp-03-25) to CrewAI.
Changes:
- Added the model to the supported Gemini models list in constants.py
- Added context window size for the model in llm.py
- Added tests to verify the implementation works correctly
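For reviewers, a minimal sketch of the kind of change described above; the names `MODELS` and `LLM_CONTEXT_WINDOW_SIZES` and the context window value are assumptions for illustration, not copied from the actual diff:

```python
# Sketch only -- take the real names and values from the diff.
# src/crewai/cli/constants.py: register the model under the Gemini provider.
MODELS = {
    "gemini": [
        "gemini/gemini-1.5-flash",
        "gemini/gemini-1.5-pro",
        "gemini/gemini-2.5-pro-exp-03-25",  # new experimental model
    ],
    # ... other providers ...
}

# src/crewai/llm.py: map the model name to its context window size.
LLM_CONTEXT_WINDOW_SIZES = {
    "gemini-1.5-pro": 2097152,
    "gemini-2.5-pro-exp-03-25": 1048576,  # assumed 1M-token window; verify against the diff
    # ... other models ...
}
```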
Link to Devin run: https://app.devin.ai/sessions/b249d998df3c4c5980123559c5eaaa73
Requested by: Joe Moura ([email protected])
🤖 Devin AI Engineer
I'll be helping with this pull request! Here's what you should know:
✅ I will automatically:
- Address comments on this PR. Add "(aside)" to your comment to have me ignore it.
- Look at CI failures and help fix them
Note: I can only respond to comments from users who have write access to this repository.
⚙️ Control Options:
- [ ] Disable automatic comment and CI monitoring
Disclaimer: This review was made by a crew of AI Agents.
Code Review Comment for PR #2512 - Gemini 2.5 Support
Overview
This PR introduces support for the Gemini 2.5 model, updating the constants and context window sizes and adding relevant test cases. The changes appear well-structured and consistent with the existing codebase. Below are some specific code improvements and additional suggestions to enhance maintainability and clarity.
1. src/crewai/cli/constants.py
Code Improvements
- Comment for Clarity: It's beneficial to add a comment indicating that the new model is an experimental release. For example:
"gemini": [ # Experimental models "gemini/gemini-2.5-pro-exp-03-25", # Experimental release - March 2025 "gemini/gemini-1.5-flash", # ... ],
This provides immediate context for developers regarding the model’s current status.
2. src/crewai/llm.py
Code Improvements
- Use of Constants: To enhance code readability, consider defining constants for context window sizes like so:
```python
# Common context window sizes
PRO_CONTEXT_SIZE = 2097152
FLASH_CONTEXT_SIZE = 1048576

context_window_sizes = {
    "gemini-2.5-pro-exp-03-25": PRO_CONTEXT_SIZE,
    "gemini-2.0-flash": FLASH_CONTEXT_SIZE,
    "gemini-1.5-pro": PRO_CONTEXT_SIZE,
    # ...
}
```
This would streamline any future updates and make the code easier to maintain.
3. tests/test_gemini_2_5_support.py
Code Improvements
- Expanded Test Cases: While the existing tests are a good start, consider adding more comprehensive testing to cover various scenarios:
```python
import pytest

from crewai import LLM


def test_gemini_2_5_invalid_model_name():
    """Test handling of invalid model name variations."""
    with pytest.raises(ValueError):
        LLM(model="gemini-2.5-wrong")


def test_gemini_2_5_model_parameters():
    """Test model initialization with various parameters."""
    llm = LLM(model="gemini/gemini-2.5-pro-exp-03-25", temperature=0.7, max_tokens=1000)
    assert llm.model == "gemini/gemini-2.5-pro-exp-03-25"
    assert llm.temperature == 0.7
    assert llm.max_tokens == 1000
```
This ensures the robustness of the implementation and handles erroneous states effectively.
General Recommendations
- Documentation Updates: Document the new Gemini 2.5 model in the README; details about its capabilities and limitations help both users and developers.
- Version Compatibility: Add version-compatibility checks to prevent runtime issues with outdated dependencies; specifying minimum version requirements would help.
- Error Handling: Provide meaningful error messages for issues specific to the Gemini 2.5 model and validate parameters to give clearer feedback (a rough sketch follows this list).
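To illustrate the error-handling point, here is a rough sketch; `validate_gemini_model` and `SUPPORTED_GEMINI_MODELS` are hypothetical names used for illustration, not part of the CrewAI API:

```python
SUPPORTED_GEMINI_MODELS = {
    "gemini/gemini-1.5-flash",
    "gemini/gemini-1.5-pro",
    "gemini/gemini-2.5-pro-exp-03-25",
}


def validate_gemini_model(model: str) -> str:
    """Raise a descriptive error for unknown Gemini model names (hypothetical helper)."""
    if model.startswith("gemini/") and model not in SUPPORTED_GEMINI_MODELS:
        supported = ", ".join(sorted(SUPPORTED_GEMINI_MODELS))
        raise ValueError(
            f"Unknown Gemini model '{model}'. Supported Gemini models: {supported}"
        )
    return model
```

A check of this kind gives users an actionable message up front instead of a failure deep inside the provider call.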
Security Considerations
No security concerns were identified in these changes, and the model validation appears to be implemented correctly.
Performance Impact
The context window size set for the model is consistent with the existing entries, and no performance issues are apparent from these changes.
Addressing these suggestions will keep the code maintainable, understandable, and user-friendly. Thank you for the effort in implementing support for the Gemini 2.5 model; I look forward to seeing these adjustments strengthen the contribution further.
Closing due to inactivity for more than 7 days.
Hey @lucasgomide , just wondering if you'll continue working on this PR? I can take it over if needed
Hey, good call. It was done last week in another PR.
You can install the latest crewai version and test it yourself.