camel
feat: integrate the llama3 (8B, 70B) served by Groq
Description
This PR integrates the Llama 3 models (8B and 70B) served by Groq into CAMEL: it adds the corresponding model types, a test (`test/models/test_groq_llama3_model.py`), and an update in the example folder. A minimal usage sketch is given below.
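For context, Groq serves these models through an OpenAI-compatible endpoint, so at its core the integration points an OpenAI-style client at Groq's base URL using the `GROQ_API_KEY` discussed in the comments below. A minimal sketch outside camel, assuming Groq's publicly documented endpoint and model IDs (`llama3-8b-8192`, `llama3-70b-8192`):

```python
import os

from openai import OpenAI

# Groq exposes an OpenAI-compatible API, so the standard client works once it
# is pointed at Groq's base URL and given the GROQ_API_KEY secret.
client = OpenAI(
    api_key=os.environ["GROQ_API_KEY"],
    base_url="https://api.groq.com/openai/v1",
)

response = client.chat.completions.create(
    model="llama3-8b-8192",  # or "llama3-70b-8192"
    messages=[{"role": "user", "content": "Say hello from Llama 3 on Groq."}],
)
print(response.choices[0].message.content)
```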
Motivation and Context
Types of changes
What types of changes does your code introduce? Put an `x` in all the boxes that apply:
- [x] New feature (non-breaking change which adds core functionality)
- [x] Example (update in the example folder)
[!IMPORTANT]
Review skipped

Auto reviews are disabled on this repository. Please check the settings in the CodeRabbit UI or the `.coderabbit.yaml` file in this repository. To trigger a single review, invoke the `@coderabbitai review` command. You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.
@Wendong-Fan Hi, could you please help me add `GROQ_API_KEY` to the GitHub secrets (I will DM you the secret key value)? Thanks a lot in advance.
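Until that secret is available in CI, API-backed tests typically guard themselves with a skip. A small sketch of that pattern (illustrative only, not necessarily how `test_groq_llama3_model.py` is written):

```python
import os

import pytest

# Hypothetical guard: skip Groq-backed tests when the CI secret is absent,
# so the suite still passes on forks without access to GROQ_API_KEY.
requires_groq = pytest.mark.skipif(
    "GROQ_API_KEY" not in os.environ,
    reason="GROQ_API_KEY is not configured in the environment",
)


@requires_groq
def test_groq_llama3_smoke():
    # Placeholder assertion; a real test would call the Groq-served model.
    assert os.environ["GROQ_API_KEY"]
```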
@camel-ai/camel-maintainers @Wendong-Fan Hi guys, this PR is finally fixed. Could you please review it? Thanks!
@Wendong-Fan I have seen an error in the pytest run:
FAILED test/models/test_groq_llama3_model.py::test_groq_llama3_model[ModelType.GROQ_LLAMA_3_8_B] - ValueError: Invalid `model_path` (meta-llama/Meta-Llama-3-8B-Instruct) is provided. Tokenizer loading failed.
FAILED test/models/test_groq_llama3_model.py::test_groq_llama3_model[ModelType.GROQ_LLAMA_3_70_B] - ValueError: Invalid `model_path` (meta-llama/Meta-Llama-3-70B-Instruct) is provided. Tokenizer loading failed.
This means the model cannot download the tokenizer from Hugging Face; we need to add `HUGGING_FACE_HUB_TOKEN`. Could you please apply for one and add it to the environment? Thanks!
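For reference, the `meta-llama/Meta-Llama-3-*-Instruct` repositories are gated on the Hugging Face Hub, so an unauthenticated tokenizer download fails. A minimal sketch of the failing step using the standard `transformers` API (outside camel; the Hub client also typically picks the variable up from the environment on its own):

```python
import os

from transformers import AutoTokenizer

# The Meta-Llama-3 repos are gated, so an anonymous download raises a
# tokenizer-loading error like the one above. Passing the token explicitly
# (or exporting HUGGING_FACE_HUB_TOKEN in CI) authenticates the request.
hf_token = os.environ.get("HUGGING_FACE_HUB_TOKEN")

tokenizer = AutoTokenizer.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    token=hf_token,
)
print(tokenizer.encode("Hello from Llama 3!"))
```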
Token counter access issue fixed.
Thank you, I will fix the conflicts.
Hey @Appointat, I made some updates in https://github.com/camel-ai/camel/pull/531/commits/407b44e85fe1afb40ddcfb3575b19f198f6ea2f9. Please review the change: I used `OpenAITokenCounter` as the default token counter since it is easier for users to set up, even though it is not accurate for open-source models. We now also allow users to switch the token counter when initializing the model; a sketch of that pattern follows below. Let me know WDYT~
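A self-contained sketch of the pattern described above (an approximate counter by default, with an override at initialization time); the class names here are illustrative stand-ins, not camel's actual implementation:

```python
from dataclasses import dataclass, field
from typing import List, Protocol


class TokenCounter(Protocol):
    """Minimal interface a token counter needs to provide."""

    def count_tokens_from_messages(self, messages: List[dict]) -> int: ...


@dataclass
class ApproxOpenAITokenCounter:
    """Stand-in for an OpenAI-style counter: easy to set up, but only an
    approximation for open-source models such as Llama 3."""

    chars_per_token: float = 4.0

    def count_tokens_from_messages(self, messages: List[dict]) -> int:
        text = "".join(m.get("content", "") for m in messages)
        return max(1, int(len(text) / self.chars_per_token))


@dataclass
class GroqLlama3ModelSketch:
    """Illustrative model wrapper: falls back to the approximate counter by
    default, but accepts a different counter when the model is initialized."""

    model_name: str = "llama3-8b-8192"
    token_counter: TokenCounter = field(
        default_factory=ApproxOpenAITokenCounter
    )

    def count_tokens(self, messages: List[dict]) -> int:
        return self.token_counter.count_tokens_from_messages(messages)


messages = [{"role": "user", "content": "Hello from Groq-served Llama 3!"}]
print(GroqLlama3ModelSketch().count_tokens(messages))
```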
@Wendong-Fan I have checked the code, but I cannot review the code. I think it is ok to be merged. Thank you.