How to integrate with my private LLM model
I'm new to this product. I scanned the documentation and found that we can configure ~/.metagpt/config2.yaml to work with a list of popular LLM models, such as GPT, Claude, and so on.
My team has trained a private LLM and deployed it on a local machine. How do I configure the project to talk with my model? Is there any documentation about that?
You can refer to https://docs.deepwisdom.ai/main/en/guide/get_started/configuration/llm_api_configuration.html#ollama-api
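For a model served locally through Ollama, the linked page shows a config2.yaml along these lines (the model name and port below are examples; adjust them to match your deployment):

```yaml
llm:
  api_type: 'ollama'
  base_url: 'http://127.0.0.1:11434/api'
  model: 'llama2'
```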
same question
The issue has been fixed. I created a new wrapper for my LLM.
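For anyone landing here later: if your local server exposes an OpenAI-compatible API (as vLLM and similar servers do), a wrapper can be as simple as pointing an OpenAI client at the local endpoint. This is a minimal illustrative sketch, not the actual wrapper from this thread; the class name, `base_url`, and model name are placeholders:

```python
# Hypothetical sketch of a thin wrapper around a local,
# OpenAI-compatible inference server.
from openai import OpenAI


class PrivateLLM:
    """Wraps a locally deployed model behind an OpenAI-compatible /v1 API."""

    def __init__(self, base_url: str = "http://localhost:8000/v1",
                 model: str = "my-private-model"):
        # The client requires an api_key, but most local servers ignore it.
        self.client = OpenAI(base_url=base_url, api_key="not-needed")
        self.model = model

    def ask(self, prompt: str) -> str:
        # Send a single-turn chat completion request to the local server.
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


if __name__ == "__main__":
    print(PrivateLLM().ask("Hello from my private model"))
```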
By the way, I'm stuck on another issue: https://github.com/geekan/MetaGPT/issues/1390 (a role cannot receive messages from another role). Can anyone help?
Closing this issue due to prolonged inactivity. If there's any further information or assistance needed, feel free to reopen the issue or create a new one. Thanks!