
How do I specify a custom config file to load? How do I configure different LLMs for different actions?

Open moseshu opened this issue 9 months ago • 5 comments

I want to load my own config file and use a different model configuration for each agent. Is there any documentation for this? The `metagpt --init-config` approach is very inflexible.

moseshu avatar Mar 04 '25 12:03 moseshu

class Action(SerializationMixin, ContextMixin, BaseModel):
    model_config = ConfigDict(arbitrary_types_allowed=True)

    name: str = ""
    i_context: Union[
        dict, CodingContext, CodeSummarizeContext, TestingContext, RunCodeContext, CodePlanAndChangeContext, str, None
    ] = ""
    prefix: str = ""  # prepended as the system_message in aask* calls
    desc: str = ""  # for skill manager
    node: ActionNode = Field(default=None, exclude=True)
    # The model name or API type of LLM of the `models` in the `config2.yaml`;
    #   Using `None` to use the `llm` configuration in the `config2.yaml`.
    llm_name_or_type: Optional[str] = None

    @model_validator(mode="after")
    @classmethod
    def _update_private_llm(cls, data: Any) -> Any:
        config = ModelsConfig.default().get(data.llm_name_or_type)
        if config:
            llm = create_llm_instance(config)
            llm.cost_manager = data.llm.cost_manager
            data.llm = llm
        return data
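The lookup performed by `_update_private_llm` can be sketched in plain Python. This is a minimal stand-in, not MetaGPT's actual classes: `models_registry` and `resolve_llm` are hypothetical names standing in for `ModelsConfig.default()` and the validator above.

```python
# models_registry plays the role of the `models` section in config2.yaml;
# the "llm" entry plays the role of the top-level default `llm` section.
models_registry = {
    "llm": {"api_type": "ollama", "model": "llama3.2"},
    "gpt35": {"api_type": "openai", "model": "gpt-3.5-turbo"},
    "llama3.3": {"api_type": "ollama", "model": "llama3.3"},
}

def resolve_llm(llm_name_or_type=None):
    """Return the model config for an alias; fall back to the default `llm`."""
    config = models_registry.get(llm_name_or_type)
    if config:
        return config
    # None (or an unknown name) falls back to the default `llm` configuration
    return models_registry["llm"]

print(resolve_llm("gpt35")["model"])  # gpt-3.5-turbo
print(resolve_llm(None)["model"])     # llama3.2
```

With this shape, each `Action` that names a model gets its own LLM instance, while actions that leave `llm_name_or_type` as `None` share the default.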

The `Action` class provides the `llm_name_or_type` attribute, which loads the corresponding model configuration from the `models` section of `config2.yaml`. For example, given the following `models` configuration in `config2.yaml`:

llm:
  api_type: "ollama"  # or azure / ollama / groq etc.
  model: "llama3.2"  # or gpt-3.5-turbo
  base_url: "http://localhost:11434/api"  # or forward url / other llm url
  api_key: "YOUR_API_KEY"

models:
  "gpt35": # alias or model name
    api_type: "openai"  # or azure / ollama / groq etc.
    base_url: "YOUR_BASE_URL"
    api_key: "YOUR_API_KEY"
    proxy: "YOUR_PROXY"  # for LLM API requests
    model: "gpt-3.5-turbo"
    # timeout: 600 # Optional. If set to 0, default value is 300.
    # Details: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
    pricing_plan: "" # Optional. Use for Azure LLM when its model name is not the same as OpenAI's
  "llama3.3": # alias or model name
    api_type: "ollama"  # or azure / ollama / groq etc.
    base_url: "YOUR_BASE_URL"
    api_key: "YOUR_API_KEY"
    proxy: "YOUR_PROXY"  # for LLM API requests
    # timeout: 600 # Optional. If set to 0, default value is 300.
    # Details: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
    pricing_plan: "" # Optional. Use for Azure LLM when its model name is not the same as OpenAI's

Specify the model name when creating an `Action` object:

action1 = Action(llm_name_or_type="gpt35")
action2 = Action(llm_name_or_type="llama3.3")

See `config2.example.yaml`.

iorisa avatar Mar 04 '25 13:03 iorisa


What if I don't want to use a config file at all? I'd like to specify the configuration directly in code, e.g. `action1 = Action(llm_config={"api_key": "", "base_url": ""})`. Also, can `config2.yaml` be loaded from a custom directory?

moseshu avatar Mar 04 '25 13:03 moseshu

The approach below also runs for me. Are there any problems with doing it this way? And why does importing `Action` or `Role` read the `~/.metagpt/config2.yaml` file?

import re
from metagpt.actions import Action
from metagpt.roles import Role
from metagpt.configs.llm_config import LLMConfig
from metagpt.config2 import Config
from metagpt.schema import Message

class CustomAction(Action):
    PROMPT_TEMPLATE: str = """
    Write a Python function that can {instruction} and provide two runnable test cases.
    Return ```python your_code_here ``` with NO other texts,
    your code:
    """
        
    async def run(self, instruction:str):
        # implement your action logic here
        prompt = self.PROMPT_TEMPLATE.format(instruction=instruction)

        rsp = await self._aask(prompt)
        code_text = self.parse_code(rsp)
        return code_text

    @staticmethod
    def parse_code(rsp):
        pattern = r"```python(.*)```"
        match = re.search(pattern, rsp, re.DOTALL)
        code_text = match.group(1) if match else rsp
        return code_text

# 1. Define the LLM config as a dict (api_key / base_url are your own values)
llm_config = {
    "api_type": "openai",
    "model": "gpt-4",
    "api_key": api_key,
    "base_url": base_url,
}

# 2. Create a Config object via Config.from_llm_config
config = Config.from_llm_config(llm_config)

class CustomRole(Role):
    def __init__(self, **kwargs):
        # 3. Initialize the Role with the config passed in via **kwargs
        super().__init__(**kwargs)
        self.set_actions([CustomAction])

    async def _act(self):
        # implement your role logic here
        todo = self.rc.todo  # todo will be a CustomAction instance

        msg = self.get_memories(k=1)[0]  # find the most recent messages
        code_text = await todo.run(msg.content)
        msg = Message(content=code_text, role=self.profile, cause_by=type(todo))

        return msg
    

import asyncio

from metagpt.context import Context

async def main():
    msg = "write a function that calculates the sum of a list"
    context = Context()
    role = CustomRole(context=context,config=config)

    result = await role.run(msg)
    print("result====",result)

asyncio.run(main())

moseshu avatar Mar 04 '25 16:03 moseshu

This issue has no activity in the past 30 days. Please comment on the issue if you have anything to add.

github-actions[bot] avatar Apr 04 '25 00:04 github-actions[bot]

Can the model configuration include parameters like temperature?

Choogle-Ma avatar Apr 15 '25 10:04 Choogle-Ma

> Can the model configuration include parameters like temperature?

yes, refs to https://github.com/FoundationAgents/MetaGPT/blob/main/metagpt/configs/llm_config.py
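For reference, `LLMConfig` in the linked `llm_config.py` exposes sampling parameters such as `temperature`, so they can be set alongside the other fields in `config2.yaml`. A sketch, with field names to be confirmed against the linked file:

```yaml
llm:
  api_type: "openai"
  model: "gpt-4"
  api_key: "YOUR_API_KEY"
  temperature: 0.7  # sampling temperature; confirm the exact field name in llm_config.py
```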

better629 avatar May 16 '25 13:05 better629