When open_llm(vllm) is used in config2.yaml, the api_key seems to be ignored.
Bug description
vllm deployment now supports setting an api-key:
python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-hf --dtype float32 --api-key token-abc123
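For context, once the server is started with --api-key, any OpenAI-compatible client has to send that same token or its requests are rejected. A minimal sketch with the official openai Python client against this endpoint:

from openai import OpenAI

# The key and endpoint mirror the vllm command above.
client = OpenAI(api_key="token-abc123", base_url="http://127.0.0.1:8000/v1")
resp = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-hf",
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)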
However, the following configuration file will result in an error:
llm:
  api_type: open_llm
  base_url: 'http://127.0.0.1:8000/v1'
  model: 'meta-llama/Llama-2-7b-hf'
  api_key: token-abc123
Setting api_type to openai will allow it to function correctly, but it will show a warning: Warning: model not found. Using cl100k_base encoding.
I suspect that the api_key is being ignored.
Bug solved method
Setting api_type to openai works, but shows warnings.
Environment information
- LLM type and model name:
- System version:
- Python version: 3.9.18
- packages version: metagpt 0.7.6
- installation method: pip install metagpt
Screenshots or logs
The vllm token check needs to be verified.
The warning won't affect your usage, and you can run python3 examples/llm_hello_world.py to check the API.
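For reference, a minimal check along the lines of that example (a sketch assuming the LLM helper exported by metagpt.llm; adjust to your installed version):

import asyncio

from metagpt.llm import LLM

async def main():
    llm = LLM()  # picks up the llm section of config2.yaml
    print(await llm.aask("hello world"))

asyncio.run(main())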
- About "api_key is being ignored": MetaGPT only checks that the api_key is not empty and not equal to "YOUR_API_KEY". Whether the api_key is actually valid is determined by the LLM service when the LLM is invoked. (Illustrative sketches for each point follow the code at the end of this reply.)
- About "Warning: model not found. Using cl100k_base encoding.": this message is printed during cost calculation. It appears because there is no corresponding token counter for the model. You can avoid this log message by configuring llm.pricing_plan:
llm:
  api_type: "openai"  # or azure / ollama / open_llm etc. Check LLMType for more options
  base_url: "YOUR_BASE_URL"
  api_key: "YOUR_API_KEY"
  model: "gpt-4-turbo-preview"  # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
  proxy: "YOUR_PROXY"  # for LLM API requests
  # timeout: 600  # Optional. If set to 0, default value is 300.
  pricing_plan: ""  # Optional. If invalid, it will be automatically filled in with the value of the `model`.
  # Azure-exclusive pricing plan mappings:
  # - gpt-3.5-turbo 4k: "gpt-3.5-turbo-1106"
  # - gpt-4-turbo: "gpt-4-turbo-preview"
  # - gpt-4-turbo-vision: "gpt-4-vision-preview"
  # - gpt-4 8k: "gpt-4"
  # For more, see: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
- About "openai works but open_llm failed": they are both OpenAILLM objects, so they should behave the same.
@register_provider([LLMType.OPENAI, LLMType.FIREWORKS, LLMType.OPEN_LLM, LLMType.MOONSHOT, LLMType.MISTRAL, LLMType.YI])
class OpenAILLM(BaseLLM):
    """Check https://platform.openai.com/examples for examples"""

    def __init__(self, config: LLMConfig):
        self.config = config
        self._init_client()
        self.auto_max_tokens = False
        self.cost_manager: Optional[CostManager] = None
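To make the first point concrete, the key validation amounts to a trivial check; a minimal sketch (illustrative, not the verbatim source):

def is_configured(api_key: str) -> bool:
    # MetaGPT only rejects an empty key or the placeholder value;
    # real validation happens server-side on the first request.
    return bool(api_key) and api_key != "YOUR_API_KEY"

The fallback behind the warning in the second point is the usual tiktoken pattern (again a sketch, not MetaGPT's exact code):

import tiktoken

def encoding_for(model: str):
    try:
        # Known OpenAI models map to a specific tokenizer.
        return tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown models such as meta-llama/Llama-2-7b-hf fall back to
        # cl100k_base, which is when the warning gets printed.
        print("Warning: model not found. Using cl100k_base encoding.")
        return tiktoken.get_encoding("cl100k_base")

print(len(encoding_for("meta-llama/Llama-2-7b-hf").encode("hello")))

And for the third point, _init_client builds the underlying client from a kwargs dict, so whatever _make_client_kwargs returns is exactly what reaches the server. A simplified sketch of that flow (method names follow the snippets quoted in this thread, not the verbatim source):

from openai import AsyncOpenAI

class OpenAILLMFlow:
    def __init__(self, config):
        self.config = config
        self._init_client()

    def _make_client_kwargs(self) -> dict:
        # If this returns a hardcoded api_key="sk-xxx" (as in the
        # v0.7-release open_llm provider discussed below), the
        # configured key never reaches the server.
        return dict(api_key=self.config.api_key, base_url=self.config.base_url)

    def _init_client(self):
        self.aclient = AsyncOpenAI(**self._make_client_kwargs())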
- About "
api_keyis being ignored": MetaGPT only checks if theapi_keyis not empty and not equal to"YOUR_API_KEY". Whether theapi_keyis actually valid is determined by the LLM service when invoking the LLM.- About "
Warning: model not found. Using cl100k_base encoding.":Warning: model not found. Using cl100k_base encoding.is printed during calculate cost. It appears because there is not corresponding token counter for the model. You can avoid this log message by configuringllm.pricing_plan:llm: api_type: "openai" # or azure / ollama / open_llm etc. Check LLMType for more options base_url: "YOUR_BASE_URL" api_key: "YOUR_API_KEY" model: "gpt-4-turbo-preview" # or gpt-3.5-turbo-1106 / gpt-4-1106-preview proxy: "YOUR_PROXY" # for LLM API requests # timeout: 600 # Optional. If set to 0, default value is 300. pricing_plan: "" # Optional. If invalid, it will be automatically filled in with the value of the `model`. # Azure-exclusive pricing plan mappings: # - gpt-3.5-turbo 4k: "gpt-3.5-turbo-1106" # - gpt-4-turbo: "gpt-4-turbo-preview" # - gpt-4-turbo-vision: "gpt-4-vision-preview" # - gpt-4 8k: "gpt-4" # See for more: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
- About
openaiworks butopen_llmfailed: They are bothOpenAILLMobjects, so they should be the same.@register_provider([LLMType.OPENAI, LLMType.FIREWORKS, LLMType.OPEN_LLM, LLMType.MOONSHOT, LLMType.MISTRAL, LLMType.YI]) class OpenAILLM(BaseLLM): """Check https://platform.openai.com/examples for examples""" def __init__(self, config: LLMConfig): self.config = config self._init_client() self.auto_max_tokens = False self.cost_manager: Optional[CostManager] = None
Oh, I solved it. In v0.7-release, metagpt/provider/open_llm_api.py line 22 reads:

kwargs = dict(api_key="sk-xxx", base_url=self.config.base_url)

Edit it to:

kwargs = dict(api_key=self.config.api_key, base_url=self.config.base_url)

and then it works.
LGTM, you can submit a PR directly. The code on main is similar.
I cloned the repo and ran pip install -e . on main; there is no problem there. But the package from pip install metagpt matches v0.7-release, not main.
Really? The code on the main branch looks the same; it doesn't use the api_key:
def _make_client_kwargs(self) -> dict:
    kwargs = dict(api_key="sk-xxx", base_url=self.config.open_llm_api_base)
    return kwargs
The file open_llm_api.py seems to have been removed on main and merged into openai_api.py, as iorisa quoted above.
@Alkacid LGTM. I discovered that I did have an older version of the file open, and you're right. Thank you very much for your kind reply.