
Using Gemini as the LLM, running the Data Interpreter example data_visualization.py raises an error

Open · snowmini opened this issue 3 months ago · 3 comments

    Traceback (most recent call last):
      File "g:\metagpt\metagpt-main\metagpt\utils\common.py", line 638, in wrapper
        return await func(self, *args, **kwargs)
      File "g:\metagpt\metagpt-main\metagpt\roles\role.py", line 556, in run
        rsp = await self.react()
      File "g:\metagpt\metagpt-main\metagpt\roles\role.py", line 527, in react
        rsp = await self._plan_and_act()
      File "g:\metagpt\metagpt-main\metagpt\roles\di\data_interpreter.py", line 89, in _plan_and_act
        rsp = await super()._plan_and_act()
      File "g:\metagpt\metagpt-main\metagpt\roles\role.py", line 495, in _plan_and_act
        task_result = await self._act_on_task(task)
      File "g:\metagpt\metagpt-main\metagpt\roles\di\data_interpreter.py", line 95, in _act_on_task
        code, result, is_success = await self._write_and_exec_code()
      File "g:\metagpt\metagpt-main\metagpt\roles\di\data_interpreter.py", line 121, in _write_and_exec_code
        code, cause_by = await self._write_code(counter, plan_status, tool_info)
      File "g:\metagpt\metagpt-main\metagpt\roles\di\data_interpreter.py", line 154, in _write_code
        code = await todo.run(
      File "g:\metagpt\metagpt-main\metagpt\actions\di\write_analysis_code.py", line 59, in run
        rsp = await self.llm.aask(context, system_msgs=[INTERPRETER_SYSTEM_MSG], **kwargs)
      File "g:\metagpt\metagpt-main\metagpt\provider\base_llm.py", line 127, in aask
        rsp = await self.acompletion_text(message, stream=stream, timeout=timeout)
      File "g:\MetaGPT\pyenv\Lib\site-packages\tenacity\_asyncio.py", line 88, in async_wrapped
        return await fn(*args, **kwargs)
      File "g:\MetaGPT\pyenv\Lib\site-packages\tenacity\_asyncio.py", line 47, in __call__
        do = self.iter(retry_state=retry_state)
      File "g:\MetaGPT\pyenv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
        return fut.result()
      File "G:\python\Lib\concurrent\futures\_base.py", line 449, in result
        return self.__get_result()
      File "G:\python\Lib\concurrent\futures\_base.py", line 401, in __get_result
        raise self._exception
      File "g:\MetaGPT\pyenv\Lib\site-packages\tenacity\_asyncio.py", line 50, in __call__
        result = await fn(*args, **kwargs)
      File "g:\metagpt\metagpt-main\metagpt\provider\base_llm.py", line 175, in acompletion_text
        return await self._achat_completion_stream(messages, timeout=timeout)
      File "g:\metagpt\metagpt-main\metagpt\provider\google_gemini_api.py", line 101, in _achat_completion_stream
        resp: AsyncGenerateContentResponse = await self.llm.generate_content_async(
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\generative_models.py", line 261, in generate_content_async
        request = self._prepare_request(
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\generative_models.py", line 204, in _prepare_request
        contents = content_types.to_contents(contents)
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\types\content_types.py", line 232, in to_contents
        contents = [strict_to_content(c) for c in contents]
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\types\content_types.py", line 232, in <listcomp>
        contents = [strict_to_content(c) for c in contents]
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\types\content_types.py", line 210, in strict_to_content
        content = _convert_dict(content)
      File "g:\MetaGPT\pyenv\Lib\site-packages\google\generativeai\types\content_types.py", line 107, in _convert_dict
        raise KeyError(
    KeyError: "Could not recognize the intended type of the dict. A Content should have a 'parts' key. A Part should have a 'inline_data' or a 'text' key. A Blob should have 'mime_type' and 'data' keys. Got keys: ['role', 'content']"
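
The KeyError is raised inside google-generativeai's dict conversion: it accepts Gemini-style {"role", "parts"} message dicts but rejects OpenAI-style {"role", "content"} ones. A minimal sketch of the mismatch (model name and prompt are placeholders; the failing call errors while the request is being built, before any network I/O):

    import google.generativeai as genai

    genai.configure(api_key="...")  # placeholder key
    model = genai.GenerativeModel("gemini-pro")

    try:
        # OpenAI-style dict: rejected during request preparation.
        model.generate_content([{"role": "user", "content": "hello"}])
    except KeyError as e:
        print(e)  # "Could not recognize the intended type of the dict..."

    # Gemini's expected shape carries the text in a 'parts' list:
    model.generate_content([{"role": "user", "parts": ["hello"]}])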

snowmini · Mar 15 '24

I dug into this a little. I think the root cause is that process_message() in common.py isn't aware of Gemini's conversation format. The Gemini provider already overrides _user_msg to emit it:

    def _user_msg(self, msg: str, images: Optional[Union[str, list[str]]] = None) -> dict[str, str]:
        # Keep BaseLLM's default methods unchanged; this override alone
        # emits Gemini's conversation format, which must be followed.
        return {"role": "user", "parts": [msg]}
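
For contrast, BaseLLM's default _user_msg returns the OpenAI-style shape (paraphrased here; the actual method also handles images):

    def _user_msg(self, msg: str, images: Optional[Union[str, list[str]]] = None) -> dict[str, str]:
        # BaseLLM default, paraphrased: the text goes under 'content'.
        return {"role": "user", "content": msg}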

I'm not entirely sure what the right fix is (maybe move process_message into BaseLLM as a method and have the Gemini provider override it? Gonna try that :D)
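
A rough sketch of that idea, with a hypothetical, simplified process_message signature (the real one in common.py also accepts Message objects and other input shapes):

    # Hypothetical sketch, not the merged patch: hoist message formatting
    # into BaseLLM so each provider can emit its own dict shape.
    class BaseLLM:
        def process_message(self, messages: list[dict]) -> list[dict]:
            # Default: pass OpenAI-style {"role", "content"} dicts through.
            return messages

    class GeminiLLM(BaseLLM):
        def process_message(self, messages: list[dict]) -> list[dict]:
            # Rewrite each message into Gemini's {"role", "parts"} shape.
            return [{"role": m["role"], "parts": [m["content"]]} for m in messages]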

geohotstan · Mar 16 '24

Excellent! According to aask, process_message should return a str:

    async def aask(
        self,
        msg: Union[str, list[dict[str, str]]],
        system_msgs: Optional[list[str]] = None,
        format_msgs: Optional[list[dict[str, str]]] = None,
        images: Optional[Union[str, list[str]]] = None,
        timeout=3,
        stream=True,
    ) -> str:
        if system_msgs:
            message = self._system_msgs(system_msgs)
        else:
            message = [self._default_system_msg()]
        if not self.use_system_prompt:
            message = []
        if format_msgs:
            message.extend(format_msgs)
        if isinstance(msg, str):
            message.append(self._user_msg(msg, images=images))
        else:
            message.extend(msg)
        logger.debug(message)
        rsp = await self.acompletion_text(message, stream=stream, timeout=timeout)
        return rsp
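
Note where this goes wrong: when the caller passes a pre-built message list, as write_analysis_code.py does with the context built by process_message(), message.extend(msg) forwards the OpenAI-style dicts verbatim, so the Gemini _user_msg override never runs. An illustration with assumed values:

    # Illustration only; values are assumed and this runs in an async context.
    context = [{"role": "user", "content": "Plot the data"}]  # from process_message()
    rsp = await llm.aask(context)
    # message.extend(msg) leaves the dict as {"role": ..., "content": ...},
    # so google-generativeai raises the KeyError above. Passing a plain str
    # instead routes through the provider's _user_msg and yields
    # {"role": "user", "parts": [msg]}.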

iorisa · Mar 19 '24

Merged. Could you please check whether it's fixed?

geekan · Mar 21 '24