MetaGPT
Using local models instead of OpenAI?
I am using GPT4All to pull various LLM images, and it can start a local server for API access (https://docs.gpt4all.io/gpt4all_chat.html). Would it be possible to customize MetaGPT to use that API instead, to go truly open source?
Yes, MetaGPT can do that, but you will need to modify some code to suit your LLM API.
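Since the current provider goes through the openai Python package (0.27.x, as the tracebacks below show), a quick sanity check independent of MetaGPT is to point that client at your local endpoint directly. A rough sketch; the base URL and model name are placeholders for whatever your local server actually exposes:

import openai

openai.api_base = "http://127.0.0.1:8000/v1"  # placeholder: your local server's base URL
openai.api_key = "dummyval"                   # most local servers ignore the key

# Same call shape MetaGPT's provider uses (ChatCompletion), but non-streaming
resp = openai.ChatCompletion.create(
    model="local-model",  # placeholder: many local servers ignore or override this
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp["choices"][0]["message"]["content"])

If that call works, the remaining question is mostly what MetaGPT expects from the model's output format.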
I would be interested to know which code. I started by pointing the config at my mini_orca configuration, ran it, and hit the issue below, and I am wondering how much is OpenAI-specific and where to change it. Maybe I am missing some config, but if code needs to be written, a summary of the files that would have to be replaced or rewritten, and where they live, would be great. Here is where I ran into an authentication error after initially bypassing it by supplying my own model:
(MetaGPT) D:\Github\MetaGPT>python startup.py "Write a cli snake game based on pygame"
Found model file at d:\\models\\GPT4ALL\orca-mini-3b.ggmlv3.q4_0.bin
llama.cpp: loading model from d:\\models\\GPT4ALL\orca-mini-3b.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 3200
llama_model_load_internal: n_mult = 240
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 26
llama_model_load_internal: n_rot = 100
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 8640
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 3B
llama_model_load_internal: ggml ctx size = 0.06 MB
llama_model_load_internal: mem required = 2862.72 MB (+ 682.00 MB per state)
llama_new_context_with_model: kv self size = 650.00 MB
2023-08-10 13:55:31.896 | INFO | metagpt.config:__init__:46 - Config loading done.
2023-08-10 13:55:35.595 | INFO | metagpt.software_company:invest:39 - Investment: $3.0.
2023-08-10 13:55:35.599 | INFO | metagpt.roles.role:_act:167 - Alice(Product Manager): ready to WritePRD
Traceback (most recent call last):
File "D:\Github\MetaGPT\lib\site-packages\tenacity-8.2.2-py3.10.egg\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "D:\Github\MetaGPT\metagpt\actions\action.py", line 57, in _aask_v1
content = await self.llm.aask(prompt, system_msgs)
File "D:\Github\MetaGPT\metagpt\provider\base_gpt_api.py", line 44, in aask
rsp = await self.acompletion_text(message, stream=True)
File "D:\Github\MetaGPT\lib\site-packages\tenacity-8.2.2-py3.10.egg\tenacity\_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "D:\Github\MetaGPT\lib\site-packages\tenacity-8.2.2-py3.10.egg\tenacity\_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "D:\Github\MetaGPT\lib\site-packages\tenacity-8.2.2-py3.10.egg\tenacity\__init__.py", line 314, in iter
return fut.result()
File "C:\Users\bartl\.pyenv\pyenv-win\versions\3.10.11\lib\concurrent\futures\_base.py", line 451, in result
return self.__get_result()
File "C:\Users\bartl\.pyenv\pyenv-win\versions\3.10.11\lib\concurrent\futures\_base.py", line 403, in __get_result
raise self._exception
File "D:\Github\MetaGPT\lib\site-packages\tenacity-8.2.2-py3.10.egg\tenacity\_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "D:\Github\MetaGPT\metagpt\provider\openai_api.py", line 222, in acompletion_text
return await self._achat_completion_stream(messages)
File "D:\Github\MetaGPT\metagpt\provider\openai_api.py", line 151, in _achat_completion_stream
response = await openai.ChatCompletion.acreate(**self._cons_kwargs(messages), stream=True)
File "D:\Github\MetaGPT\lib\site-packages\openai-0.27.8-py3.10.egg\openai\api_resources\chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "D:\Github\MetaGPT\lib\site-packages\openai-0.27.8-py3.10.egg\openai\api_resources\abstract\engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "D:\Github\MetaGPT\lib\site-packages\openai-0.27.8-py3.10.egg\openai\api_requestor.py", line 382, in arequest
resp, got_stream = await self._interpret_async_response(result, stream)
File "D:\Github\MetaGPT\lib\site-packages\openai-0.27.8-py3.10.egg\openai\api_requestor.py", line 726, in _interpret_async_response
self._interpret_response_line(
File "D:\Github\MetaGPT\lib\site-packages\openai-0.27.8-py3.10.egg\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.AuthenticationError: Incorrect API key provided: dummyval. You can find your API key at https://platform.openai.com/account/api-keys.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\Github\MetaGPT\startup.py", line 40, in <module>
@geekan @stellaHSR
I started trying to use ChatGPT to work out how to extend the base class and the configuration, but I think it is beyond our ability for now. So I will have to wait for someone smarter to do it, or I will need to get a bit smarter about the architecture myself. I guess that's the same thing, really :)
openai.error.AuthenticationError: Incorrect API key provided: dummyval. You can find your API key at https://platform.openai.com/account/api-keys.
check your key
@bartanderson I am attempting the same thing. You'll need to change the server address for the API to your localhost:
OPENAI_API_BASE: "http://127.0.0.1:8000/v1"  # use the port that your local GPT server gives you
OPENAI_API_KEY: "xxxxxxxxxxxxxxxxxxxxxxxxxx"  # this is just a dummy value
Both settings are in MetaGPT/config/config.yaml.
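Before launching MetaGPT it is also worth confirming the server actually answers on that base URL. A small check along these lines (hedged: not every OpenAI-like server implements /v1/models, but most do):

import requests

base = "http://127.0.0.1:8000/v1"  # same value as OPENAI_API_BASE above
resp = requests.get(f"{base}/models", headers={"Authorization": "Bearer xxxxxxxx"})
resp.raise_for_status()
print([m["id"] for m in resp.json().get("data", [])])  # models the server reports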
However, when I run it locally, I get the following error:
> python startup.py "Write a cli snake game based on pygame" --code_review True
2023-08-14 16:24:22.781 | INFO | metagpt.config:__init__:44 - Config loading done.
2023-08-14 16:24:23.819 | INFO | metagpt.software_company:invest:39 - Investment: $3.0.
2023-08-14 16:24:23.819 | INFO | metagpt.roles.role:_act:167 - Alice(Product Manager): ready to WritePRD
You are a Product Manager, named Alice, your goal is Efficiently create a successful product, and the constraint is .
You are a Product Manager, named Alice, your goal is Efficiently create a successful product, and the constraint is .
Traceback (most recent call last):
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/actions/action.py", line 60, in _aask_v1
parsed_data = OutputParser.parse_data_with_mapping(content, output_data_mapping)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/utils/common.py", line 121, in parse_data_with_mapping
block_dict = cls.parse_blocks(data)
^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/utils/common.py", line 43, in parse_blocks
block_title, block_content = block.split("\n", 1)
^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<redacted>/MetaGPT/startup.py", line 40, in <module>
fire.Fire(main)
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/startup.py", line 36, in main
asyncio.run(startup(idea, investment, n_round, code_review, run_tests))
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/startup.py", line 24, in startup
await company.run(n_round=n_round)
File "<redacted>/MetaGPT/metagpt/software_company.py", line 60, in run
await self.environment.run()
File "<redacted>/MetaGPT/metagpt/environment.py", line 67, in run
await asyncio.gather(*futures)
File "<redacted>/MetaGPT/metagpt/roles/role.py", line 240, in run
rsp = await self._react()
^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/roles/role.py", line 209, in _react
return await self._act()
^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/roles/role.py", line 168, in _act
response = await self._rc.todo.run(self._rc.important_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/MetaGPT/metagpt/actions/write_prd.py", line 145, in run
prd = await self._aask_v1(prompt, "prd", OUTPUT_MAPPING)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<redacted>/miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f11aebbdc50 state=finished raised ValueError>]
I don't intend to use OpenAI; does this have its own local AI that I missed?
@bartanderson You'll need to run your own local, OpenAI-like server running the LLM of your choice. For instance, I am testing on llama-cpp-python's server mode. In my case, my server is running on port 8000, on my localhost.
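For reference, I start it with something along the lines of the command below; the model path is mine and the flags can differ between llama-cpp-python versions, so treat it as a sketch rather than the canonical invocation:

python -m llama_cpp.server --model /path/to/your/ggml-model.bin --host 127.0.0.1 --port 8000

Then point OPENAI_API_BASE in config.yaml at http://127.0.0.1:8000/v1 as described above.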
Yup. I have my eye on LocalAI, which currently has a build problem I am trying to get help with, and SimpleAI, which may be too simple yet, but who knows. So many projects to learn about and try. Did you do much other than expose the server? The instructions I saw didn't say anything about how to interface with anything other than OpenAI. I'd be interested in what you did.
Hello @dasam, joining the wagon weeks late: could you please help me understand whether any locally running models are producing the desired results?
As I see it, @bartanderson and @brian-toner have been discussing the same thing. Best.
@bartanderson I can hit my OpenAI-like server fine. I've been debugging MetaGPT to try and work out where my issue is coming from, but unfortunately I don't have a lot of time. I'll probably have more time over the weekend to see if I can figure it out.
Here is an update on my progress. I can confirm that MetaGPT does work against a local server. It turns out the "stop" field in the request body can be null, and the OpenAI-like API I am using (llama-cpp) doesn't handle nulls properly in that field. After fixing that on the server side, MetaGPT seems to work, though I did encounter another error:
2023-08-18 20:57:35.198 | INFO | metagpt.provider.openai_api:update_cost:81 - Total running cost: $0.073 | Max budget: $3.000 | Current cost: $0.047, prompt_tokens: 845, completion_tokens: 361
Traceback (most recent call last):
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 53, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/metagpt/actions/action.py", line 62, in _aask_v1
instruct_content = output_class(**parsed_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 8 validation errors for prd
Original Requirements
field required (type=value_error.missing)
Product Goals
field required (type=value_error.missing)
User Stories
field required (type=value_error.missing)
Competitive Quadrant Chart
field required (type=value_error.missing)
Requirement Analysis
field required (type=value_error.missing)
Requirement Pool
field required (type=value_error.missing)
UI Design draft
field required (type=value_error.missing)
Anything UNCLEAR
field required (type=value_error.missing)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/<redacted>/MetaGPT/startup.py", line 40, in <module>
fire.Fire(main)
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 466, in _Fire
component, remaining_args = _CallAndUpdateTrace(
^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/startup.py", line 36, in main
asyncio.run(startup(idea, investment, n_round, code_review, run_tests))
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/startup.py", line 24, in startup
await company.run(n_round=n_round)
File "/<redacted>/MetaGPT/metagpt/software_company.py", line 60, in run
await self.environment.run()
File "/<redacted>/MetaGPT/metagpt/environment.py", line 67, in run
await asyncio.gather(*futures)
File "/<redacted>/MetaGPT/metagpt/roles/role.py", line 240, in run
rsp = await self._react()
^^^^^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/metagpt/roles/role.py", line 209, in _react
return await self._act()
^^^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/metagpt/roles/role.py", line 168, in _act
response = await self._rc.todo.run(self._rc.important_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>/MetaGPT/metagpt/actions/write_prd.py", line 145, in run
prd = await self._aask_v1(prompt, "prd", OUTPUT_MAPPING)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 93, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/<redacted>miniconda3/envs/metagpt/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7fbf25361510 state=finished raised ValidationError>]
This can be worked around by increasing the number of retry attempts in action.py (around line 49), changing
@retry(stop=stop_after_attempt(2), wait=wait_fixed(1))
to something like:
@retry(stop=stop_after_attempt(99), wait=wait_fixed(1))
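For anyone not familiar with tenacity: stop_after_attempt and wait_fixed just control how many times the wrapped call is retried and the delay between attempts, and a RetryError (like the one in the traceback above) is raised once the attempts are exhausted. A tiny illustrative sketch, not MetaGPT code:

import random
from tenacity import retry, stop_after_attempt, wait_fixed

@retry(stop=stop_after_attempt(5), wait=wait_fixed(1))
def flaky_call():
    # stand-in for _aask_v1: a weak local model often returns unparseable output
    if random.random() < 0.7:
        raise ValueError("output did not parse")
    return "parsed OK"

print(flaky_call())  # retries up to 5 times, 1s apart; raises RetryError if all attempts fail

Raising the attempt count just gives a weaker local model more chances to produce output the parser accepts; it does not fix the underlying parsing issue.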
Have you tried using llama2-wrapper? It'd let you use an OpenAI-compatible API for your models.
Here is an update on my progress. I can confirm that MetaGPT does work against a local server. It turns out the "stop" field in the request body can be null, and the OpenAI-like API I am using (llama-cpp) doesn't handle nulls properly in that field. After fixing that on the server side, MetaGPT seems to work.
Do you have that fix in the llama-cpp-python server? Or could you make a PR there?
I would run llama2-wrapper (and it does work), but it doesn't support cuBLAS, so it ends up running the model on the CPU instead of the GPU.
@TinaTiel Unfortunately the code I used wasn't merged to main. The code I worked from was: https://raw.githubusercontent.com/ggerganov/llama.cpp/d8a8d0e536cfdaca0135f22d43fda80dc5e47cd8/examples/server/api_like_OAI.py
I found this linked on Stack Exchange, I think, explaining that there was a bug in the OpenAI-like API, so I didn't work off of the main branch. I was hoping it would get merged so I could push up my changes, but it doesn't seem like that is going to happen.
From this obscure branch, I changed the line:
if(is_present(body, "stop")): postData["stop"] += body["stop"]
to
if(is_present(body, "stop") and body["stop"] is not None ): postData["stop"] += body["stop"]
@bartanderson You'll need to run your own local, OpenAI-like server running the LLM of your choice. For instance, I am testing on llama-cpp-python's server mode. In my case, my server is running on port 8000, on my localhost.
Some questions:
- Are you still using LLaMA with MetaGPT without any problems?
- What exactly did you do? Only change the link and the API key?
- Is there any option to access LLaMA through a remote API?
- Are you using the actual LLaMA or the fine-tuned Code Llama?
I have a local Code Llama running via the FastChat OpenAI API and would love to use MetaGPT with it. Are there any changes other than the config and the retry attempts that need to be made?
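For reference, my setup is roughly the following; the model path, host, and flags are approximations of what I'm running and may differ between FastChat versions, so treat this as a sketch:

python3 -m fastchat.serve.controller
python3 -m fastchat.serve.model_worker --model-path /path/to/codellama
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000

and in MetaGPT/config/config.yaml:

OPENAI_API_BASE: "http://localhost:8000/v1"
OPENAI_API_KEY: "EMPTY"  # dummy value; as I understand it, FastChat does not check the key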
I would run llama2-wrapper (and it does work), but it doesn't support cuBLAS, so it ends up running the model on the CPU instead of the GPU.
I have a Docker image that uses llama.cpp built with Intel CLBlast to host an OpenAI-compatible model server. You may be able to modify it to build llama.cpp with cuBLAS instead.
https://github.com/itlackey/ipex-arc-fastchat