
Requirement Pool Error

Open tijmen opened this issue 2 years ago • 5 comments

I'm getting the following error. I'm running in the default docker and haven't made any changes to the codebase.

## Anything UNCLEAR:
There are no unclear points.
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
    result = await fn(*args, **kwargs)
  File "/app/metagpt/metagpt/actions/action.py", line 62, in _aask_v1
    instruct_content = output_class(**parsed_data)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 5 validation errors for prd
Requirement Pool -> 0
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 1
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 2
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 3
  value is not a valid tuple (type=type_error.tuple)
Requirement Pool -> 4
  value is not a valid tuple (type=type_error.tuple)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/metagpt/startup.py", line 36, in <module>
    fire.Fire(main)
  File "/usr/local/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/usr/local/lib/python3.9/site-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/usr/local/lib/python3.9/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/app/metagpt/startup.py", line 32, in main
    asyncio.run(startup(idea, investment, n_round, code_review))
  File "/usr/local/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/app/metagpt/startup.py", line 20, in startup
    await company.run(n_round=n_round)
  File "/app/metagpt/metagpt/software_company.py", line 60, in run
    await self.environment.run()
  File "/app/metagpt/metagpt/environment.py", line 56, in run
    await asyncio.gather(*futures)
  File "/app/metagpt/metagpt/roles/role.py", line 239, in run
    rsp = await self._react()
  File "/app/metagpt/metagpt/roles/role.py", line 208, in _react
    return await self._act()
  File "/app/metagpt/metagpt/roles/role.py", line 167, in _act
    response = await self._rc.todo.run(self._rc.important_memory)
  File "/app/metagpt/metagpt/actions/write_prd.py", line 145, in run
    prd = await self._aask_v1(prompt, "prd", OUTPUT_MAPPING)
  File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
    return await fn(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in __call__
    do = self.iter(retry_state=retry_state)
  File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
    raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7fd4829ca760 state=finished raised ValidationError>]
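(The `RetryError` at the bottom just means tenacity exhausted its retry attempts; the real failure is the `ValidationError` chained above it. A minimal pure-Python sketch of that wrapping behavior, with illustrative names rather than tenacity's actual internals:)

```python
class RetryError(Exception):
    """Raised when all attempts fail; chains the last underlying error."""

def retry(attempts: int):
    # Simplified stand-in for tenacity's retry decorator.
    def decorator(fn):
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    last_exc = exc  # remember the real failure
            # Surface the retry failure, chaining the root cause,
            # which is why the traceback shows both exceptions.
            raise RetryError(f"{attempts} attempts failed") from last_exc
        return wrapper
    return decorator

@retry(attempts=3)
def always_fails():
    # Stands in for _aask_v1 raising pydantic's ValidationError.
    raise ValueError("validation failed")

try:
    always_fails()
except RetryError as err:
    print(type(err.__cause__).__name__)  # → ValueError
```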

tijmen avatar Aug 08 '23 12:08 tijmen

same here

magorik avatar Aug 08 '23 16:08 magorik

Pretty sure the issue is that the Requirement Pool list values aren't being parsed as tuples here; I haven't found a fix yet. Here's an example snippet of parsed_data whose entries OUTPUT_MAPPING expects to be tuples:

'Requirement Pool': [
    '- End game when the snake hits the wall or itself (P0)',
    '- Implement arrow key controls for snake movement (P0)',
    '- Increase snake length when it eats food (P0)',
    '- Display current score and highest score achieved (P1)',
    '- Pause and resume the game (P1)'
  ],
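(One possible workaround is to coerce those strings into tuples before validation. A hypothetical sketch: `parse_requirement_pool` is not part of MetaGPT, and the `(priority, text)` ordering is an assumption based on OUTPUT_MAPPING expecting a list of two-element tuples:)

```python
import re
from typing import List, Tuple

def parse_requirement_pool(items: List[str]) -> List[Tuple[str, str]]:
    """Coerce strings like '- Do X (P0)' into (priority, text) tuples.

    Hypothetical helper; the tuple ordering is assumed, not taken
    from MetaGPT's actual prompt format.
    """
    parsed = []
    for item in items:
        # Capture the requirement text and a trailing '(P0)'-style priority.
        match = re.match(r"^-?\s*(.*?)\s*\((P\d)\)\s*$", item)
        if match:
            text, priority = match.groups()
            parsed.append((priority, text))
        else:
            # Keep unparseable items rather than dropping them.
            parsed.append(("P?", item.strip()))
    return parsed

pool = [
    '- End game when the snake hits the wall or itself (P0)',
    '- Display current score and highest score achieved (P1)',
]
print(parse_requirement_pool(pool))
# → [('P0', 'End game when the snake hits the wall or itself'),
#    ('P1', 'Display current score and highest score achieved')]
```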

bojdell avatar Aug 08 '23 23:08 bojdell

  1. Make sure your network can access OPENAI_BASE_API.
  2. Change the model to gpt-3.5-turbo-16k or gpt-4 and try again.
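(For reference, the model is switched via MetaGPT's config file; the key names below follow the project's config template of that era and should be treated as an assumption if your version differs:)

```yaml
# config/key.yaml (illustrative placeholder values)
OPENAI_API_KEY: "sk-..."
OPENAI_API_MODEL: "gpt-4"   # or "gpt-3.5-turbo-16k"
```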

voidking avatar Aug 10 '23 10:08 voidking

  1. Make sure your network can access OPENAI_BASE_API.

No problem with that.

  2. Change the model to gpt-3.5-turbo-16k or gpt-4 and try again.

I'm using gpt-3.5-turbo-16k.

tijmen avatar Aug 10 '23 10:08 tijmen

This problem actually comes from the poor instruction following of gpt-3.5-turbo; gpt-4 basically does not have this problem.

geekan avatar Sep 09 '23 03:09 geekan