JSONDecodeError
Bug description
The Project Manager transforms the Architect's WriteDesign output, but the result is not valid JSON; it then tries to parse it as JSON and fails.
Run command
metagpt.exe 'create flappy bird as a web app using only vanilla js, html, and css'
Environment information
- LLM type and model name: ollama llama3
- System version: Windows 10
- Python version: 3.10.6
- MetaGPT version or branch:
main branch, likely commit 38cea1daf2b87ebc31a56c995d7857d07289fa70
- packages version:
pip 22.3.1
- installation method:
pip install --upgrade git+https://github.com/geekan/MetaGPT.git
Screenshots or logs
WriteDesign from Architect
{
"Implementation approach": "We will implement Flappy Bird as a web app using vanilla JavaScript, HTML, and CSS. We will analyze the difficult points of the requirements, select the appropriate open-source framework for responsive design, and use keyboard or touch even
ts to control the bird's movement.",
"File list": [
"index.html",
"style.css",
"script.js"
],
"Data structures and interfaces": "
classDiagram
class FlappyBirdGame {
-birdMovementAlgorithm
+startGame()
+updateGame()
+renderGame()
}
class BirdMovementAlgorithm {
-pipeMovementAlgorithm
+calculateBirdPosition()
+checkCollision()
}
class PipeMovementAlgorithm {
+generatePipes()
+movePipes()
}
FlappyBirdGame --> BirdMovementAlgorithm
BirdMovementAlgorithm --> PipeMovementAlgorithm
",
"Program call flow": "
sequenceDiagram
participant FBG as FlappyBirdGame
participant BMA as BirdMovementAlgorithm
participant PMA as PipeMovementAlgorithm
FBG->>BMA: startGame()
BMA->>PMA: generatePipes()
PMA-->>BMA: return pipes
BMA->>FBG: updateGame()
FBG->>BMA: renderGame()
",
"Anything UNCLEAR": "Clarification needed on scoring system and game over logic."
}
WriteTasks from Project Manager
{
"Implementation approach": "We will implement Flappy Bird as a web app using vanilla JavaScript, HTML, and CSS. We will analyze the difficult points of the requirements, select the appropriate open-ource framework for responsive design, and use keyboard or touch event
s to control the bird's movement.",
"File list": ["index.html", "style.css", "script.js"],
"Data structures and interfaces":
classDiagram
class FlappyBirdGame {
-birdMovementAlgorithm
+startGame()
+updateGame()
+renderGame()
}
class BirdMovementAlgorithm {
-pipeMovementAlgorithm
+calculateBirdPosition()
+checkCollision()
}
class PipeMovementAlgorithm {
+generatePipes()
+movePipes()
}
FlappyBirdGame --> BirdMovementAlgorithm
BirdMovementAlgorithm --> PipeMovementAlgorithm,
"Program call flow":
sequenceDiagram
participant FBG as FlappyBirdGame
participant BMA as BirdMovementAlgorithm
participant PMA as PipeMovementAlgorithm
FBG->>BMA: startGame()
BMA->>PMA: generatePipes()
PMA-->>BMA: return pipes
BMA->>FBG: updateGame()
FBG->>BMA: renderGame(),
"Anything UNCLEAR": "Clarification needed on scoring system and game over logic."
}
Actual error from stderr:
json.decoder.JSONDecodeError: Expecting value: line 5 column 9 (char 415)
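The failure is visible by eye: in the WriteTasks output the mermaid blocks are no longer quoted strings, so the document is not valid JSON. A minimal hypothetical reduction (not the actual payload) reproduces the same class of error:

```python
import json

# Hypothetical reduction of the WriteTasks output above: the mermaid block
# after "Data structures and interfaces": is emitted as a bare token instead
# of a quoted string, so the document is not JSON at all.
broken = """{
    "File list": ["index.html", "style.css", "script.js"],
    "Data structures and interfaces":
        classDiagram
}"""

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    print(e)  # the parser hits the unquoted token where a JSON value must start
```

The reported "Expecting value: line 5 column 9" points at exactly this spot in the real payload.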
Ollama is not as effective as OpenAI. Try setting repair_llm_output: true in config2.yaml to get better output.
Still getting the same error with that config option. My ~/.metagpt/config2.yaml:
llm:
  api_type: 'ollama'
  base_url: 'http://127.0.0.1:11434/api'
  model: 'llama3'
  repair_llm_output: true
I'm getting a similar error with Ollama: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) when running metagpt from the command line. I filed an issue: https://github.com/geekan/MetaGPT/issues/1383. I added repair_llm_output: true to ~/.metagpt/config2.yaml, but I still get the same error.
$ metagpt "Create a 2048 game"
2024-07-07 14:37:19.995 | INFO | metagpt.const:get_metagpt_package_root:29 - Package root set to /Users/ken/devel/ai/agents/metagpt_novel
2024-07-07 14:37:27.413 | INFO | metagpt.team:invest:90 - Investment: $3.0.
2024-07-07 14:37:27.414 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do PrepareDocuments(PrepareDocuments)
2024-07-07 14:37:27.535 | INFO | metagpt.utils.file_repository:save:57 - save to: /Users/ken/devel/ai/agents/metagpt_novel/workspace/20240707143727/docs/requirement.txt
2024-07-07 14:37:27.538 | INFO | metagpt.roles.role:_act:391 - Alice(Product Manager): to do WritePRD(WritePRD)
2024-07-07 14:37:27.540 | INFO | metagpt.actions.write_prd:run:86 - New requirement detected: Create a 2048 game
Sure! Here's an example of how you could fill in the nodes for a 2048 game project:
### Language
Python
### Original Requirements
Create a 2048 game where players can move tiles around to combine them and reach the goal of having a 2048 tile.
### Project Name
Game_2048
### Product Goals
1. Create an engaging user experience
2. Improve accessibility, be responsive
3. More beautiful UI
### User Stories
1. As a player, I want to be able to choose difficulty levels so that the game is challenging but not too hard or too easy.
2. As a player, I want to see my score after each game so that I can track my progress and compete with others.
3. As a player, I want to get a restart button when I lose so that I can try again without feeling frustrated.
4. As a player, I want to see beautiful UI that makes me feel good and motivates me to play more.
5. As a player, I want to play the game via mobile phone so that I can play it anywhere, anytime.
### Competitive Analysis
1. 2048 Game A: Simple interface, lacks responsive features
2. Play2048.co: Beautiful and responsive UI with my best score shown
3. 2048game.com: Responsive UI with my best score shown, but many ads
### Competitive Quadrant Chart
Title: Reach and engagement of campaigns
X-axis: Low Reach --> High Reach
Y-axis: Low Engagement --> High Engagement
Quadrant 1: We should expand
Quadrant 2: Need to promote
Quadrant 3: Re-evaluate
Quadrant 4: May be improved
Campaign A: [0.3, 0.6]
Campaign B: [0.45, 0.23]
Campaign C: [0.57, 0.69]
Campaign D: [0.78, 0.34]
Campaign E: [0.40, 0.34]
Campaign F: [0.35, 0.78]
Our Target Product: [0.5, 0.6]
### Requirement Analysis
Not applicable for this project.
### Requirement Pool
1. P0: The main code of the game should be easy to understand and modify.
2. P0: The game algorithm should be efficient and fast.
3. P1: The UI should be visually appealing and easy to use.
4. P2: The game should be responsive and work well on different devices.
### UI Design draft
Basic function description with a simple style and layout.
### Anything UNCLEAR
Nothing is unclear for this project.
2024-07-07 14:40:39.306 | WARNING | metagpt.utils.cost_manager:update_cost:49 - Model llama2:latest not found in TOKEN_COSTS.
2024-07-07 14:40:39.317 | WARNING | metagpt.utils.repair_llm_raw_output:extract_content_from_output:320 - extract_content try another pattern: \[CONTENT\]([\s\S]*)\[/CONTENT\]
2024-07-07 14:40:39.318 | WARNING | metagpt.utils.repair_llm_raw_output:run_and_passon:268 - parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting value: line 1 column 1 (char 0)
2024-07-07 14:40:39.318 | INFO | metagpt.utils.repair_llm_raw_output:repair_invalid_json:237 - repair_invalid_json, raw error: Expecting value: line 1 column 1 (char 0)
2024-07-07 14:40:39.319 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 191.776(s), this was the 1st time calling it. exp: RetryError[<Future at 0x130ca46d0 state=finished raised JSONDecodeError>]
Ollama is definitely being used. The CPU usage is at 700% and my laptop is humming.
The problem is in _decode_and_load:

def _decode_and_load(self, chunk: bytes, encoding: str = "utf-8") -> dict:
    chunk = chunk.decode(encoding)
    return json.loads(chunk)
The error info is:
>>> chunk
'data: {"id":"chatcmpl-471","object":"chat.completion.chunk","created":1720838591,"model":"llama3","system_fingerprint":"fp_ollama","choices":[{"index":0,"delta":{"role":"assistant","content":"I"},"finish_reason":null}]}\n'
>>> json.loads(chunk)
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Cellar/[email protected]/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/opt/homebrew/Cellar/[email protected]/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/homebrew/Cellar/[email protected]/3.9.19/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Maybe you can modify it as follows:

def _decode_and_load(self, chunk: bytes, encoding: str = "utf-8") -> dict:
    chunk = chunk.decode(encoding)
    # Strip the SSE "data: " framing before parsing (str.removeprefix needs Python 3.9+)
    json_data = chunk.removeprefix('data: ').strip()
    if len(json_data) == 0:
        # keep-alive / blank line between events
        return {}
    elif json_data.lower().find("done") != -1:
        # crude check for the end-of-stream sentinel
        return {"done": True}
    else:
        ret = json.loads(json_data)
        # expose the streamed delta under "message", matching Ollama's native schema
        delta = ret.get('choices', [{}])[0].get('delta', {})
        ret["message"] = delta
        return ret

And in the streaming loop:

async for raw_chunk in stream_resp:
    chunk = self._decode_and_load(raw_chunk)
    if chunk == {}:
        continue
    if not chunk.get("done", False):
        ...
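For reference, the sample chunk from the log above parses cleanly once the SSE framing is stripped. A standalone sketch (the helper name is made up; the real fix would live in _decode_and_load):

```python
import json

# Streamed chunk copied from the log above: the OpenAI-compatible stream
# frames each JSON object as a server-sent event with a "data: " prefix,
# which plain json.loads cannot parse.
raw = (b'data: {"id":"chatcmpl-471","object":"chat.completion.chunk",'
       b'"created":1720838591,"model":"llama3","system_fingerprint":"fp_ollama",'
       b'"choices":[{"index":0,"delta":{"role":"assistant","content":"I"},'
       b'"finish_reason":null}]}\n')

def decode_sse_chunk(raw: bytes) -> dict:
    """Hypothetical helper mirroring the proposed fix: drop the SSE framing,
    then parse the remaining JSON payload (str.removeprefix needs Python 3.9+)."""
    text = raw.decode("utf-8").removeprefix("data: ").strip()
    if not text or text == "[DONE]":
        return {}  # keep-alive blank line or end-of-stream sentinel
    return json.loads(text)

chunk = decode_sse_chunk(raw)
print(chunk["choices"][0]["delta"]["content"])  # prints "I"
```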
Due to the lack of updates or replies from the user for a long time, we will close this issue. Please reopen it if necessary.
This is still an issue. How many people use ollama or a local LLM?
This is still an issue as of 2025-02-24. @better629 @jtara1 @kjenney @voidking @ZhengQiushi
- config
config2.yaml
llm:
  api_type: "ollama"
  model: "deepseek-r1:14b"  # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
  base_url: "http://192.168.93.186:11434"  # or forward url / other llm url
  repair_llm_output: true
mermaid:
  engine: "nodejs"
  path: "mmdc"
  puppeteer_config: "/app/metagpt/config/puppeteer-config.json"
  pyppeteer_path: "/usr/bin/chromium"
- launch docker metagpt
$ docker run --network host --privileged -v /opt/metagpt/config/config2.yaml:/app/metagpt/config/config2.yaml -v /opt/metagpt/workspace:/app/metagpt/workspace metagpt/metagpt:latest metagpt "Write a cli snake game"
2025-02-24 11:01:12.126 | INFO | metagpt.const:get_metagpt_package_root:21 - Package root set to /app/metagpt
2025-02-24 11:01:16.099 | INFO | metagpt.team:invest:93 - Investment: $3.0.
2025-02-24 11:01:16.101 | INFO | metagpt.roles.role:_act:403 - Alice(Product Manager): to do PrepareDocuments(PrepareDocuments)
2025-02-24 11:01:16.175 | INFO | metagpt.utils.file_repository:save:57 - save to: /app/metagpt/workspace/20250224110116/docs/requirement.txt
2025-02-24 11:01:16.176 | INFO | metagpt.roles.role:_act:403 - Alice(Product Manager): to do WritePRD(WritePRD)
2025-02-24 11:01:16.177 | INFO | metagpt.actions.write_prd:run:86 - New requirement detected: Write a cli snake game
2025-02-24 11:01:16.196 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.018(s), this was the 1st time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:16.569 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 0.391(s), this was the 2nd time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:18.088 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 1.910(s), this was the 3rd time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:19.692 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 3.514(s), this was the 4th time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:20.731 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 4.553(s), this was the 5th time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:26.350 | ERROR | metagpt.utils.common:log_it:554 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 10.172(s), this was the 6th time calling it. exp: Extra data: line 1 column 5 (char 4)
2025-02-24 11:01:26.351 | WARNING | metagpt.utils.common:wrapper:673 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.
2025-02-24 11:01:26.359 | ERROR | metagpt.utils.common:wrapper:655 - Exception occurs, start to serialize the project, exp:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "/app/metagpt/metagpt/actions/action_node.py", line 437, in _aask_v1
content = await self.llm.aask(prompt, system_msgs, images=images, timeout=timeout)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/metagpt/metagpt/utils/common.py", line 664, in wrapper
return await func(self, *args, **kwargs)
File "/app/metagpt/metagpt/roles/role.py", line 551, in run
rsp = await self.react()
tenacity.RetryError: RetryError[<Future at 0x7f3a2aa98dc0 state=finished raised JSONDecodeError>]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/metagpt/metagpt/utils/common.py", line 650, in wrapper
result = await func(self, *args, **kwargs)
File "/app/metagpt/metagpt/team.py", line 134, in run
await self.env.run()
Exception: Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "/app/metagpt/metagpt/actions/action_node.py", line 437, in _aask_v1
content = await self.llm.aask(prompt, system_msgs, images=images, timeout=timeout)
File "/app/metagpt/metagpt/provider/base_llm.py", line 152, in aask
rsp = await self.acompletion_text(message, stream=stream, timeout=self.get_timeout(timeout))
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
File "/app/metagpt/metagpt/provider/base_llm.py", line 202, in acompletion_text
return await self._achat_completion_stream(messages, timeout=self.get_timeout(timeout))
File "/app/metagpt/metagpt/provider/ollama_api.py", line 252, in _achat_completion_stream
return self._processing_openai_response(resp)
File "/app/metagpt/metagpt/provider/ollama_api.py", line 257, in _processing_openai_response
resp = self.ollama_message.decode(openai_resp)
File "/app/metagpt/metagpt/provider/ollama_api.py", line 41, in decode
return json.loads(response.data.decode("utf-8"))
File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 1 column 5 (char 4)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/metagpt/metagpt/utils/common.py", line 664, in wrapper
return await func(self, *args, **kwargs)
File "/app/metagpt/metagpt/roles/role.py", line 551, in run
rsp = await self.react()
File "/app/metagpt/metagpt/roles/role.py", line 520, in react
rsp = await self._react()
File "/app/metagpt/metagpt/roles/role.py", line 475, in _react
rsp = await self._act()
File "/app/metagpt/metagpt/roles/role.py", line 404, in _act
response = await self.rc.todo.run(self.rc.history)
File "/app/metagpt/metagpt/actions/write_prd.py", line 87, in run
return await self._handle_new_requirement(req)
File "/app/metagpt/metagpt/actions/write_prd.py", line 108, in _handle_new_requirement
node = await WRITE_PRD_NODE.fill(context=context, llm=self.llm, exclude=exclude) # schema=schema
File "/app/metagpt/metagpt/actions/action_node.py", line 648, in fill
return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)
File "/app/metagpt/metagpt/actions/action_node.py", line 473, in simple_fill
content, scontent = await self._aask_v1(
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x7f3a2aa98dc0 state=finished raised JSONDecodeError>]
This is no longer an issue for me using the latest version of Ollama and the latest version of MetaGPT.
Yes! I am using the latest versions of Ollama and MetaGPT.
$ ollama --version
ollama version is 0.5.12
$ ollama run deepseek-r1:14b
>>> 1+1=
<think>
</think>
The sum of 1 and 1 is:
\[
1 + 1 = \boxed{2}
\]
>>> Send a message (/? for help)
$ proxychains docker pull metagpt/metagpt:latest
ProxyChains-3.1 (http://proxychains.sf.net)
latest: Pulling from metagpt/metagpt
Digest: sha256:65e94d386a8aba2a7d9cb99e171d0897f9fccc2f3b1e4c61307fc1a2807b098b
Status: Image is up to date for metagpt/metagpt:latest
docker.io/metagpt/metagpt:latest
$ cat config2.yaml
llm:
  api_type: "ollama"
  model: "deepseek-r1:14b"  # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
  base_url: "http://0.0.0.0:11434"  # or forward url / other llm url
  repair_llm_output: true
mermaid:
  engine: "nodejs"
  path: "mmdc"
  puppeteer_config: "/app/metagpt/config/puppeteer-config.json"
  pyppeteer_path: "/usr/bin/chromium"
$ docker run --network host --privileged -v /opt/metagpt/config/config2.yaml:/app/metagpt/config/config2.yaml -v /opt/metagpt/workspace:/app/metagpt/workspace metagpt/metagpt:latest metagpt "Write a cli snake game"
The ollama endpoint is http://127.0.0.1:11434/api
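One plausible reading of the earlier "Extra data: line 1 column 5 (char 4)" failures: with the /api suffix missing from base_url, the provider's request path falls off the JSON API and the server answers with plain text such as "404 page not found", whose leading "404" parses as a bare JSON number. A sketch (the chat_url helper is made up for illustration; the real path-joining happens inside MetaGPT's Ollama provider):

```python
import json

def chat_url(base_url: str) -> str:
    # Made-up illustration: a provider typically appends its own route to
    # base_url, so the /api suffix must already be present in the config.
    return base_url.rstrip("/") + "/chat"

print(chat_url("http://127.0.0.1:11434"))      # misses the JSON API entirely
print(chat_url("http://127.0.0.1:11434/api"))  # reaches /api/chat

# A plain-text "404 page not found" body reproduces the exact error seen
# above: json parses the leading "404" as a number, then chokes at char 4.
try:
    json.loads("404 page not found")
except json.JSONDecodeError as e:
    print(e)  # Extra data: line 1 column 5 (char 4)
```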
@kjenney Thank you very much! It works
$ cat config2.yaml
llm:
  api_type: "ollama"
  model: "deepseek-r1:14b"  # or gpt-3.5-turbo-1106 / gpt-4-1106-preview
  base_url: "http://127.0.0.1:11434/api"  # or forward url / other llm url
  repair_llm_output: true
mermaid:
  engine: "nodejs"
  path: "mmdc"
  puppeteer_config: "/app/metagpt/config/puppeteer-config.json"
  pyppeteer_path: "/usr/bin/chromium"
I am using deepseek-r1:14b for inference
$ docker run --network host --privileged -v /opt/metagpt/config/config2.yaml:/app/metagpt/config/config2.yaml -v /opt/metagpt/workspace:/app/metagpt/workspace metagpt/metagpt:latest metagpt "Write a command-line program to input any number and print how many small stars"
...
...
...
$ python3 main.py
Enter a number between 1 and 100: 10
**********
$ python3 main.py
Enter a number between 1 and 100: 33
*********************************