Browser tool crashes and deadlocks on subsequent request
User: https://gptme.org/docs/evals.html
```
[22:37:50] ERROR    Error in browser thread                                      _browser_thread.py:76
Traceback (most recent call last):
  File "/home/erb/Programming/gptme/gptme/tools/_browser_thread.py", line 72, in _run
    result = cmd.func(browser, *cmd.args, **cmd.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/erb/Programming/gptme/gptme/tools/_browser_playwright.py", line 30, in _load_page
    context = browser.new_context(
              ^^^^^^^^^^^^^^^^^^^^
  File "/home/erb/.cache/pypoetry/virtualenvs/gptme--mVb8r6G-py3.12/lib/python3.12/site-packages/playwright/sync_api/_generated.py", line 13928, in new_context
    self._sync(
  File "/home/erb/.cache/pypoetry/virtualenvs/gptme--mVb8r6G-py3.12/lib/python3.12/site-packages/playwright/_impl/_sync_base.py", line 115, in _sync
    return task.result()
           ^^^^^^^^^^^^^
  File "/home/erb/.cache/pypoetry/virtualenvs/gptme--mVb8r6G-py3.12/lib/python3.12/site-packages/playwright/_impl/_browser.py", line 129, in new_context
    channel = await self._channel.send("newContext", params)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/erb/.cache/pypoetry/virtualenvs/gptme--mVb8r6G-py3.12/lib/python3.12/site-packages/playwright/_impl/_connection.py", line 61, in send
    return await self._connection.wrap_api_call(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/erb/.cache/pypoetry/virtualenvs/gptme--mVb8r6G-py3.12/lib/python3.12/site-packages/playwright/_impl/_connection.py", line 528, in wrap_api_call
    raise rewrite_error(error, f"{parsed_st['apiName']}: {error}") from None
Exception: Browser.new_context: Connection closed while reading from the driver
           WARNING  Failed to read URL https://gptme.org/docs/evals.html: Browser.new_context: Connection closed while reading from the driver  chat.py:491
```
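The first failure looks like the Playwright driver process dying out from under `Browser.new_context`. If the browser thread then exits, or keeps handing the dead `browser` object to later commands, the next request has nothing to service it, which would explain the stall below. A minimal sketch of a more defensive command loop, assuming a queue-based layout like the traceback implies (`cmd_queue`, `result_queue`, and the command shape here are hypothetical, not gptme's actual internals):

```python
import logging
import queue

from playwright.sync_api import Error as PlaywrightError, sync_playwright

logger = logging.getLogger(__name__)


def _run(cmd_queue: queue.Queue, result_queue: queue.Queue) -> None:
    """Browser worker loop that survives individual command failures."""
    pw = sync_playwright().start()
    browser = pw.chromium.launch()
    while True:
        cmd = cmd_queue.get()
        if cmd is None:  # shutdown sentinel
            break
        try:
            if not browser.is_connected():
                # The driver died earlier ("Connection closed while reading
                # from the driver"); restart Playwright rather than calling
                # into a dead browser.
                logger.warning("Browser disconnected, restarting Playwright")
                pw.stop()
                pw = sync_playwright().start()
                browser = pw.chromium.launch()
            result = cmd.func(browser, *cmd.args, **cmd.kwargs)
            result_queue.put(("ok", result))
        except PlaywrightError as e:
            # Hand the error back to the caller; never let the worker
            # thread die with commands still queued.
            logger.exception("Error in browser thread")
            result_queue.put(("error", e))
    browser.close()
    pw.stop()
```

Reporting the error back through the queue would also let a retry fail fast with a real message instead of silently running into the 30s timeout.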
After trying it a second time, it seemed to stall/deadlock, but I eventually got:

```
[22:38:36] WARNING  Failed to read URL https://gptme.org/docs/evals.html: Browser operation timed out after 30s
```
Should probably fix the underlying issue, and also emit more log messages during unusually long operations so users aren't left wondering whether anything is happening.
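For the logging side, the caller could poll the result queue in short intervals and log progress while it waits, so a long page load reads as work in progress rather than a hang. A sketch, reusing the hypothetical queue layout from above:

```python
import logging
import queue
import time

logger = logging.getLogger(__name__)


def wait_for_result(result_queue: queue.Queue,
                    timeout: float = 30.0,
                    notify_every: float = 5.0):
    """Wait for the browser thread's reply, logging while the wait drags on."""
    deadline = time.monotonic() + timeout
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise TimeoutError(f"Browser operation timed out after {timeout:.0f}s")
        try:
            # Wake up periodically instead of blocking for the full timeout.
            return result_queue.get(timeout=min(notify_every, remaining))
        except queue.Empty:
            logger.info("Browser operation still running (%.0fs until timeout)",
                        deadline - time.monotonic())
```

The 5s interval is arbitrary; anything short enough to reassure the user that the operation hasn't hung would do.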