connection was closed in the middle of operation
- asyncpg version: 0.15.0
- PostgreSQL version: 10.3
- local PostgreSQL:
- Python version: 3.6.5
- Platform: Ubuntu 14.04.5
- Do you use pgbouncer?: no
- Did you install asyncpg with pip?: yes
I use asyncpg with sanic. Before the server starts, a connection pool is created and attached to the app. In every route, the handler acquires a connection if PG access is needed. The problem is, if the pool has been silent for too long, a newly acquired connection is not usable:
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
After that one-time exception, the next acquisition is normal again and queries raise no exceptions, which is really weird to me.
I've tried tweaking the max_inactive_connection_lifetime parameter, with no luck. Actually, I don't quite understand this parameter. Why would I need it?
Any help is welcome.
max_inactive_connection_lifetime is used to automatically close unused connections in the pool after a period of inactivity. Please provide an example of the code that triggers this.
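For illustration, a minimal sketch of how the parameter is applied when the pool is created (not code from this thread; the connection parameters are hypothetical):

import asyncpg

async def make_pool():
    # Connections left idle in the pool for longer than this many seconds
    # are closed automatically and replaced on a later acquire();
    # 0 disables the mechanism (the default is 300 seconds).
    return await asyncpg.create_pool(
        database='foo',  # hypothetical connection parameters
        max_inactive_connection_lifetime=300.0,
    )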
Thank you for your response. I use Sanic as I've said. Here's the code.
DB_CONFIG = {
    'host': '127.0.0.1',
    'user': 'postgres',
    'password': 'pwabc',
    'port': 5432,
    'database': 'foo'
}

@app.listener('before_server_start')
async def init(app, loop):
    app.pool = await create_pool(
        **DB_CONFIG, loop=loop,
        max_size=10, max_inactive_connection_lifetime=100
    )

@bp.route('/info', methods=['GET'])
async def info(request):
    async with request.app.pool.acquire() as conn:
        d = await conn.fetchrow('''SELECT * FROM info''')
    return json(dict(d))
I know this is not the solution, but I altered max_inactive_connection_lifetime to a radical 3 seconds, and the problem seems to be gone now.
Please post a complete traceback for the error you're getting. Had it been the culprit, max_inactive_connection_lifetime set to 3 would make this error occur more frequently, not less.
Here's the full traceback.
[2018-06-14 13:45:36 +0800] [37430] [ERROR] ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/sanic/app.py", line 556, in handle_request
response = await response
File "/usr/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
return self.gen.send(None)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/sanic_jwt/decorators.py", line 35, in decorated_function
response = await response
File "/usr/lib/python3.6/asyncio/coroutines.py", line 110, in __next__
return self.gen.send(None)
File "/home/user/Projects/dfdc/app/routes/commit.py", line 407, in commit_detail
tts = await asyncio.gather(*[gather_person_tracks(p) for p in persons])
File "/usr/lib/python3.6/asyncio/coroutines.py", line 126, in send
return self.gen.send(value)
File "/home/user/Projects/dfdc/app/routes/commit.py", line 331, in gather_person_tracks
person['id'])
File "/usr/lib/python3.6/asyncio/coroutines.py", line 129, in throw
return self.gen.throw(type, value, traceback)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/asyncpg/connection.py", line 359, in fetch
return await self._execute(query, args, 0, timeout)
File "/usr/lib/python3.6/asyncio/coroutines.py", line 129, in throw
return self.gen.throw(type, value, traceback)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/asyncpg/connection.py", line 1303, in _execute
query, args, limit, timeout, return_status=return_status)
File "/usr/lib/python3.6/asyncio/coroutines.py", line 129, in throw
return self.gen.throw(type, value, traceback)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/asyncpg/connection.py", line 1311, in __execute
return await self._do_execute(query, executor, timeout)
File "/usr/lib/python3.6/asyncio/coroutines.py", line 129, in throw
return self.gen.throw(type, value, traceback)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/asyncpg/connection.py", line 1323, in _do_execute
stmt = await self._get_statement(query, None)
File "/usr/lib/python3.6/asyncio/coroutines.py", line 129, in throw
return self.gen.throw(type, value, traceback)
File "/home/user/Projects/dfdc/env/lib/python3.6/site-packages/asyncpg/connection.py", line 288, in _get_statement
statement = await self._protocol.prepare(stmt_name, query, timeout)
File "asyncpg/protocol/protocol.pyx", line 171, in prepare
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
[2018-06-14 13:45:36 +0800] [37430] [ERROR] Exception occurred while handling uri: "http://192.168.1.29:7230/person/commit/90?order=id&page=1&src=&state=&step=5"
ConnectionResetError: [Errno 104] Connection reset by peer
It looks like your server is dropping the connections for some reason:
ConnectionResetError: [Errno 104] Connection reset by peer
Your setting max_inactive_connection_lifetime to a very low value "fixes" that because it makes the pool recycle the connections all the time. It would be interesting to look at the relevant portion of the PostgreSQL log.
Sorry it took so long.
2018-05-31 09:30:24.272 UTC [1985] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.273 UTC [1979] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.274 UTC [1986] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.274 UTC [1991] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.276 UTC [1980] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.277 UTC [1987] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.277 UTC [1990] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.278 UTC [1982] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.279 UTC [1989] LOG: could not receive data from client: Connection reset by peer
2018-05-31 09:30:24.279 UTC [1984] LOG: could not receive data from client: Connection reset by peer
2018-05-31 10:39:13.025 UTC [2114] LOG: could not receive data from client: Connection reset by peer
2018-06-01 11:05:16.170 UTC [4073] FATAL: sorry, too many clients already
2018-06-01 11:05:16.170 UTC [4068] FATAL: sorry, too many clients already
2018-06-01 11:05:16.170 UTC [4074] FATAL: sorry, too many clients already
2018-06-01 11:05:16.170 UTC [4071] FATAL: sorry, too many clients already
2018-06-01 11:05:16.170 UTC [4063] FATAL: sorry, too many clients already
2018-06-01 11:05:16.170 UTC [4069] FATAL: sorry, too many clients already
This max_inactive_connection_lifetime workaround has cost me something, it seems.
I have a similar problem:
- asyncpg version: 0.16.0
- PostgreSQL version: 9.6
- local PostgreSQL:
- Python version: 3.6.4
- Platform: FreeBSD 11.1
- Do you use pgbouncer?: no
- Did you install asyncpg with pip?: yes
File "/usr/local/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/home/admin/bot/app/__main__.py", line 5, in <module>
web_app()
File "/usr/home/admin/bot/aiohttp_boilerplate/bootstrap/__init__.py", line 40, in web_app
loop=loop,
File "uvloop/loop.pyx", line 1448, in uvloop.loop.Loop.run_until_complete
File "/usr/home/admin/bot/aiohttp_boilerplate/dbpool/pg.py", line 51, in create_pool
setup=setup_connection
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/pool.py", line 403, in _async__init__
await first_ch.connect()
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/pool.py", line 125, in connect
self._con = await self._pool._get_new_connection()
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/pool.py", line 452, in _get_new_connection
**self._connect_kwargs)
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/connection.py", line 1577, in connect
max_cacheable_statement_size=max_cacheable_statement_size)
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 426, in _connect
connection_class=connection_class)
File "/usr/home/admin/bot/env/lib/python3.6/site-packages/asyncpg/connect_utils.py", line 402, in _connect_addr
await asyncio.wait_for(connected, loop=loop, timeout=timeout)
File "/usr/local/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
return fut.result()
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
On the PostgreSQL side:
LOG: invalid length of startup packet
Not sure why it's happening or where I should dig ...
#313 helped. Thank you for the quick response!
@Jeffwhen we had the same problem when deploying with docker-swarm.
In our case, the root cause was that ipvs, used by swarm to route packets, has a default expiration time for idle connections of 900 seconds. So if a connection had no activity for more than 15 minutes, ipvs broke it. 900 seconds is significantly less than the default Linux TCP keepalive setting (7200 seconds) that most services rely on to send keepalive TCP packets and keep connections from going idle.
The same problem is described here https://github.com/moby/moby/issues/31208
To fix this we had to set the following in postgresql.conf:
tcp_keepalives_idle = 600 # TCP_KEEPIDLE, in seconds;
# 0 selects the system default
tcp_keepalives_interval = 30 # TCP_KEEPINTVL, in seconds;
# 0 selects the system default
tcp_keepalives_count = 10 # TCP_KEEPCNT;
# 0 selects the system default
These settings force PostgreSQL to keep connections from going idle by sending keepalive packets more often than the ipvs default allows (which we can't change in docker-swarm, sadly).
I guess the same could be achieved by changing the corresponding Linux settings (net.ipv4.tcp_keepalive_time and the like), since PostgreSQL uses them by default, but in our case changing these was a bit more cumbersome.
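A possible client-side alternative (untested here, and assuming the server allows these GUCs to be set per session) is to request the same keepalive settings for asyncpg's own connections via server_settings, so postgresql.conf does not have to change; the connection parameters below are hypothetical:

import asyncpg

async def make_pool():
    # Sketch only: ask the server to use aggressive TCP keepalives for
    # these sessions instead of editing postgresql.conf.
    return await asyncpg.create_pool(
        host='127.0.0.1', database='foo',
        server_settings={
            'tcp_keepalives_idle': '600',     # seconds of idle before the first probe
            'tcp_keepalives_interval': '30',  # seconds between probes
            'tcp_keepalives_count': '10',     # failed probes before the link is dropped
        },
    )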
@q210 Yes, I'm deploying PostgreSQL with docker-swarm. I've tried it out and your solution absolutely works.
Since #313 is closed, I am reporting the error here as well.
- asyncpg version: 0.18.2 and master
- PostgreSQL version: 9.6.10 and 10.6
- Do you use a PostgreSQL SaaS? NO
- Python version: 3.6.6
- Platform: FreeBSD 11.2
- Do you use pgbouncer?: NO
- Did you install asyncpg with pip?: YES (used setup.py for master)
- If you built asyncpg locally, which version of Cython did you use?: 0.29
- Can the issue be reproduced under both asyncio and uvloop?: YES
Test code:
(env) freebsd11% cat test.py
import asyncio
import asyncpg
import uvloop
async def run():
    conn = await asyncpg.connect(database='testdb')
    values = await conn.fetch('select * from test')
    await conn.close()
asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
Python log:
Traceback (most recent call last):
File "./test.py", line 14, in <module>
loop.run_until_complete(run())
File "uvloop/loop.pyx", line 1446, in uvloop.loop.Loop.run_until_complete
File "./test.py", line 7, in run
conn = await asyncpg.connect(database='testdb')
File "/usr/home/dev/env/lib/python3.6/site-packages/asyncpg-0.19.0.dev0+43a7b21-py3.6-freebsd-11.2-RELEASE-amd64.egg/asyncpg/connection.py", line 1688, in connect
max_cacheable_statement_size=max_cacheable_statement_size)
File "/usr/home/dev/env/lib/python3.6/site-packages/asyncpg-0.19.0.dev0+43a7b21-py3.6-freebsd-11.2-RELEASE-amd64.egg/asyncpg/connect_utils.py", line 543, in _connect
connection_class=connection_class)
File "/usr/home/dev/env/lib/python3.6/site-packages/asyncpg-0.19.0.dev0+43a7b21-py3.6-freebsd-11.2-RELEASE-amd64.egg/asyncpg/connect_utils.py", line 519, in _connect_addr
await asyncio.wait_for(connected, loop=loop, timeout=timeout)
File "/usr/local/lib/python3.6/asyncio/tasks.py", line 358, in wait_for
return fut.result()
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
Here is the postgres log
DEBUG: 00000: forked new backend, pid=4606 socket=9
LOCATION: BackendStartup, postmaster.c:4038
LOG: 00000: connection received: host=[local]
LOCATION: BackendInitialize, postmaster.c:4192
LOG: 08P01: invalid length of startup packet
LOCATION: ProcessStartupPacket, postmaster.c:1942
DEBUG: 00000: shmem_exit(0): 0 before_shmem_exit callbacks to make
LOCATION: shmem_exit, ipc.c:226
DEBUG: 00000: shmem_exit(0): 0 on_shmem_exit callbacks to make
LOCATION: shmem_exit, ipc.c:259
DEBUG: 00000: proc_exit(0): 1 callbacks to make
LOCATION: proc_exit_prepare, ipc.c:188
DEBUG: 00000: exit(0)
LOCATION: proc_exit, ipc.c:141
DEBUG: 00000: shmem_exit(-1): 0 before_shmem_exit callbacks to make
LOCATION: shmem_exit, ipc.c:226
DEBUG: 00000: shmem_exit(-1): 0 on_shmem_exit callbacks to make
LOCATION: shmem_exit, ipc.c:259
DEBUG: 00000: proc_exit(-1): 0 callbacks to make
LOCATION: proc_exit_prepare, ipc.c:188
DEBUG: 00000: reaping dead processes
LOCATION: reaper, postmaster.c:2779
DEBUG: 00000: server process (PID 4606) exited with exit code 0
LOCATION: LogChildExit, postmaster.c:3552
Happens with the latest released version as well as with master compiled locally.
I can confirm something broke the FreeBSD support between 0.17.0 and 0.18.0
Hi all! I can confirm @q210's solution works for our swarm setup. For everyone coming here from search (and using docker swarm), here is a simple modification that fixes the problem.
In your PostgreSQL server docker-compose file:
services:
  pg:
    image: postgres
    command:
      - postgres
      # The following lines tell the pg server to adjust the TCP keepalive settings explicitly
      # instead of relying on the container default, which is likely idle=7200 (seconds).
      # The default value in the container is usually much larger than docker-swarm's IPVS default,
      # which is 900 seconds (and that is why connections get closed after ~15 minutes).
      - -c
      - 'tcp_keepalives_idle=600'
      - -c
      - 'tcp_keepalives_interval=30'
      - -c
      - 'tcp_keepalives_count=10'
ref: https://github.com/moby/moby/issues/31208
Hey, having the same issue, but I'm not using docker-swarm; I'm using FastAPI on k8s.
I made some changes to add the following settings: max_inactive_connection_lifetime=900, min_size=5, max_size=10, but I'm still seeing this issue. I'm wondering if there is a way to add pool_use_lifo=True, pool_pre_ping=True as per SQLAlchemy (https://docs.sqlalchemy.org/en/13/core/pooling.html), so that closed connections in the pool are replaced with a "healthy one".
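For what it's worth, a rough sketch of how pool_pre_ping could be emulated with a bare asyncpg pool; the acquire_healthy helper is hypothetical and not part of asyncpg, and the caller is still responsible for releasing the returned connection:

import asyncpg

async def acquire_healthy(pool, attempts=3):
    # Hypothetical helper: probe a pooled connection before handing it out,
    # similar in spirit to SQLAlchemy's pool_pre_ping.
    for _ in range(attempts):
        conn = await pool.acquire()
        try:
            await conn.fetchval('SELECT 1')  # cheap liveness probe
            return conn
        except (asyncpg.exceptions.ConnectionDoesNotExistError, OSError):
            conn.terminate()          # drop the dead connection...
            await pool.release(conn)  # ...so the pool replaces it later
    raise RuntimeError('no healthy connection after several attempts')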
This issue happens randomly during the day.
These are the package versions I'm using:
- python=3.7.5
- pandas=1.1.0
- pip
- pip:
  - fastapi==0.61.0
  - uvicorn==0.11.8
  - mlflow==1.10.0
  - codetiming==1.2.0
  - databases==0.3.2
  - databases[postgresql]
  - asyncpg==0.21.0
This is the full error stack:
==================================== start ======================
uncaught exception: ConnectionResetError: [Errno 104] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  ......
    rows = await self.engine.fetch_all(sql)
  File "/root/anaconda/lib/python3.7/site-packages/databases/core.py", line 140, in fetch_all
    return await connection.fetch_all(query, values)
  File "/root/anaconda/lib/python3.7/site-packages/databases/core.py", line 237, in fetch_all
    return await self._connection.fetch_all(built_query)
  File "/root/anaconda/lib/python3.7/site-packages/databases/backends/postgres.py", line 147, in fetch_all
    rows = await self._connection.fetch(query, *args)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 443, in fetch
    return await self._execute(query, args, 0, timeout)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1446, in _execute
    query, args, limit, timeout, return_status=return_status)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1454, in __execute
    return await self._do_execute(query, executor, timeout)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1466, in _do_execute
    stmt = await self._get_statement(query, None)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 351, in _get_statement
    statement = await self._protocol.prepare(stmt_name, query, timeout)
  File "asyncpg/protocol/protocol.pyx", line 163, in prepare
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
100.64.128.99:31620 - "POST /invocations HTTP/1.0" 500
[2020-10-29 17:35:53 +0000] [8] [ERROR] Exception in ASGI application
Traceback (most recent call last):
  File "/root/anaconda/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 390, in run_asgi
    result = await app(self.scope, self.receive, self.send)
  File "/root/anaconda/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
    return await self.app(scope, receive, send)
  File "/root/anaconda/lib/python3.7/site-packages/fastapi/applications.py", line 179, in __call__
    await super().__call__(scope, receive, send)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__
    raise exc from None
  File "/root/anaconda/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__
    await self.app(scope, receive, _send)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__
    raise exc from None
  File "/root/anaconda/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__
    await self.app(scope, receive, sender)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__
    await route.handle(scope, receive, send)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle
    await self.app(scope, receive, send)
  File "/root/anaconda/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
    response = await func(request)
  File "/root/anaconda/lib/python3.7/site-packages/fastapi/routing.py", line 183, in app
    dependant=dependant, values=values, is_coroutine=is_coroutine
  File "/root/anaconda/lib/python3.7/site-packages/fastapi/routing.py", line 133, in run_endpoint_function
    return await dependant.call(**values)
  File "/root/anaconda/lib/python3.7/site-packages/newrelic/common/async_proxy.py", line 124, in throw
    return self.wrapped.throw(args, **kwargs)
  File "/app/api/main.py", line 79, in invocations
    response = await service.predict(request)
  File "/app/api/personas_classification_service.py", line 159, in predict
    userBehaviorPredictions, currentPrediction = await asyncio.gather([t1, t2])
  File "/app/api/personas_classification_service.py", line 96, in __computeCurrentPrediction
    mcmid_data = await self.__query_raw_visitor_data(mcmid)
  File "/app/api/personas_classification_service.py", line 90, in __query_raw_visitor_data
    return await self.mlpredictionsRepository.query_raw_visitor_data(mcmid, days_for_predictions)
  File "/app/api/mlpredictions_repository.py", line 18, in query_raw_visitor_data
    rows = await self.engine.fetch_all(sql)
  File "/root/anaconda/lib/python3.7/site-packages/databases/core.py", line 140, in fetch_all
    return await connection.fetch_all(query, values)
  File "/root/anaconda/lib/python3.7/site-packages/databases/core.py", line 237, in fetch_all
    return await self._connection.fetch_all(built_query)
  File "/root/anaconda/lib/python3.7/site-packages/databases/backends/postgres.py", line 147, in fetch_all
    rows = await self._connection.fetch(query, *args)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 443, in fetch
    return await self._execute(query, args, 0, timeout)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1446, in _execute
    query, args, limit, timeout, return_status=return_status)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1454, in __execute
    return await self._do_execute(query, executor, timeout)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 1466, in _do_execute
    stmt = await self._get_statement(query, None)
  File "/root/anaconda/lib/python3.7/site-packages/asyncpg/connection.py", line 351, in _get_statement
    statement = await self._protocol.prepare(stmt_name, query, timeout)
  File "asyncpg/protocol/protocol.pyx", line 163, in prepare
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
Same thing happens for me with bare connections on Ubuntu 19.04, asyncpg >= 0.20, Python 3.7.3. Since it's a bare connection, we don't have the pool's max_inactive_connection_lifetime setting.
This issue still persists
Hi @elprans
I checked the connection with is_closed(), which returned False, but when I called fetchval() I got the error below.
asyncpg==0.21.0
uvicorn==0.13.3
Python 3.8.2
thanks @elprans
Traceback (most recent call last):
File "test_code.py", line 114, in
To fix it: https://stackoverflow.com/questions/56922665/connection-was-closed-in-the-middle-of-operation-when-accesing-database-using-py
This bug happens only with uvloop installed.
Shouldn't the loop be specified here? I guess it's ok...
https://github.com/MagicStack/asyncpg/blob/383c711eb68bc6a042c121e1fddfde0cdefb8068/asyncpg/connect_utils.py#L811
NOTE: I'm using nest_asyncio and the loop handling becomes tricky.
This still happens.
asyncpg version: 0.25.0
PostgreSQL version: 14.0
Python version: 3.9.7
Platform: Arch Linux
Do you use pgbouncer?: yes
Did you install asyncpg with pip?: yes
But interestingly, this advice fixed it for me:
wondering if there is a way to add pool_use_lifo=True, pool_pre_ping=True as per sqlalchemy
With those parameters passed to the SQLAlchemy engine creation function, I no longer see this issue (from 10-20 events per day on a 10 req/s app to 0 events in a week). I'm not sure if it's the LIFO, the pre-ping, or the combination of both, but the SQLAlchemy pooling method certainly solves the underlying issue.
In my case, I had an execute() call in an async function. execute() can throw exceptions, and I did not handle exceptions when calling the async function, nor did I do any handling around the execute() itself. When the DB threw the error I would not get an exception backtrace; instead I got a "Task exception was never retrieved" warning. After getting this warning once, the connection in the pool was dead.
Surrounding my async function call with try ... except handling fixed it in my environment.
I had the function defined like this:
async def insert_points(
    app_instance: Sanic, points: List[LocationPoint]
) -> None:
    # simple code to generate SQL here which I left out
    await app_instance.config["pool"].execute(sql, *parameters)
And surrounding its call with try ... except handling like this fixed the issue (before, I didn't have that):
try:
    await insert_points(app_instance, points)
except Exception as e:
    logger.warn(f"insert_points failed with: {type(e).__name__}: {str(e)}")
I was able to simulate this problem by raising randomly (for example, once every 1000 calls) in insert_points; once an exception was raised, I was not able to insert new points into the DB anymore.
- asyncpg version: 0.25.0
- PostgreSQL version: 12
- local PostgreSQL:
- Python version: 3.8.13
- Platform: debian:bullseye
- Do you use pgbouncer?: yes
- Did you install asyncpg with pip?: by poetry
Hello. With versions 0.22.0 - 0.25.0, I periodically (several times a day) have a problem where, after a connection to an external HTTP server has timed out, trying to save the result to the database gives me the 'ConnectionDoesNotExistError' from asyncpg. I use uvloop==0.16.0 and tornado==6.1.
Stack trace:
connection was closed in the middle of operation
Traceback (most recent call last):
File "uvloop/sslproto.pyx", line 699, in uvloop.loop.SSLProtocol._do_read
File "uvloop/sslproto.pyx", line 764, in uvloop.loop.SSLProtocol._do_read__copied
File "/usr/local/lib/python3.8/ssl.py", line 888, in read
v = self._sslobj.read(len)
ssl.SSLError: [SSL: SHUTDOWN_WHILE_IN_INIT] shutdown while in init (_ssl.c:2635)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 679, in fetchrow
data = await self._execute(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1659, in _execute
result, _ = await self.__execute(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1684, in __execute
return await self._do_execute(
File "/usr/local/lib/python3.8/site-packages/asyncpg/connection.py", line 1731, in _do_execute
result = await executor(stmt, None)
File "asyncpg/protocol/protocol.pyx", line 201, in bind_execute
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
PS This happens after I make an http request to an external server, which fails with a connection timeout error:
HTTP 599: Operation timed out after 5000 milliseconds with 0 out of 0 bytes received
Any help is welcome.
Simplified version of code:
conn = await pool.acquire()
res = await conn.fetch(sql)
await pool.release(conn)
The connection received from the pool has no errors, but executing a SQL query raises asyncpg.exceptions.ConnectionDoesNotExistError.
The number of database connections is about 100. This error has appeared rarely but regularly since version 0.22.0.
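A minimal sketch (not the poster's code) of the same flow using the pool's async context manager; it returns the connection to the pool even when fetch() raises, so a failed query cannot strand the acquired connection:

# sql stands for whatever query the application runs (illustration only).
async with pool.acquire() as conn:
    res = await conn.fetch(sql)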
I think I have a similar issue.
- asyncpg version: 0.26.0
- PostgreSQL version: 14
- Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?: Supabase
- Python version: 3.10
- Platform: bullseye
- Do you use pgbouncer?: yes, session pooling
- Did you install asyncpg with pip?: yes, poetry
- If you built asyncpg locally, which version of Cython did you use?:
- Can the issue be reproduced under both asyncio and uvloop?: only tried with uvloop
The error occurs when trying to release a connection, I think after executing a query, but maybe when the query errors.
Traceback (most recent call last):
File "/app/asyncpg/pool.py", line 222, in release
self._con.terminate()
AttributeError: 'NoneType' object has no attribute 'terminate'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/pqueue.py", line 474, in <module>
asyncio.run(PQueue().main())
File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/app/pqueue.py", line 346, in main
prompt = await self.get_prompt()
File "/app/pqueue.py", line 252, in get_prompt
maybe_id = await self.queue.maybe_get_id(self.model)
File "/app/forest/pghelp.py", line 191, in execute
async with self.pool.acquire() as connection:
File "/app/asyncpg/pool.py", line 1009, in __aexit__
await self.pool.release(con)
File "/app/asyncpg/pool.py", line 883, in release
return await asyncio.shield(ch.release(timeout))
File "/app/asyncpg/pool.py", line 224, in release
raise ex
File "/app/asyncpg/pool.py", line 214, in release
await self._con.reset(timeout=budget)
File "/app/asyncpg/connection.py", line 1367, in reset
await self.execute(reset_query, timeout=timeout)
File "/app/asyncpg/connection.py", line 318, in execute
return await self._protocol.query(query, timeout)
File "asyncpg/protocol/protocol.pyx", line 338, in query
return await waiter
asyncpg.exceptions.ConnectionDoesNotExistError: connection was closed in the middle of operation
Hi, any updates on this one? I'm facing the exact same error.
Hi. In my case, this error is preceded by this exception:
ssl.SSLError: [SSL: SHUTDOWN_WHILE_IN_INIT] shutdown while in init (_ssl.c:2635)
I think this is some kind of race condition in the ssl library (I also use the aiohttp client with an SSL context in the code). Since asyncpg version 0.22.0, SSL is the preferred method of connecting to the database, even if the server doesn't use it. So disabling SSL when connecting to the database, if it isn't needed, solved this problem for me:
asyncpg.create_pool(..., ssl=False)
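For example, a minimal sketch (the DSN is hypothetical) for a server that doesn't speak TLS anyway:

import asyncpg

async def make_pool():
    # Sketch only: skip SSL negotiation entirely when the server has no TLS.
    return await asyncpg.create_pool(
        dsn='postgresql://postgres@127.0.0.1:5432/foo',  # hypothetical DSN
        ssl=False,
    )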
- asyncpg version: 0.26.0
- PostgreSQL version: 15.2
- local PostgreSQL:
- Python version: 3.11
- Platform: docker (postgres built with FROM debian:bullseye-slim)
- Do you use pgbouncer?: yes, 1.18.0
- Did you install asyncpg with pip?: yes
Fixed this by increasing the default_pool_size to match the maximum number of connections to pgbouncer.
- asyncpg version: 0.29.0
- PostgreSQL version: 15
- local PostgreSQL:
- Python version: 3.11.6
- Platform: debian:bullseye
- Do you use pgbouncer?: yes
- Did you install asyncpg with pip?: poetry
Just wanted to chime in on this. I am experiencing this somewhat frequently. Several comments reference uvloop as the cause; I removed uvloop and it did not solve the issue for me.
My logging for this wraps the call in a duration check, and I noticed it was taking exactly the time of my configured statement timeout. I don't know why it's raising this error on a statement timeout, but that appears to be the issue for me.
I faced the same issue in FastAPI and just wanted to add this for future readers. As mentioned by @robertmujica, using connection pooling through create_async_engine() or create_engine() and passing the pool_pre_ping=True argument resolves the issue. According to the SQLAlchemy docs, setting this lets SQLAlchemy check the connection for availability through a ping or a simple select statement before using it for database operations. If the connection is already closed, a new one is established. This adds a very small overhead, but it's worth it and the impact on performance is negligible.
Although SQLAlchemy mentions that setting pool_pre_ping to True doesn't handle connections disconnected in the middle of a transaction, for me this fixed the issue. Technically, a connection should not get disconnected in the middle of a transaction due to mismatched keepalive settings.
This works for me:
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker

# Create DB engine
engine = create_async_engine('postgresql+asyncpg://postgres:postgres@db:5432/mydb',
                             echo=True,
                             pool_size=10,
                             pool_pre_ping=True,
                             max_overflow=0, future=True)

# Create async local session
AsyncSessionLocal = async_sessionmaker(autocommit=False, autoflush=False, bind=engine)

# Dependency for FastAPI
async def get_session():
    async with AsyncSessionLocal() as session:
        yield session