
TimeoutError when calling asyncpg.create_pool()

Open tehmufifnman opened this issue 3 years ago • 11 comments

  • asyncpg version: 0.24.0
  • PostgreSQL version: 12
  • Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?: no SaaS
  • Python version: 3.9.7
  • Platform: Linux/Docker (image python:3.9-slim)
  • Do you use pgbouncer?: No
  • Did you install asyncpg with pip?: Yes
  • If you built asyncpg locally, which version of Cython did you use?: n/a
  • Can the issue be reproduced under both asyncio and uvloop?: Issue happens with both, though the below error is with uvloop

DISCLAIMER: I am relatively new to async methodologies in Python (I am primarily a C# engineer), so I may be doing something completely wrong. Feel free to point anything out! That's how we learn, right? haha

I seem to get the exception below a few times an hour. I am relatively certain it's not any kind of connection exhaustion: I am using async with statements like those in the documentation, other apps that use this DB are unaffected, and this is the first app to use asyncpg.

Stack Trace:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 586, in _create_ssl_connection
    tr, pr = await loop.create_connection(
  File "uvloop/loop.pyx", line 2014, in create_connection
  File "uvloop/loop.pyx", line 2001, in uvloop.loop.Loop.create_connection
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/asyncio/tasks.py", line 492, in wait_for
    fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/./scripts/scheduler_ahs.py", line 28, in process_ah
    ah_processor = await AHProcessor.create(ah_id)
  File "/app/scripts/process_ah.py", line 39, in create
    db = await Database.create()
  File "/app/lib/db.py", line 13, in create
    pool = await asyncpg.create_pool(
  File "/usr/local/lib/python3.9/site-packages/asyncpg/pool.py", line 407, in _async__init__
    await self._initialize()
  File "/usr/local/lib/python3.9/site-packages/asyncpg/pool.py", line 445, in _initialize
    await asyncio.gather(*connect_tasks)
  File "/usr/local/lib/python3.9/site-packages/asyncpg/pool.py", line 127, in connect
    self._con = await self._pool._get_new_connection()
  File "/usr/local/lib/python3.9/site-packages/asyncpg/pool.py", line 491, in _get_new_connection
    con = await connect_utils._connect_addr(
  File "/usr/local/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 676, in _connect_addr
    return await __connect_addr(params, timeout, True, *args)
  File "/usr/local/lib/python3.9/site-packages/asyncpg/connect_utils.py", line 720, in __connect_addr
    tr, pr = await compat.wait_for(connector, timeout=timeout)
  File "/usr/local/lib/python3.9/site-packages/asyncpg/compat.py", line 66, in wait_for
    return await asyncio.wait_for(fut, timeout)
  File "/usr/local/lib/python3.9/asyncio/tasks.py", line 494, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

I am using a DB wrapper class. In my actual app I create an instance of the below class and then call the close_pool() method when my app is done using the database.

import asyncpg
import pandas as pd  # used by fetch_as_dataframe


class Database:
    def __init__(self, pool: asyncpg.Pool) -> None:
        self._pool = pool

    @classmethod
    async def create(cls, num_connections=3):
        pool = await asyncpg.create_pool(
            dsn=my_conn_string,
            server_settings={"search_path": my_schema},
            min_size=num_connections,
            max_size=num_connections,
        )
        return cls(pool)

    async def fetch(self, sql, *args):
        async with self._pool.acquire() as connection:
            return await connection.fetch(sql, *args)

    async def fetch_as_dataframe(self, sql, *args):
        async with self._pool.acquire() as connection:
            stmt = await connection.prepare(sql)
            columns = [a.name for a in stmt.get_attributes()]
            data = await stmt.fetch(*args)
            return pd.DataFrame(data, columns=columns)

    async def fetchrow(self, sql, *args):
        async with self._pool.acquire() as connection:
            return await connection.fetchrow(sql, *args)

    async def execute(self, sql, *args):
        async with self._pool.acquire() as connection:
            return await connection.execute(sql, *args)

    async def copy_records_to_table(self, table, records):
        async with self._pool.acquire() as connection:
            return await connection.copy_records_to_table(table, records=records)

    async def close_pool(self):
        await self._pool.close()

tehmufifnman • Sep 07 '21
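
For reference, a minimal usage sketch of the wrapper above; the query is made up for illustration, and my_conn_string/my_schema are assumed to be defined elsewhere in the app:

import asyncio

async def main():
    db = await Database.create()
    try:
        rows = await db.fetch("SELECT 1 AS answer")  # placeholder query
        print(rows)
    finally:
        await db.close_pool()

asyncio.run(main())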

I have confirmed that I am nowhere near exhausting the number of connections available to my server.

tehmufifnman • Sep 10 '21

Anyone have any thoughts? =)

tehmufifnman • Sep 22 '21

@elprans, do you happen to have any clue?

tehmufifnman • Oct 24 '21

checking in again - any clue? =)

tehmufifnman • Nov 08 '21

It looks like new connections are timing out sometimes. Are you using the default timeout?

elprans • Nov 09 '21

I am using a timeout of 60, similar to the examples in the documentation.

This randomly happens immediately after a create_pool call on a fresh execution of the service.

tehmufifnman • Nov 09 '21
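
For context, extra keyword arguments to asyncpg.create_pool() are forwarded to the underlying connect() call, so a connect timeout like the one mentioned above would typically be passed along these lines (my_conn_string and my_schema are the placeholders from the wrapper class):

pool = await asyncpg.create_pool(
    dsn=my_conn_string,
    server_settings={"search_path": my_schema},
    min_size=3,
    max_size=3,
    timeout=60,  # seconds allowed for each new connection to be established
)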

create_pool attempts to create a min_size number of connections in parallel, so it's possible that some part of your infrastructure is throttling the connections. How big is num_connections? Is there anything in the Postgres log?

elprans • Nov 09 '21
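
Since the min_size connections are established concurrently inside create_pool(), one possible mitigation for intermittent throttling is to retry pool creation with a short backoff. A minimal sketch, where create_pool_with_retry is a hypothetical helper and not part of asyncpg:

import asyncio
import asyncpg

async def create_pool_with_retry(dsn, attempts=3, **kwargs):
    for attempt in range(1, attempts + 1):
        try:
            return await asyncpg.create_pool(dsn=dsn, **kwargs)
        except asyncio.TimeoutError:
            if attempt == attempts:
                raise
            # The timeout is raised while the initial min_size connections are
            # still being established; back off briefly before trying again.
            await asyncio.sleep(2 ** attempt)

# e.g. pool = await create_pool_with_retry(my_conn_string, min_size=3, max_size=3)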

num_connections is 300 - I had already ruled out DB issues =( There was nothing in the pg log that my team or I could find.

tehmufifnman • Nov 10 '21

Do you really need all 300 connections to be established right away? Try a smaller min_size.

elprans • Nov 10 '21

Sorry, I thought you meant the max_connections config value from PG - my mistake.

For the pool, my min/max is 3.

tehmufifnman • Nov 10 '21

The same error is happening with me. When I try to connect to the database, it creates 6 to 7 connections, then automatically stops and gives the error. Sometimes it successfully creates the pool without any error. My database connection limit is over 100. How do I fix it?

0xgreenapple • Jul 23 '22