Dependencies: Stop bundling client libraries
When installing some of our community modules through extras, we usually bundle a client library for interacting with the service provided by the container. In some cases this is nice, as there is only a single, official client library available for that service (e.g. qdrant-client).
However, in many cases there is no single client library that makes for the best choice. One example is sqlalchemy in combination with other drivers for connecting to databases, where you have the choice between multiple drivers (such as pyodbc and pymssql for SQL Server, pg8000 and psycopg for Postgres, and so on). Many use cases don't even use sqlalchemy for their database connection, so we end up forcing an install of one or more useless packages into their builds.
Therefore I suggest that we remove all client-library dependencies from our extras and put the responsibility for connecting to the containers, with their chosen framework, on the users of tc-python. We will still keep client libraries as part of our dev dependencies so that we can provide usage examples in our doctests and other documentation, but simply installing testcontainers-python with some extras will no longer pull in a large bundle of dependencies you might not need.
Where possible, we will still include and maintain methods such as get_connection_string (and variants, where we should probably standardize on a naming convention for consistency) for easy usage with client libraries. Example usage with client libraries also becomes more important, but in those examples we always show imports of the client library used, so it should be quite self-explanatory that you need a client to connect to a service.
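To make that concrete, here is a minimal sketch of the intended usage pattern, assuming psycopg (v3) as the user's separately installed client and the current PostgresContainer attribute names (port, username, password, dbname); the exact accessors may differ between versions:

```python
import psycopg  # the user's chosen client library, installed separately

from testcontainers.postgres import PostgresContainer

with PostgresContainer("postgres:16-alpine") as postgres:
    # build a plain libpq DSN from the container's accessors; actually
    # connecting is entirely the responsibility of the chosen client
    dsn = (
        f"host={postgres.get_container_host_ip()} "
        f"port={postgres.get_exposed_port(postgres.port)} "
        f"dbname={postgres.dbname} "
        f"user={postgres.username} "
        f"password={postgres.password}"
    )
    with psycopg.connect(dsn) as connection:
        assert connection.execute("select 1").fetchone() == (1,)
```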
This work has already been partly started, and the linked PR contains useful discussion around the issue. See here for the work done on removing sqlalchemy from Postgres.
I had an approach for the recently added nats module; ultimately we just included the standard client. I'll throw it out as an idea anyway: if you want to run the test using the semi-canonical client, it works.
```python
NO_NATS_CLIENT = True
try:
    from nats import connect as nats_connect
    from nats.aio.client import Client as NATSClient

    NO_NATS_CLIENT = False
except ImportError:
    pass


@pytest.mark.skipif(NO_NATS_CLIENT, reason="No NATS Client Available")
async def test_pubsub():
    ... elided ...
```
Couple other thoughts:

- One thing I see the Postgres stuff doing is exec'ing into the containers and running commands. It isn't obvious that all the services have this sort of ability, and if they do, whether the required libraries are included in their respective containers.
- How do you test that the connection string is correct?
- Maybe folks have better tooling and dev workflows than me, but I think a contributor would be less likely to have good code coverage if tests are implemented as doctests.
> I had an approach for the recently added nats module, ultimately we just included the standard client. I'll throw it out as an idea. Namely if you want to run the test using the semi-canonical client it works. [...]
I still think we should include the client libraries as dev dependencies, to make it easier to test in general and to test that common usage patterns are working as intended. However, all dev dependencies should be optional, so that they are not installed automatically when installing the testcontainers package itself.
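For the test side of this, pytest's built-in importorskip can express the same optionality as the hand-rolled try/except guard quoted above; a minimal sketch:

```python
import pytest

# skips the tests in this module at collection time when the optional
# client library is not installed, instead of tracking a NO_*_CLIENT flag
nats = pytest.importorskip("nats")
```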
> - One thing I see the Postgres stuff doing is exec'ing into the containers and running commands. It isn't obvious that all the services have this sort of ability, and if they do, whether the required libraries are included in their respective containers.
Yup, and if the images come bundled with tools intended for readiness checking, I think we can utilize those more to actually perform readiness checking. Parsing logs is okay, but not always ideal, especially if there is no message that makes it abundantly clear the container is ready.
> - How do you test that the connection string is correct?
I think we misunderstood each other a little here. I still would like to include client libraries as dev dependencies, so that they can be used for tests.
> - Maybe folks have better tooling and dev workflows than me, but I think a contributor would be less likely to have good code coverage if tests are implemented as doctests.
Agreed. Doctests, in my opinion, are only for showing usage of the classes/modules, and running them is only there to ensure that the examples actually work. With little syntax highlighting and editor support inside docstrings, the examples often end up invalid, and executing them as doctests helps prevent that.
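To illustrate, a hypothetical docstring in the style the library already uses (SomeServiceContainer and get_client are made-up names; the point is only that the doctest runner executes the example and fails the build when it goes stale):

```python
from testcontainers.core.container import DockerContainer


class SomeServiceContainer(DockerContainer):
    """
    Container for SomeService (hypothetical).

    Example:

        >>> with SomeServiceContainer() as container:
        ...     client = container.get_client()
        ...     client.ping()
        True
    """
```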
On my machine (😅), removing the client library used for testing the connection to the container makes the tests unreliable/broken.
I have two very simple fixtures:
```python
@pytest.fixture(scope="module")
def db_container():
    postgres = PostgresContainer("postgres:14.7")
    # we set the time zone equal to the prod system
    postgres.with_env("TZ", "Europe/Berlin")
    postgres.with_env("PGTZ", "Europe/Berlin")
    with postgres:
        logger.info("setup db container")
        yield postgres
        logger.info("teardown db container")


@pytest.fixture(scope="module")
def db_engine(db_container: PostgresContainer):
    time.sleep(5)  # TODO why is this a problem?
    engine = create_engine(db_container.get_connection_url())
    setup_events()
    Base.metadata.create_all(engine)
    return engine
```
The tests fail almost 100% of the time without the sleep in the second fixture. They also don't fail if I set a breakpoint (i.e. do the sleep manually).
Since the script in the PostgresContainer._connect function is run entirely inside the container, I assume that on my setup it takes more time than usual for the network to become ready. So running things inside the container works before the container can be accessed externally.
I guess the best workaround is to have my own subclass that uses sqlalchemy in its own _connect implementation.
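For reference, a minimal sketch of that workaround, assuming _connect is still the override point and reusing the library's wait_container_is_ready retry decorator:

```python
from sqlalchemy import create_engine, text
from testcontainers.core.waiting_utils import wait_container_is_ready
from testcontainers.postgres import PostgresContainer


class HostConnectablePostgresContainer(PostgresContainer):
    # hypothetical subclass: readiness means a real connection from the
    # host succeeds, not just a check that runs inside the container
    @wait_container_is_ready(Exception)
    def _connect(self) -> None:
        engine = create_engine(self.get_connection_url())
        try:
            with engine.connect() as connection:
                connection.execute(text("select 1"))
        finally:
            engine.dispose()
```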
@tharwan what version are you on? The latest version changed back to pg_isready over watching for logs, due to issues with localization. Previously there was also an issue where the Python implementation was watching for the ready message on either stdout or stderr; we changed it to look for it on both. The container logs to stderr first during the setup step, then logs to stdout on regular startup. The bug was that it accepted the message from the setup step, and tests would then fail because the server hadn't started up after setting up; the fix was to add an argument to watch for BOTH instead of OR.
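A standalone sketch (not the actual library code) of that OR vs BOTH distinction:

```python
READY = "database system is ready to accept connections"


def ready_or(stdout: str, stderr: str) -> bool:
    # buggy variant: the temporary bootstrap server already prints the
    # message to stderr during the setup step, so this fires too early
    return READY in stdout or READY in stderr


def ready_both(stdout: str, stderr: str) -> bool:
    # fixed variant: require the message on both streams, i.e. once from
    # the setup step (stderr) and once from regular startup (stdout)
    return READY in stdout and READY in stderr
```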
As a side note, in the setup method I usually run migrations, so that the database is confirmed ready before tests begin, like Spring Boot does, for example.
@alexanderankin you are correct, I was not on latest, but I assumed I was; I ran 4.5.1.
But I am getting the same with 4.8.2.
Here is my reproducer, trying without sleep now.

Steps:

```bash
# mkdir tharwan-pg-issue
# cd !$
# python -m venv .venv && . .venv/bin/activate
# pip install pytest testcontainers sqlalchemy psycopg2-binary
```

```python
import time
from logging import getLogger

import pytest
from sqlalchemy import Connection, Engine, create_engine
from sqlalchemy.sql import text

from testcontainers.postgres import PostgresContainer

logger = getLogger("test")


@pytest.fixture(scope="module")
def db_container():
    postgres = PostgresContainer("postgres:16-alpine")
    # we set the time zone equal to the prod system
    postgres.with_env("TZ", "Europe/Berlin")
    postgres.with_env("PGTZ", "Europe/Berlin")
    with postgres:
        logger.info("setup db container")
        yield postgres
        logger.info("teardown db container")


@pytest.fixture(scope="module")
def db_engine(db_container: PostgresContainer) -> Engine:
    time.sleep(5)  # TODO why is this a problem?
    engine = create_engine(db_container.get_connection_url())
    # setup_events()
    # Base.metadata.create_all(engine)
    return engine


def test_it(db_engine: Engine):
    connection: Connection = db_engine.engine.connect()
    connection.execute(text("create table if not exists example(id serial not null primary key, name varchar(150) unique);"))
    connection.execute(text("insert into example(name) values ('hello'), ('world');"))
    rows = connection.execute(text("select * from example"))
    rows = list(rows)
    assert len(rows) == 2
    for each_row in rows:
        print(each_row)
```
okay so i inserted sleep statements into this file
- here - https://github.com/docker-library/postgres/blob/0f80628c99c56c96db59a6c774db4fca4b27f5cd/docker-entrypoint.sh#L334
- and here - https://github.com/docker-library/postgres/blob/0f80628c99c56c96db59a6c774db4fca4b27f5cd/docker-entrypoint.sh#L351
seems like pg_isready suffers from the same issue as the log watching, and the logs fix that was implemented for people using postgres variants and other locales breaks postgres startup detection again.
but we have already implemented the fix for pg_isready: we pass the --host argument to force it to talk over TCP. The temporary server only listens on unix sockets, so this distinguishes the first and second phases of startup.
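As a sketch of what that check boils down to (container.exec wraps docker exec and returns an exit code with output; the loop, timeout, and flag values here are illustrative, not the exact library code):

```python
import time


def wait_until_ready(container, timeout: float = 30.0) -> None:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # --host forces a TCP connection; the temporary bootstrap server
        # only listens on a unix socket, so this succeeds only once the
        # real server is up
        exit_code, _output = container.exec(
            ["pg_isready", "--host", "localhost", "--port", "5432"]
        )
        if exit_code == 0:
            return
        time.sleep(0.5)
    raise TimeoutError("postgres did not become ready in time")
```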
so idk, this is all of my hypotheses tested, and no new changes seem to be necessary/possible.