run CI in local container
Currently the tests run in a container, but ideally we'd run `go test` with just the environment variables from the `local-ci.sh` script; that would avoid downloading everything.
I'd love to see all the Go code run from source rather than flow-api and the snapshot worker running as images.
+1 on running locally-locally, at least for the test process for this change.
My current setup for running individual tests locally:
VS Code launch.json (not that others should use VS Code; it's just an exhaustive list of the config needed, for PG/MySQL->CH tests at least):
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Launch test function",
      "type": "go",
      "request": "launch",
      "mode": "test",
      "program": "${workspaceFolder}/flow/e2e/clickhouse",
      "args": [
        "-test.v",
        "-test.run",
        "^TestPeerFlowE2ETestSuiteMySQL_CH$/^Test_MySQL_Time$"
      ],
      "env": {
        "AWS_REGION": "us-east-1",
        "AWS_ENDPOINT_URL_S3": "http://host.docker.internal:9001",
        "AWS_ACCESS_KEY_ID": "_peerdb_minioadmin",
        "AWS_SECRET_ACCESS_KEY": "_peerdb_minioadmin",
        "AWS_S3_BUCKET_NAME": "peerdbbucket",
        "PEERDB_CATALOG_USER": "postgres",
        "PEERDB_CATALOG_PASSWORD": "postgres",
        "PEERDB_CATALOG_HOST": "host.docker.internal",
        "PEERDB_CATALOG_PORT": "9901",
        "PEERDB_CATALOG_DATABASE": "postgres",
        "TZ": "UTC",
        "CI_MYSQL_VERSION": "mysql-gtid"
      }
    }
  ]
}
```
Plus I host CH and MySQL in ad-hoc Docker containers with the right passwords and replace "localhost" with "host.docker.internal" in e2e/mysql.go and e2e/clickhouse/clickhouse.go so that the flow can find them outside the compose network.
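That find/replace could be avoided with a tiny helper that reads the test peer hostname from the environment. A minimal sketch, where `E2E_PEER_HOST` and `PeerHost` are made-up names, not anything the repo has today:

```go
// Hypothetical helper: resolve the hostname e2e tests use to reach peers,
// so e2e/mysql.go and e2e/clickhouse/clickhouse.go don't need a manual
// find/replace of "localhost".
package e2e

import "os"

// PeerHost returns "localhost" by default (CI/compose network) and the value
// of E2E_PEER_HOST (e.g. "host.docker.internal") when set for local runs.
func PeerHost() string {
	if h := os.Getenv("E2E_PEER_HOST"); h != "" {
		return h
	}
	return "localhost"
}
```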
A caveat I haven't dealt with yet is that a lot of e2e_test_pgch_xxx databases aren't cleaned up, and pg replication slots stay active if the test is killed in the middle.
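A pre-run sweep could handle that. Rough sketch of the idea, assuming pgx/v5 and that leftover slots/databases are identifiable by the e2e_test prefixes; the function name and exact queries are illustrative, not existing code:

```go
package e2e

import (
	"context"
	"fmt"

	"github.com/jackc/pgx/v5"
)

// CleanupStaleTestState drops leftover e2e test replication slots and
// databases from previous (possibly killed) runs.
func CleanupStaleTestState(ctx context.Context, conn *pgx.Conn) error {
	// Drop replication slots left behind by interrupted tests.
	rows, err := conn.Query(ctx,
		`SELECT slot_name FROM pg_replication_slots WHERE slot_name LIKE 'e2e_test_%'`)
	if err != nil {
		return err
	}
	slots, err := pgx.CollectRows(rows, pgx.RowTo[string])
	if err != nil {
		return err
	}
	for _, slot := range slots {
		if _, err := conn.Exec(ctx, `SELECT pg_drop_replication_slot($1)`, slot); err != nil {
			return fmt.Errorf("dropping slot %s: %w", slot, err)
		}
	}

	// Drop leftover per-test databases.
	rows, err = conn.Query(ctx,
		`SELECT datname FROM pg_database WHERE datname LIKE 'e2e_test_pgch_%'`)
	if err != nil {
		return err
	}
	dbs, err := pgx.CollectRows(rows, pgx.RowTo[string])
	if err != nil {
		return err
	}
	for _, db := range dbs {
		// WITH (FORCE) (PG 13+) terminates lingering connections to the test database.
		if _, err := conn.Exec(ctx,
			fmt.Sprintf(`DROP DATABASE IF EXISTS %s WITH (FORCE)`, pgx.Identifier{db}.Sanitize())); err != nil {
			return fmt.Errorf("dropping database %s: %w", db, err)
		}
	}
	return nil
}
```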
How does the following sound:
- Have the test infra read the configs from a YAML file instead of env (see the YAML sketch after this list)
- Use `host.docker.internal` as the test peer hostname when not in CI (or modify `/etc/hosts` in CI and use it that way everywhere, if that's possible)
- Add cleanups for test replication slots and DBs from previous runs at the beginning (probably desirable to keep the data of the last run after the test finishes, just to inspect)
- For CH, CH cluster and MySQL, have the setup run the correct container if the port is not taken by a local setup, or just always do it with a custom port (see the port-check sketch after this list)
- BQ and SF would probably stay in CI?
- It's on the developer to be running dev-peerdb.sh and invoking `go test` with what they want to test
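For the YAML bullet, a minimal sketch of reading test settings from a file, assuming gopkg.in/yaml.v3; the `TestConfig` fields just mirror the env vars in the launch.json above and are not an existing schema:

```go
package e2e

import (
	"os"

	"gopkg.in/yaml.v3"
)

// TestConfig holds the settings that today come from env vars.
type TestConfig struct {
	CatalogHost     string `yaml:"catalog_host"`
	CatalogPort     int    `yaml:"catalog_port"`
	CatalogUser     string `yaml:"catalog_user"`
	CatalogPassword string `yaml:"catalog_password"`
	S3Endpoint      string `yaml:"s3_endpoint"`
	S3Bucket        string `yaml:"s3_bucket"`
	PeerHost        string `yaml:"peer_host"` // e.g. host.docker.internal outside CI
}

// LoadTestConfig reads the test settings from a YAML file.
func LoadTestConfig(path string) (*TestConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var cfg TestConfig
	if err := yaml.Unmarshal(data, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}
```

The same struct could fall back to the existing env vars when no file is present, so CI wouldn't have to change.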
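And for the "run a container only if the port is not taken" bullet, the check itself is just a dial with a short timeout; the helper name and timeout are arbitrary:

```go
package e2e

import (
	"net"
	"time"
)

// portInUse reports whether something (e.g. a developer's local ClickHouse or
// MySQL) is already listening on host:port, so the setup can skip starting
// its own container.
func portInUse(host, port string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 500*time.Millisecond)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}
```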
Getting the services to run locally and integrating the test infra with IDE tooling are possible but should be separate changes, if we take them on.
Ideally tests could run on an ephemeral volume that can be kept around for debugging, then recreated for the next test run.