storage/sources/postgres: test snapshotting a very large PG database
Feature request
Can PG sources complete a snapshot of a 10 GB, 100 GB, 500 GB, or 1 TB database in any amount of time?
Additionally, can the snapshot complete if we introduce a failure partway through?
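For concreteness, a minimal sketch of what the completion check in such a test could look like (this is not an existing test; the source name and connection details are placeholders, and it assumes `mz_internal.mz_source_statistics` exposes the `snapshot_committed` flag):

```python
# Hypothetical sketch: wait until the PG source reports a committed snapshot.
# Source name, connection string, and time budget are all placeholders.
import time
import psycopg2

conn = psycopg2.connect("host=materialized port=6875 user=materialize dbname=materialize")
conn.autocommit = True

def snapshot_committed(source_name: str) -> bool:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT bool_and(s.snapshot_committed)
            FROM mz_internal.mz_source_statistics s
            JOIN mz_sources o ON o.id = s.id
            WHERE o.name = %s
            """,
            (source_name,),
        )
        row = cur.fetchone()
        return bool(row and row[0])

deadline = time.time() + 4 * 3600  # generous budget for a multi-hundred-GB snapshot
while not snapshot_committed("pg_source"):
    if time.time() > deadline:
        raise AssertionError("snapshot did not complete within the time budget")
    time.sleep(30)
```

The failure-injection variant would restart the PG container (or the clusterd running the source) partway through and then run the same wait loop.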
cc @MaterializeInc/qa
@sploiselle I think we have similar tests already and we only need to increase the amount of generated data. Can we move this to the storage team or do you think we need additional framework support? cc @def-
storage-usage could probably be adapted for this; it already has a PG_CDC setup, so a test with a larger amount of data in PG could be added there.
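For example, the extra data could be generated inside PG itself with `generate_series`, so the snapshot has something large to read. This is purely a sketch, not code from the existing test; the table name, row counts, and connection details are made up:

```python
# Hypothetical sketch: bulk-load a configurable amount of data into the PG
# instance before the source is created. Tune TARGET_ROWS and the payload
# width until the table reaches the desired on-disk size.
import psycopg2

TARGET_ROWS = 100_000_000
BATCH = 1_000_000

conn = psycopg2.connect("host=postgres user=postgres password=postgres dbname=postgres")
conn.autocommit = True
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS big (id bigint PRIMARY KEY, payload text)")
    # Materialize PG sources require REPLICA IDENTITY FULL on ingested tables.
    cur.execute("ALTER TABLE big REPLICA IDENTITY FULL")
    for start in range(0, TARGET_ROWS, BATCH):
        # generate_series keeps data generation entirely inside PG, which is
        # much faster than shipping individual rows over the wire.
        cur.execute(
            "INSERT INTO big SELECT g, repeat('x', 100) FROM generate_series(%s, %s) g",
            (start + 1, start + BATCH),
        )
conn.close()
```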
Alternatively, but a bit less explicit: `bin/mzcompose --find zippy --scenario=PostgresCdc --actions=10000 --max-execution-time=55m`
Oh sorry--I didn't realize the implications of my tags. This does not need to be on the QA team, and I believe the storage team should be able to take care of this.
The work in #26267 would probably help out here.