Tolu Aina
Any update on running off master? My guess is that, with about 45M records in your database, you still have pending items to go through in the replication slot.
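To check whether the slot still has WAL left to work through, you can query `pg_replication_slots` directly. A sketch (the slot name pgsync creates depends on your database and index names):

```sql
-- How far behind each logical replication slot is (WAL not yet flushed downstream).
SELECT slot_name,
       pg_size_pretty(
           pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)
       ) AS pending_wal
FROM pg_replication_slots;
```

A large and growing `pending_wal` value would suggest pgsync has not caught up yet.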
- I think there is something else at play here, and I have not managed to reproduce it. - Essentially, data flows from the Postgres xlogs -> Redis -> Elasticsearch. - The...
@camirus27 @tthanh I feel this issue is different from the original one. Can you please create a separate issue with as much detail as you can to enable me...
> I'm reading the code of `pgsync` and don't fully understand it yet, but in `pgsync/sync.py`: > > ``` > with Timer(): > for document in json.load(open(config)): > sync =...
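For reference, the loop being quoted boils down to this pattern: the schema file is a JSON array, and each entry drives one sync. A minimal, self-contained sketch (the schema field contents below are illustrative, not pgsync's exact structure):

```python
import io
import json

# Sketch of the pattern quoted above from pgsync/sync.py: the config file is a
# JSON array with one object per top-level document (index) to sync, and the
# loop hands each entry to a Sync instance. Stand-in for open(config):
schema = io.StringIO(json.dumps([
    {"database": "mydb", "index": "books", "nodes": {"table": "book"}},
    {"database": "mydb", "index": "authors", "nodes": {"table": "author"}},
]))

for document in json.load(schema):
    # each `document` dict describes one index to build/keep in sync
    print(document["index"])
```

So one pgsync process iterates every document entry in the schema file on each pass.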
> I'm testing pgsync with `2.1.10` and found the new logs you mentioned: > > ``` > opensearch-redis | 1:M 12 Jan 2022 13:30:07.238 * 1 changes in 3600 seconds. Saving......
@jinserk might be worth re-running bootstrap and then restarting pgsync? ``` - bootstrap -t schema.json - bootstrap schema.json - pgsync schema.json ```
I believe you are running into the OOM killer. The resource requirements depend entirely on your data and structure. Do you have all services running in the VM, i.e. Postgres,...
The initial sync is always going to require resources proportional to your data size and structure. This is a one-off. I would suggest allocating as many resources as needed to...
I'm seeing a few seq scans in that query analysis. Did you already run `pgsync -c schema.json -a` to see if there are any missing indices?
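Independently of the `-a` check, Postgres' own statistics views can show which tables are being scanned sequentially. A sketch you could run in psql:

```sql
-- Tables ranked by number of sequential scans since the statistics were last reset.
SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;
```

Tables with high `seq_scan` and low `idx_scan` counts are the usual candidates for a missing index.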
- This is a result of the query plan. - Initially, I had a query plan similar to yours, with lots of sequential scans. - Then I created the...
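A hedged sketch of that kind of fix; the table and column names below are hypothetical placeholders for whichever foreign-key column your query plan shows being scanned sequentially:

```sql
-- Hypothetical example: index the foreign-key column the plan was seq-scanning.
-- CONCURRENTLY avoids locking writes while the index builds.
CREATE INDEX CONCURRENTLY ix_book_publisher_id ON book (publisher_id);
```

After creating the index, re-running the same EXPLAIN should show index scans replacing the sequential scans.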