
My HS has accumulated thousands of unreferenced state groups

Open richvdh opened this issue 7 years ago • 76 comments

... which are filling up my disk :(


To check if you are also affected, run this query:

select count(*) from state_groups sg
    left join event_to_state_groups esg on esg.state_group=sg.id
    left join state_group_edges e on e.prev_state_group=sg.id
where esg.state_group is null and e.prev_state_group is null;

If you see numbers in the thousands, then you are affected by this issue; otherwise, you're not.

richvdh avatar Jun 07 '18 00:06 richvdh

Worse, when I go to purge history, the unreferenced state groups start turning into non-delta state groups, which compounds the problem.

richvdh avatar Jun 07 '18 01:06 richvdh

To collect the redundant state groups into a temporary table:

create temporary table unreferenced_state_groups as 
select sg.id, sg.room_id from
    state_groups sg
    left join event_to_state_groups esg on esg.state_group=sg.id
    left join state_group_edges e on e.prev_state_group=sg.id
where esg.state_group is null and e.prev_state_group is null;

(empirically most of them seem to be coming from HQ)

richvdh avatar Jun 07 '18 09:06 richvdh
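
(Editorial aside: for anyone wanting to see which rooms the orphaned groups belong to, a minimal sketch building on the temporary table above, assuming it has been created in the same session:)

-- per-room breakdown of the unreferenced state groups
-- (assumes the unreferenced_state_groups temporary table from the query above)
select room_id, count(*) as orphaned_groups
from unreferenced_state_groups
group by room_id
order by orphaned_groups desc
limit 20;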

I seem to have them as well. I created that table on my system and got the following response:

synapse=# SELECT COUNT(*) FROM state_groups_to_drop;
 count
-------
  2272
(1 row)

Just to note: I have not run any purge commands yet

krombel avatar Jun 07 '18 17:06 krombel

#3625 might be related

richvdh avatar Jul 30 '18 18:07 richvdh

neil I don't think this can be a p2; it's a real blocker on cleaning up disk space

richvdh avatar Sep 03 '18 14:09 richvdh

I've done a little bit of digging into why this happens. Other than #3625, another cause (which probably bites matrix.org heavily, but others less so) is #3791.

richvdh avatar Sep 04 '18 22:09 richvdh

Another occasion that (I think) this happens is when we have a fork in the DAG, with different state on the two sides of the fork, and the next event (which heals the fork) is itself another state event. We create a new state group when we state-resolve the two sides of the fork (which is important for caching state res), but that SG is never actually (directly) used, because we then create another SG to include the updated state.

richvdh avatar Sep 10 '18 11:09 richvdh

We have a rather big, disk-filling database too (~45 GB) with ~40 users. We started purging the history some time ago, monthly, so the DB should only contain the data of the last 365 days, with a slack of about 30 days. I was curious how many tuples in state_groups_state would be affected in our database, so I extended your temporary table query a little bit:

create temporary table unreferenced_state_groups as 
select sg.id, sg.room_id, count(sgs.*) as cgs_cnt from
    state_groups sg
    left join event_to_state_groups esg on esg.state_group=sg.id
    left join state_group_edges e on e.prev_state_group=sg.id
    left join state_groups_state sgs on sgs.state_group = sg.id
where esg.state_group is null and e.prev_state_group is null
group by sg.id;
select sum(cgs_cnt) from unreferenced_state_groups;

Which resulted in 1,388,475 affected tuples, which is next to nothing compared to the 84,141,600 tuples in the table. So this is definitely a thing, but my guess is that we have other waste in that database - or is this a "normal"/to-be-expected size?

sargon avatar Sep 16 '18 12:09 sargon
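
(Editorial aside: a simpler cross-check of the same total, assuming the unreferenced_state_groups table above, with the corrected join, already exists:)

-- total state_groups_state rows held by the unreferenced groups;
-- should roughly match sum(cgs_cnt) from the query above
select count(*) from state_groups_state
where state_group in (select id from unreferenced_state_groups);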

we’re investigating this with https://github.com/erikjohnston/rust-synapse-compress-state (not yet ready for use) which looks to also provide decent size savings in general.

ara4n avatar Sep 16 '18 14:09 ara4n

I see ...

Number of entries: 345193
Number of rows: 70955217
Number of rows compressed: 2375250

Okay. If we assume we had those optimisations, we would have roughly 16,000,000 tuples in that table; then the affected tuples above start to carry real weight.

sargon avatar Sep 16 '18 17:09 sargon

right, so that sounds like it can be shrunk by 30x if i'm reading it correctly and if the tool is correct. hopefully we'll have something that can apply the change in the near future.

ara4n avatar Sep 16 '18 17:09 ara4n

@ara4n: please can we keep chat about how we might better compress state groups (once they exist) out of this bug, which is about state groups which shouldn't really exist in the first place? Indeed, if we apply the optimisations suggested in erik's script without addressing this bug, the problem will get even worse, because we'll have even more orphaned state groups which need de-deltaing.

richvdh avatar Sep 17 '18 11:09 richvdh

@sargon:

Which resulted in 1,388,475 affected tuples, which is next to nothing compared to the 84,141,600 tuples in the table.

Those 1.3M tuples will just be the deltas from the previous state groups - probably only one or two rows per state group. The problem comes when a state group is removed, which means that any other state group which references it has to be converted from delta storage to absolutes - ie, we have to store every single state event for the room for each of those state groups.

Suppose we have three state groups in a room, 1, 2, and 3. 1 is the first state group, and 2 and 3 are both stored as deltas from 1:

  1
 /  \
2    3

SG1 and SG3 are both used for a number of events in the room, but as per this bug, SG2 is unused. Now we purge some events from this room: once the events that referenced them are gone, SG1 and SG3 are detected as unused and deleted. However, SG2 loses its parent, so needs "de-deltaing".

Multiply this effect by 1.3M, and you have a real problem.

richvdh avatar Sep 17 '18 11:09 richvdh
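
(Editorial aside: to make the delta storage concrete, here is a rough sketch - the state group id 12345 is purely hypothetical - that walks a group's chain of prev_state_group edges and counts how many rows each link stores. An unreferenced group in the middle of such a chain is exactly what ends up needing de-deltaing:)

-- walk the delta chain for one state group and count the rows stored at each link
with recursive chain(state_group) as (
    select 12345::bigint                -- hypothetical state group id
  union all
    select e.prev_state_group
    from state_group_edges e
    join chain c on e.state_group = c.state_group
)
select c.state_group, count(sgs.state_group) as delta_rows
from chain c
left join state_groups_state sgs on sgs.state_group = c.state_group
group by c.state_group
order by c.state_group;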

hi,

i believe i'm facing the same problem described a year earlier in this issue: the whole database weighs 14 GB (only 7 users registered, no huge rooms joined...)

# SELECT pg_size_pretty( pg_database_size('matrix_prod') );
 pg_size_pretty 
----------------
 14 GB
(1 row)

here are the biggest tables:

matrix_prod=# select schemaname as table_schema,
    relname as table_name,
    pg_size_pretty(pg_total_relation_size(relid)) as total_size,
    pg_size_pretty(pg_relation_size(relid)) as data_size,
    pg_size_pretty(pg_total_relation_size(relid) - pg_relation_size(relid))
      as external_size
from pg_catalog.pg_statio_user_tables
order by pg_total_relation_size(relid) desc,
         pg_relation_size(relid) desc
limit 10;
 table_schema |        table_name         | total_size | data_size | external_size 
--------------+---------------------------+------------+-----------+---------------
 public       | state_groups_state        | 4724 MB    | 3134 MB   | 1590 MB
 public       | event_json                | 2857 MB    | 2502 MB   | 354 MB
 public       | received_transactions     | 1221 MB    | 697 MB    | 524 MB
 public       | stream_ordering_to_exterm | 1193 MB    | 672 MB    | 520 MB
 public       | event_auth                | 907 MB     | 633 MB    | 274 MB
 public       | events                    | 811 MB     | 366 MB    | 445 MB
 public       | event_edges               | 746 MB     | 261 MB    | 485 MB
 public       | room_memberships          | 527 MB     | 284 MB    | 243 MB
 public       | event_reference_hashes    | 429 MB     | 200 MB    | 229 MB
 public       | state_events              | 312 MB     | 221 MB    | 91 MB
(10 rows)

isn't there something to do? it's labelled P1 and i think it's truly critical.

-- edit 10 days later DB weight is now 16 GB 😩

ghost avatar Jun 27 '19 23:06 ghost

Coming back to this topic. We hit the magical ~100GB table size last week. I got it under control with compress-state applied to every room, which took us only 4 days. After a VACUUM FULL the database size shrank down to something around 22GB (just the state_groups_state table). To my knowledge Synapse has been patched to remove the loose ends during history purging, so that is contained. But ...

I just ran the queries from above and they still find unreferenced state groups (~10k). Since my knowledge of the database schema is close to non-existent, can you please provide us with a query to safely get rid of those rows?

sargon avatar Nov 08 '19 09:11 sargon
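
(Editorial note: nobody posted a verified deletion query at this point in the thread; the sketch below is only an illustration of what such a cleanup might look like, built from the detection query above. It assumes Synapse is stopped, a backup exists, and that the unreferenced_state_groups temporary table from earlier has just been rebuilt - and note that deleting one batch can make further groups newly unreferenced, so it would need re-running. The tool linked later in the thread is the safer route.)

-- illustrative only: delete the rows belonging to groups that nothing references
-- (no events point at them and no other group deltas from them)
begin;
delete from state_groups_state
    where state_group in (select id from unreferenced_state_groups);
delete from state_group_edges
    where state_group in (select id from unreferenced_state_groups);
delete from state_groups
    where id in (select id from unreferenced_state_groups);
commit;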

There are rooms that are hard to compress; the following room still has 35M rows after compression:

Fetching state from DB for room '!QtykxKocfZaZOUrTwp:matrix.org'...
  [12m] 49667078 rows retrieved
Got initial state from database. Checking for any missing state groups...
No missing state groups
Number of state groups: 10136
Number of rows in current table: 49665768
Compressing state...
[00:02:29] ____________________ 10136/10136 state groups
Number of rows after compression: 35295076 (71.07%)
Compression Statistics:
  Number of forced resets due to lacking prev: 51
  Number of compressed rows caused by the above: 319150
  Number of state groups changed: 6546

mytbk avatar Mar 19 '20 15:03 mytbk

(Sidenote: IRC-bridged rooms are far beyond Matrix HQ now, with m.room.member events all over the place.)

grinapo avatar Mar 20 '20 17:03 grinapo

Another factor in this is that, as of #6320, we now create a new state group for any new state event which is submitted via the C-S API, even if that event is not accepted.

richvdh avatar May 01 '20 14:05 richvdh

The long and the short of this is that I think we need a script which will gradually walk the state_groups table, looking for redundant state groups and removing them.

(it would also be nice to stop some of the state groups being created in the first place, but that's a bit harder.)

richvdh avatar May 01 '20 15:05 richvdh
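
(Editorial aside: as a rough illustration of what one step of such a walk might look like - the id range below is arbitrary - each batch would re-run the detection query over a slice of state_groups and only then delete what it finds:)

-- one hypothetical batch of a gradual walk over state_groups
select sg.id, sg.room_id
from state_groups sg
    left join event_to_state_groups esg on esg.state_group = sg.id
    left join state_group_edges e on e.prev_state_group = sg.id
where sg.id between 0 and 100000        -- arbitrary batch bounds
  and esg.state_group is null
  and e.prev_state_group is null;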

we hope to revisit this in the future as part of a larger piece of work on the database

richvdh avatar May 11 '20 13:05 richvdh

I have the same problem on our public homeserver ru-matrix.org with about 20 local users - after 2 years of operation the total PostgreSQL database size is 123GB! Executing rust-synapse-compress-state on the largest rooms (Matrix HQ, Riot-*, KDE, etc.) doesn't dramatically reduce the size of the state_groups_state table; it is still large.

Here are the current per-table statistics:

 relation                   | total_size | rows
----------------------------+------------+-----------
 state_groups_state         | 51 GB      | 274929152
 event_json                 | 22 GB      | 14572037
 events                     | 8853 MB    | 14609418
 event_edges                | 8477 MB    | 15573624
 event_reference_hashes     | 4519 MB    | 14520730
 stream_ordering_to_exterm  | 4340 MB    | 2154655
 event_auth                 | 3719 MB    | 17286570
 event_search               | 3637 MB    | 8290099
 received_transactions      | 2815 MB    | 9915562
 event_to_state_groups      | 2555 MB    | 11454427
 room_memberships           | 2102 MB    | 5461632
 current_state_delta_stream | 1306 MB    | 6627053
 state_events               | 1232 MB    | 5625349
 current_state_events       | 958 MB     | 1272631
 cache_invalidation_stream  | 850 MB     | 4414804
 receipts_linearized        | 794 MB     | 249685
 presence_stream            | 771 MB     | 190234
 state_groups               | 604 MB     | 2960779
 event_forward_extremities  | 347 MB     | 2129
 state_group_edges          | 337 MB     | 3225766

MurzNN avatar May 12 '20 14:05 MurzNN

Same thing here.

intelfx.name is a single-user HS, but I run my own IRC bridge connecting to several high-volume channels. My state_groups_state has upwards of 100M rows, and running the compression tool on all rooms produces ~2 GB of SQL output. Woah.

intelfx avatar May 16 '20 12:05 intelfx

running the compression tool on all rooms produces ~2G of sql output.

How did you run the script on all rooms? I see that it accepts only one room per argument, and has no bulk mode.

Also, how many rows stay in the table after compression finishes?

MurzNN avatar May 17 '20 05:05 MurzNN

@intelfx also can you share the Prometheus performance stats of your homeserver - total CPU usage and per background job? I have a problem where Synapse has been eating 100% CPU constantly for the last month, and I can't tell whether it is related to this issue or is a separate one.

MurzNN avatar May 17 '20 06:05 MurzNN

@MurzNN

How did you run the script on all rooms? I see that it accepts only one room per argument, and has no bulk mode.

You're correct. I just hacked it up:

-- get_candidate_rooms.sql
SELECT room_id
FROM (
        SELECT room_id, count(*) AS count
        FROM state_groups_state
        GROUP BY room_id
        ORDER BY count DESC
) AS groups
WHERE count > 100;
$ psql synapse -Aqt -f get_candidate_rooms.sql >rooms

$ parallel --bar 'id={}; target/release/synapse-compress-state -p "host=/run/postgresql user=synapse dbname=synapse" -r "$id" -t -o "out.$id.sql"' <rooms
(lots of output which I did not keep)

$ parallel --bar 'f={}; psql synapse -f "$f" -o /dev/null && rm -f "$f"' ::: out.*.sql

Note that the tool is massively multi-threaded and CPU heavy. On my 16-core box it took almost an hour to process ~150 rooms.

Also I used GNU parallel because the initial data loading is single-threaded and I had RAM to spare, but you might be better off doing it sequentially.

Also, how many rows stay in the table after compression finishes?

19M rows in state_groups_state as of now.

intelfx avatar May 17 '20 06:05 intelfx

@MurzNN

also can you share the Prometheus performance stats of your homeserver - total CPU usage and per background job? I have a problem where Synapse has been eating 100% CPU constantly for the last month, and I can't tell whether it is related to this issue or is a separate one.

Well, here it is: https://intelfx.name/grafana/d/000000012/synapse

I've been observing synapse processes regularly spiking to 100% CPU over the last month or so. Thing is, I've just switched to PyPy, so you won't see how much impact the state group cleanup has actually made. Another thing is I still haven't learned to properly visualize counters and rates with Prometheus/Grafana, so you won't find those 100% peaks on my dashboard.

So, I'm not sure if that data is of any use.

intelfx avatar May 17 '20 14:05 intelfx

this issue is unrelated to rust-synapse-compress-state; I've hidden a lot of off-topic comments about it.

richvdh avatar May 19 '20 09:05 richvdh

https://github.com/erikjohnston/synapse-find-unreferenced-state-groups is an old tool I wrote for finding unreferenced state groups, though you have to be very careful if running against a live synapse

erikjohnston avatar Jun 04 '20 10:06 erikjohnston

https://github.com/erikjohnston/synapse-find-unreferenced-state-groups is an old tool I wrote for finding unreferenced state groups, though you have to be very careful if running against a live synapse

Thanks for the tool! Can we safely use it now (without fear of breaking something) on a recent Synapse version, while Synapse is stopped?

MurzNN avatar Jun 04 '20 10:06 MurzNN

I believe it should be safe if Synapse is not running.

erikjohnston avatar Jun 04 '20 10:06 erikjohnston