
Limit the size of diskquota's hash tables according to the initial request

[Open] KnightMurloc opened this issue 1 year ago · 8 comments

Diskquota did not control the size of its hash tables in shared memory and could consume shared memory not intended for it, potentially impacting other database subsystems. The hash tables can also grow indefinitely if the background process on the coordinator is not running, which can happen for a number of reasons: it exited with an error, it was paused, or it was never started. In that case, data is not collected from the segments, and some hash tables (active_tables_map, relation_cache, relid_cache) are never cleared and eventually overflow.

This patch limits the size of all hash tables in shared memory by adding a function that checks whether a hash table is full: it returns HASH_FIND if the map is full and HASH_ENTER otherwise, and reports a warning when the table is full. A GUC controls how frequently that warning is reported, since otherwise it could be emitted too often. Another GUC controls the size of the local reject map; the size of the global reject map is set to diskquota_max_local_reject_entries * diskquota_max_monitored_databases. A sketch of the check is shown below.
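
As a minimal sketch of how such a check could look, the following uses the standard PostgreSQL dynahash API (hash_get_num_entries, HASH_FIND/HASH_ENTER) and ereport. The helper name disk_quota_hash_action, its parameters, and the rate-limited warning logic are illustrative assumptions, not the exact identifiers or behavior of the patch.

```c
/*
 * Illustrative sketch only: choose the hash_search() action depending on
 * whether a shared-memory hash table has reached its configured limit.
 * Names below are assumptions, not the patch's actual identifiers.
 */
#include "postgres.h"
#include "utils/hsearch.h"
#include "utils/timestamp.h"

/* Hypothetical GUC: minimum seconds between "table is full" warnings. */
static int hash_full_warning_interval = 60;

static HASHACTION
disk_quota_hash_action(HTAB *map, long max_entries,
                       TimestampTz *last_warning, const char *map_name)
{
    TimestampTz now;

    /* Still room: allow a new entry to be created. */
    if (hash_get_num_entries(map) < max_entries)
        return HASH_ENTER;

    /* Full: only look up existing entries, and warn at a limited rate. */
    now = GetCurrentTimestamp();
    if (TimestampDifferenceExceeds(*last_warning, now,
                                   hash_full_warning_interval * 1000))
    {
        ereport(WARNING,
                (errmsg("[diskquota] hash table \"%s\" is full, "
                        "new entries will be ignored", map_name)));
        *last_warning = now;
    }
    return HASH_FIND;
}
```

A caller would pass the returned action straight into hash_search(); when the map is full and the key is not already present, hash_search() with HASH_FIND returns NULL and the new entry is simply skipped.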

The test_active_table_limit test has been changed. First, the value of max_active_tables was raised from 2 to 5, because tables from all databases processed by diskquota are placed in active_tables_map, and with a limit of 2 the table overflows as soon as the extension is created. Second, the test now creates a table with 10 partitions to overflow active_tables_map, then creates another table and inserts enough data into it to exhaust the quota; since this table is not added to active_tables_map, its size is not taken into account, and further inserts into it still succeed. Finally, a VACUUM FULL is performed to trigger an overflow of altered_reloid_cache.

KnightMurloc · Jan 09 '24 07:01