
Dagster Postgres shows increased load

matthias-Q opened this issue 10 months ago

Dagster version

1.7.2

What's the issue?

After upgrading to Dagster 1.7.2 we noticed increased load on the Postgres database. This leads to timeouts and lag in the frontend.


What did you expect to happen?

Fast response times and normal operational load, as in previous versions.

How to reproduce?

Not sure

Deployment type

Other

Deployment details

We are running Dagster on ECS/Fargate and Postgres as an RDS instance.

Additional information

No response

Message from the maintainers

Impacted by this issue? Give it a 👍! We factor engagement into prioritization.

matthias-Q · Apr 24 '24

I am trying to find the root cause. The logs are not suspicious; they show some timeouts, but that's just because the DB has high IO load. This is what:

SELECT
    *
FROM
    pg_statio_user_tables;

returns:

relid|schemaname|relname                       |heap_blks_read|heap_blks_hit|idx_blks_read|idx_blks_hit|toast_blks_read|toast_blks_hit|tidx_blks_read|tidx_blks_hit|
-----+----------+------------------------------+--------------+-------------+-------------+------------+---------------+--------------+--------------+-------------+
44424|public    |pending_steps                 |             0|            0|          216|        2295|              0|             0|             0|            0|
16460|public    |runs                          |        182784|     30826938|        28316|   109979645|          37406|       1568423|          1597|      3933090|
16516|public    |asset_keys                    |          5454|     38494342|          757|    21746971|            148|        424659|            38|       761821|
16539|public    |asset_event_tags              |         31473|       170285|        47013|      509394|              0|             0|             0|            0|
16555|public    |jobs                          |          9962|      9736603|          455|     6141862|              0|             0|             0|            0|
16583|public    |job_ticks                     |       3684634|   3470423073|      5073563|   335860867|             63|          1011|             9|         1031|
44435|public    |asset_check_executions        |             0|            0|            0|           0|              0|             0|             0|            0|
16496|public    |alembic_version               |             1|           23|            2|          17|               |              |              |             |
44402|public    |asset_daemon_asset_evaluations|     706935248|    535562840|        45411|     9730309|            418|          1181|            99|          875|
44414|public    |concurrency_slots             |             0|            0|            7|         187|              0|             0|             0|            0|
16416|public    |snapshots                     |          4078|       107544|         2941|      320093|           3945|         12561|           970|        17809|
16453|public    |kvs                           |          1264|     13882238|          238|     5910581|          24988|      42229431|           339|     10340129|
16569|public    |instigators                   |          2804|     42981206|          419|    31955612|              0|             0|             0|            0|
16528|public    |dynamic_partitions            |           517|        72381|          584|       60959|              0|             0|             0|            0|
16448|public    |instance_info                 |           191|       176981|          175|      119133|              0|             0|             0|            0|
16434|public    |bulk_actions                  |           154|       134958|          603|    12687122|             72|         12614|            20|        10072|
16426|public    |daemon_heartbeats             |          1378|     22333930|          237|    13688550|             43|          2238|             6|         2019|
16482|public    |run_tags                      |        137272|      8452773|        37721|      781590|              0|             0|             0|            0|
16502|public    |event_logs                    |      15061757|    221590786|      2693126|   981757487|         142356|        560068|          1505|       475894|
16404|public    |secondary_indexes             |            45|      8621166|          132|    25764533|              0|             0|             0|            0|
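
A minimal sketch of how these tables can be ranked by physical reads from the same view (using only the pg_statio_user_tables columns shown above), to spot which table is driving the disk IO:

-- rank tables by blocks read from disk (cache misses)
SELECT
    relname,
    heap_blks_read + idx_blks_read AS blks_read_from_disk,
    round(100.0 * heap_blks_hit / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS heap_hit_pct
FROM
    pg_statio_user_tables
ORDER BY
    blks_read_from_disk DESC
LIMIT 10;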

pg_stat_activity also does not show suspicious queries: no long-running queries and nothing stuck in idle_in_transaction.
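
A minimal sketch of the kind of pg_stat_activity check meant here, looking for long-running statements or sessions stuck in idle_in_transaction:

-- long-running statements or sessions stuck inside a transaction
SELECT
    pid,
    state,
    now() - xact_start AS xact_age,
    now() - query_start AS query_age,
    left(query, 80) AS query
FROM
    pg_stat_activity
WHERE
    state = 'idle in transaction'
    OR now() - query_start > interval '5 minutes'
ORDER BY
    query_start;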

Postgres runs on version 14. We are pinning Dagster to 1.7.1 for now and will check if that helps.

matthias-Q · Apr 25 '24

Looks like the issue is the table asset_daemon_asset_evaluations. I have switched off auto-materialization for the moment and the constant read IO is gone. This table is 1.2 GB, by the way; I have no clue why. The daemon tried to delete old evaluations but seemed to be stuck.
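
A minimal sketch of how that table size can be confirmed (the 1.2 GB figure), including indexes and TOAST:

-- total on-disk size of the table, including indexes and TOAST
SELECT
    pg_size_pretty(pg_total_relation_size('public.asset_daemon_asset_evaluations')) AS total_size,
    pg_size_pretty(pg_relation_size('public.asset_daemon_asset_evaluations')) AS heap_only;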

matthias-Q · Apr 29 '24

So, something happened to this table on the 19th, when Dagster 1.7.2 was released:

select date(create_timestamp), count(*)
from public.asset_daemon_asset_evaluations   
group by 1

returns:

date      |count |
----------+------+
2024-03-30|    17|
2024-03-31|   231|
2024-04-01|   203|
2024-04-02|   200|
2024-04-03|   199|
2024-04-04|   201|
2024-04-05|   199|
2024-04-06|   146|
2024-04-07|   156|
2024-04-08|   196|
2024-04-09|    94|
2024-04-10|   270|
2024-04-11|   258|
2024-04-12|   260|
2024-04-13|   279|
2024-04-14|   245|
2024-04-15|   274|
2024-04-16|   260|
2024-04-17|   287|
2024-04-18|   259|
2024-04-19|135770|
2024-04-20|274080|
2024-04-21|141408|
2024-04-22|110208|
2024-04-23| 96576|
2024-04-24| 82752|
2024-04-25| 18875|
2024-04-26|   163|
2024-04-27|   425|
2024-04-28|   262|
2024-04-29|    71|

matthias-Q · Apr 29 '24

I was digging a bit into (possibly related) commits between 1.7.1 and 1.7.2 and stumbled over this: https://github.com/dagster-io/dagster/commit/e4f394e93e79426331cc4f0feac709a0447a67e1 (referring to this line and the ones following it: https://github.com/dagster-io/dagster/blob/7b103db9438bb4e5018c7ae67e845982a9a376ac/python_modules/dagster/dagster/_core/definitions/asset_daemon_context.py#L281)

@OwenKephart You have swapped replace with copy. Can this lead to the issue here, where the daemon creates far more entries in the tables job_ticks and asset_daemon_asset_evaluations?

This is a real issue for us, because we rely on auto-materialization. With it switched on, we see a lot of read IO on the Postgres DB.

matthias-Q · Apr 29 '24

Hi @matthias-Q -- apologies for not seeing this sooner, and I think you've found the exact problem. Luckily, there's already a fix in place for this: https://github.com/dagster-io/dagster/pull/21310.

Upgrading to the latest version of Dagster (1.7.3) should resolve this issue, but feel free to reopen it if that is not the case.

OwenKephart · Apr 29 '24

Hey @OwenKephart, thanks for the response. It looks like the fix has no effect: I upgraded to 1.7.3 and switched auto-materialization back on, and the read IO goes up again.

Could it be that we have too many rows in some tables, namely job_ticks and asset_daemon_asset_evaluations? Since there is no index on the create_timestamp column, this could lead to a lot of table scans. Would it be safe to delete rows where create_timestamp is older than 3 days?
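
A minimal sketch of how the existing indexes on those two tables can be checked:

-- list existing indexes on the two suspect tables
SELECT
    tablename,
    indexname,
    indexdef
FROM
    pg_indexes
WHERE
    schemaname = 'public'
    AND tablename IN ('job_ticks', 'asset_daemon_asset_evaluations');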

I will reopen this for now.

[EDIT] looks like I am not allowed to reopen this issue.

matthias-Q · Apr 30 '24

Hi @matthias-Q -- I think your hypothesis re: lacking an index on create_timestamp makes sense (and so even though there are no longer tons of new evaluations coming in, the old evaluations are slowing things down).

It's definitely safe to delete older records, and I'd encourage that option here. Leaving this issue open for now in case that doesn't resolve it.
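
For reference, a minimal sketch of such a cleanup, assuming a three-day retention window is acceptable (the reported row count can be inspected before committing):

-- remove evaluations older than three days
BEGIN;
DELETE FROM public.asset_daemon_asset_evaluations
WHERE create_timestamp < now() - interval '3 days';
-- check the reported row count, then
COMMIT;

-- reclaim dead space and refresh planner statistics (outside the transaction)
VACUUM ANALYZE public.asset_daemon_asset_evaluations;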

OwenKephart · Apr 30 '24

So in asset_daemon_asset_evaluations we have fewer entries after the upgrade to 1.7.3 (and also had fewer during the downgrade to 1.7.1). I have removed these rows.

For job_ticks the row count is still higher than before (though not as severe as for asset_daemon_asset_evaluations):

select date(create_timestamp), count(*) 
from public.job_ticks 
where create_timestamp > '2024-04-15'
group by 1
order by 1
date      |count|
----------+-----+
2024-04-15|   69|
2024-04-16|   53|
2024-04-17|   56|
2024-04-18|   48|
2024-04-19|   59|
2024-04-20|   63|
2024-04-21|   62|
2024-04-22|   64|
2024-04-23| 1717|
2024-04-24| 3378|
2024-04-25| 3397|
2024-04-26| 3945|
2024-04-27| 3547|
2024-04-28| 3568|
2024-04-29| 2525|
2024-04-30| 1602|

So I assume that there is still something going on.

I took a closer look at job_ticks:

select date(create_timestamp),type, count(*) 
from public.job_ticks jt
group by 1,2

----
date      |type            |count|
----------+----------------+-----+
2024-04-30|AUTO_MATERIALIZE|  384|
2024-04-28|SCHEDULE        |   16|
2024-04-29|SENSOR          | 1715|
2024-04-28|AUTO_MATERIALIZE|  803|
2024-04-27|AUTO_MATERIALIZE|  792|
2024-04-30|SCHEDULE        |    8|
2024-04-29|AUTO_MATERIALIZE|  794|
2024-04-27|SENSOR          | 2738|
2024-04-29|SCHEDULE        |   16|
2024-04-27|SCHEDULE        |   17|
2024-04-30|SENSOR          | 1225|
2024-04-28|SENSOR          | 2749|

Looks like sensors are causing many of these ticks.

Turns out it is just one specific sensor with a specific parent id. It is the sensor that checks for job failures and posts them to an MS Teams channel.
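
A minimal sketch of how the ticks can be grouped per instigator to single out that sensor; selector_id is assumed here to be the column identifying the sensor, so adjust if the schema differs:

-- ticks per sensor over the last three days
-- selector_id is assumed to identify the instigator
SELECT
    selector_id,
    count(*) AS ticks
FROM
    public.job_ticks
WHERE
    type = 'SENSOR'
    AND create_timestamp > now() - interval '3 days'
GROUP BY
    selector_id
ORDER BY
    ticks DESC
LIMIT 10;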

matthias-Q · Apr 30 '24

The issue is resolved. I ran vacuum full verbose analyze asset_daemon_asset_evaluations, which drastically reduced the number of pages from 180k to 1k. Now I see very few IO reads.
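
For reference, a minimal sketch of that command together with a before/after page-count check; note that VACUUM FULL takes an ACCESS EXCLUSIVE lock on the table while it rewrites it:

-- page count before/after (relpages is refreshed by ANALYZE/VACUUM)
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'asset_daemon_asset_evaluations';

-- rewrites the table and its indexes into fresh files, reclaiming dead space
VACUUM FULL VERBOSE ANALYZE asset_daemon_asset_evaluations;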

matthias-Q · May 14 '24