[ELE-53] Surface warehouse size alongside execution time
As a user, I might run a specific dbt model using differently sized Snowflake warehouses.
Warehouse size greatly affects a model's execution time, so when I look at the execution time of a specific model over a number of runs, it can appear quite irregular.
Ideally, the warehouse size would be surfaced alongside the execution time and available as a filter in the report.
Thanks @OliverRamsay! I actually experimented a bit with doing that (https://github.com/elementary-data/dbt-data-reliability/pull/36) and we ended up not including it.
The good news is that it's possible, just challenging: we would need to implement it separately for each platform, and we are not sure about the performance impact.
Is that something you are open to contributing and testing on your env (just for Snowflake and just for the dbt package)? The code in my PR is about 80% there, I believe.
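For context, here is a rough sketch of what the Snowflake-specific piece could look like. The macro names are hypothetical (not the package's actual API), and it assumes a dispatched macro that reads the running warehouse's size via `SHOW WAREHOUSES` and returns `none` on other adapters:

```sql
{# Hypothetical sketch -- macro names are illustrative, not the actual elementary API. #}
{# Returns the size of the warehouse the dbt connection is using on Snowflake, #}
{# and none on other adapters, so it can be stored alongside run results. #}

{% macro get_warehouse_size() %}
    {{ return(adapter.dispatch('get_warehouse_size')()) }}
{% endmacro %}

{% macro default__get_warehouse_size() %}
    {# Non-Snowflake platforms have no concept of warehouse size. #}
    {{ return(none) }}
{% endmacro %}

{% macro snowflake__get_warehouse_size() %}
    {% if execute %}
        {# SHOW WAREHOUSES returns a SIZE column; filter to the target warehouse. #}
        {# Note: a per-model snowflake_warehouse config could override this value. #}
        {% set results = run_query("show warehouses like '" ~ target.warehouse ~ "'") %}
        {% if results and results.rows %}
            {{ return(results.rows[0]['size']) }}
        {% endif %}
    {% endif %}
    {{ return(none) }}
{% endmacro %}
```

One caveat (and part of the performance concern mentioned above): each call issues an extra metadata query against Snowflake, so it would probably need to run once per invocation rather than once per model.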
This issue seems to relate to https://github.com/elementary-data/elementary/issues/406.