apex-trigger-actions-framework
Removed SOQL query to avoid query row consumption
In Spring '21, Salesforce introduced the ability to access custom metadata type records without consuming rows from the well-known 50,000 query row limit on SOQL queries. Andrew Fawcett shared some thoughts on this topic here as well: https://github.com/SFDO-Community/declarative-lookup-rollup-summaries/issues/1049
This pull request replaces the SOQL query on Trigger_Action__mdt with the getAll() method.
The benefits: fewer query rows are consumed, and all trigger actions are retrieved only once. The downsides: before this adjustment, the query with its WHERE clause was already optimized to return only as many rows as needed, so there was no need to filter the trigger actions in a for loop or to add sorting logic. I think the new approach might cost a few more milliseconds of the available Apex CPU time.
The intention of this pull request is to analyze and discuss these benefits and downsides. The saved query rows are real, but that saving may not matter much when weighed against the extra Apex CPU time; the sketch below illustrates the difference between the two approaches.
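For illustration, here is a minimal sketch of the difference between the two approaches. The field names (Apex_Class_Name__c, Order__c, Before_Insert__c) and the settingId variable are assumptions for this example and may not match the framework's exact schema.

```apex
// Before: the database filters and sorts, but the query consumes query rows.
List<Trigger_Action__mdt> actions = [
    SELECT Apex_Class_Name__c, Order__c
    FROM Trigger_Action__mdt
    WHERE Before_Insert__c = :settingId
    ORDER BY Order__c ASC
];

// After: getAll() consumes no query rows, but every record in the org is
// returned, so filtering (and sorting) must now happen in Apex.
List<Trigger_Action__mdt> filtered = new List<Trigger_Action__mdt>();
for (Trigger_Action__mdt action : Trigger_Action__mdt.getAll().values()) {
    if (action.Before_Insert__c == settingId) {
        filtered.add(action);
    }
}
// Sorting by Order__c would additionally require a Comparable wrapper class,
// since SObject lists cannot be sorted by an arbitrary field out of the box.
```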
Please do not merge until it is clear whether this adjustment is a good idea or not.
- [X] Tests pass: Tests passed in scratch org
- [X] Appropriate changes to README are included in PR: No need
This looks good; it saves a SOQL query. To add to this: it might also be worth looking into caching this data (Platform Cache), since it is not expected to change often.
Thanks for raising this feature @benjaminloerincz! It's definitely nice to reduce the number of query rows that are consumed per transaction. The proposed setup reminds me a lot of Aiden Harding's Nebula Trigger framework.
In the current setup, there is a one-time overhead of about 7-9 ms per sObject+context per transaction, since we query once for each unique sObject+context throughout a transaction's lifecycle (sketched below). This works well for simple triggers, but if you have multiple cascading DML operations across multiple sObjects in a given trigger transaction, the performance could be slightly concerning. I also think the odds of using all 50,000 query rows in a trigger context are pretty small - you probably need to rearchitect your trigger logic if you hit that limit.
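To make that concrete, a once-per-sObject+context lookup is typically achieved with a transaction-scoped (static) cache around the query. The following is a hypothetical sketch with made-up class and field names (TriggerActionLookup, SObject__c, Context__c), not the framework's actual implementation.

```apex
// Hypothetical sketch of a once-per-transaction lookup keyed by sObject + context.
public class TriggerActionLookup {
    // A static map lives for the duration of the Apex transaction, so cascading
    // DML on the same sObject/context reuses the result of the first query.
    private static Map<String, List<Trigger_Action__mdt>> cachedActions =
        new Map<String, List<Trigger_Action__mdt>>();

    public static List<Trigger_Action__mdt> forContext(String sObjectName, String context) {
        String key = sObjectName + '|' + context;
        if (!cachedActions.containsKey(key)) {
            // One SOQL query (and its 7-9 ms overhead) per unique sObject + context.
            // SObject__c and Context__c are placeholder field names.
            cachedActions.put(key, [
                SELECT Apex_Class_Name__c, Order__c
                FROM Trigger_Action__mdt
                WHERE SObject__c = :sObjectName AND Context__c = :context
                ORDER BY Order__c ASC
            ]);
        }
        return cachedActions.get(key);
    }
}
```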
In the proposed setup, we would gain back some query rows per transaction, but there could be a very large number of Trigger Actions within a given org; fetching all of those rows and ordering them for every transaction would use a bit more memory and CPU time. That increase in compute time would apply to every transaction, not just those on the heavily used sObjects (Case, Opportunity, etc.), and the overhead would grow linearly as the org grows and new features get added.
Lawrence Newcombe did an excellent write-up of the performance implications of the different approaches in this article. Generally speaking, I don't have strong feelings one way or the other; both approaches have their pros and cons.
Regarding @thvd's suggestion of using the Platform Cache: I like the idea and have explored it myself. Unfortunately, I would not recommend using the cache unless you have a way to automatically clear the cached values with each deployment. This is possible with sfdx and an automated delivery pipeline using Jenkins, GitHub Actions, or similar; unfortunately, most teams do not have that mature a pipeline. Also, if you are using a third-party DevOps product such as AutoRABIT, Gearset, or Copado, this is not possible to implement programmatically. Without this automated refresh of the cache, you must either add a manual post-deployment step to your delivery process or wait for the cache to expire and reload before your trigger logic executes the way you want.
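For reference, a hedged sketch of what Platform Cache usage could look like, assuming an org cache partition named 'local.TriggerActions' and a one-hour TTL (both assumptions for this example); the clear() method is the piece that would need to run after every deployment to avoid serving stale metadata.

```apex
// Sketch of caching trigger action metadata in the Org platform cache.
// Partition name, key, and TTL are assumptions, not part of the framework.
public class TriggerActionCache {
    private static final String KEY = 'local.TriggerActions.allActions';
    private static final Integer TTL_SECONDS = 3600; // 1 hour

    public static List<Trigger_Action__mdt> getActions() {
        List<Trigger_Action__mdt> actions =
            (List<Trigger_Action__mdt>) Cache.Org.get(KEY);
        if (actions == null) {
            // Cache miss: load from custom metadata and store for later transactions.
            actions = Trigger_Action__mdt.getAll().values();
            Cache.Org.put(KEY, actions, TTL_SECONDS);
        }
        return actions;
    }

    // Would need to be called from a post-deployment script (or manually)
    // so stale metadata is not served after a deployment.
    public static void clear() {
        Cache.Org.remove(KEY);
    }
}
```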
Closing because this is very old; I don't think we will be incorporating this into the framework at this point.