Investigate very high API reader memory usage
We're seeing more OOMs in v7 than we did in v6. We may need to profile memory usage again to identify key areas for performance improvements and address them.
We should test in the new clusters by reducing pod memory in staging and sending prod traffic into it, so we can capture a memory dump once it crashes.
Is this still an issue? Did we fix it by switching to the slim Docker image and upgrading the Node.js version?
@zone117x Yeah, this is still an issue unfortunately. Changing the LB algorithm to `least_request` helped mask the symptoms, but OOMs still occur frequently, just less often than before.