azure-functions-python-worker

Function app memory increase on each new invocation

Open m3ck0 opened this issue 2 years ago • 6 comments

Is your question related to a specific version? If so, please specify: 4

What binding does your question apply to, if any? Event Hub Binding

Question

Hello, we currently have a v4 function app on the Python 3.9 runtime that is triggered by an Event Hub binding and is intended for lightweight ETL.
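
For context, a minimal sketch of the kind of Event Hub-triggered function described here, assuming the v1 Python programming model; the handler body is a placeholder, not the author's actual ETL code:

```python
import logging

import azure.functions as func


def main(event: func.EventHubEvent):
    # The event body arrives as bytes; decode it before any transformation.
    body = event.get_body().decode("utf-8")
    logging.info("Processing Event Hub message of %d bytes", len(body))
    # ... the lightweight ETL work would happen here ...
```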

The problem we're facing is that memory consumption increases from invocation to invocation by several kilobytes (or megabytes in some cases). The function code itself is profiled with memory_profiler (psutil & tracemalloc), and both report in their traces that memory keeps increasing and is never released.
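
As an illustration of this kind of in-invocation profiling, here is a minimal sketch that compares tracemalloc snapshots around the body of a single invocation; the do_etl helper is a hypothetical stand-in for the real ETL code:

```python
import logging
import tracemalloc

import azure.functions as func

# Start tracing allocations once, when the worker imports the module.
tracemalloc.start()


def do_etl(event: func.EventHubEvent) -> None:
    # Hypothetical stand-in for the lightweight ETL described above.
    _ = event.get_body()


def main(event: func.EventHubEvent):
    before = tracemalloc.take_snapshot()
    do_etl(event)
    after = tracemalloc.take_snapshot()
    # Log the ten largest allocation deltas made inside the entry point.
    for stat in after.compare_to(before, "lineno")[:10]:
        logging.info("%s", stat)
```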

We've tried some test refactoring to:

  • manually close all connections (even though we think this defeats connection caching)
  • manually delete (via the del statement) objects that show memory growth during the application code's execution; see the cleanup sketch after the next paragraph
  • set the worker process and thread count to 1

But the overall picture stays the same: over the course of a day, the function app's memory consumption may rise from 300 MB to 1-3 GB, for as long as the App Service plan allows it.
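
To make the attempts above concrete, here is a minimal sketch of the explicit-cleanup approach: deleting intermediates and forcing a GC pass after each invocation. The intermediate data is illustrative only, and in CPython this kind of manual cleanup should not normally be required. Pinning the worker process and thread count to 1 is typically done via the FUNCTIONS_WORKER_PROCESS_COUNT and PYTHON_THREADPOOL_THREAD_COUNT app settings.

```python
import gc
import logging

import azure.functions as func


def main(event: func.EventHubEvent):
    # Build an intermediate structure standing in for the real ETL output.
    payload = event.get_body()
    rows = [payload[i:i + 64] for i in range(0, len(payload), 64)]
    logging.info("Processed %d chunks", len(rows))

    # Explicitly drop references and force a collection, as attempted above.
    del rows, payload
    gc.collect()
```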

During profiling we noticed that the memory behaviour of the code executed inside the entry point looks healthy (increases are matched by decreases, with no stray extra allocations), yet each new invocation starts with memory already higher than before, i.e. the growth happens outside of the entry point.
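
One cheap way to confirm growth that happens outside the entry point is to log the worker process's resident set size at the start of every invocation; a minimal sketch using psutil, which is already part of the profiling toolchain mentioned above:

```python
import logging
import os

import psutil

import azure.functions as func

# Handle to the long-lived worker process that hosts every invocation.
_process = psutil.Process(os.getpid())


def main(event: func.EventHubEvent):
    # RSS measured before any per-invocation work: if this climbs from
    # invocation to invocation, the growth lives outside the entry point.
    rss_mb = _process.memory_info().rss / (1024 * 1024)
    logging.info("Worker RSS at invocation start: %.1f MB", rss_mb)
    # ... function body ...
```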

Could you suggest additional debugging steps to resolve this memory issue, or ways to gather more information that would help fix it? If any further details are needed, I'd be happy to share them!

m3ck0 avatar Oct 03 '22 04:10 m3ck0

@m3ck0 please see the link below for more information: https://stackoverflow.com/questions/25823708/memory-occupied-by-my-application-keeps-increasing-with-time If the issue is not resolved, please provide us with the function app name, invocation ID and timestamp so that we can investigate further.

ramya894 avatar Oct 06 '22 11:10 ramya894

This issue has been automatically marked as stale because it has been marked as requiring author feedback but has not had any activity for 4 days. It will be closed if no further activity occurs within 3 days of this comment.

ghost avatar Oct 10 '22 12:10 ghost

Hi @m3ck0 -

  1. Did you profile your memory usage locally with Core Tools? Instructions on how to profile memory are here.
  2. Can you share your function app name? If you want to share it privately, please use this to share the app name (and invocation ID) privately.

vrdmr avatar Oct 10 '22 15:10 vrdmr

Hi, sorry for the late reply due to the holidays.

Thanks @vrdmr - I've tried the 1st option, and that's where the results described in the original post came from. I'm going to share the details with the team via the 2nd (thanks).

@ramya894 thanks for the link; we have similar static methods that return connections, but as I've mentioned, we've already implemented manual object deletion (without success).

These are the details of the function invocation for the Azure team to investigate:

  • 2022-10-12T02:42:16.380 [Information] 2022-10-12 02:42:16,379 [INFO] azure_functions_worker: Received FunctionInvocationRequest, request ID: 758ded2c-e9c9-4546-8c70-73be7ffce73a, function ID: d5a80c5f-c61b-4d6d-8110-e9a86a81375d, function name: us*****nts, invocation ID: bdcacea1-95ea-4f4c-9cbe-d173203d7af8, function type: sync, sync threadpool max workers: 1
  • region: East US

m3ck0 avatar Oct 12 '22 02:10 m3ck0

Hi @m3ck0, I checked the memory usage of your function app. I do see high memory usage over the past 3 days. From our tests, I couldn't find any evidence that the Python worker itself is leaking memory, so this needs some more investigation. Can you create a support ticket so that we can get some more information on your function app? https://learn.microsoft.com/en-us/azure/azure-portal/supportability/how-to-create-azure-support-request

gavin-aguiar avatar Oct 12 '22 18:10 gavin-aguiar

Is there any update on this? We seem to have the same issue.

AL1027015 avatar Oct 17 '22 09:10 AL1027015

Same here. Any update on this?

Stropek avatar Oct 25 '22 21:10 Stropek

Hi, no updates yet; we're still working internally on creating the Azure support ticket. @Stropek @AL1027015 you're welcome to share any news here if you have any.

m3ck0 avatar Oct 26 '22 09:10 m3ck0

Just leaving a progress note here in case it helps others. It seems the issue is related to the logging library we're using in the project (the Python logz.io handler). Its documentation has a special section on serverless deployments, where the implementation is supposed to flush the loggers, but so far that does not help.

Update 08/11/2022: after some custom code updates, the logz.io flusher now correctly flushes the loggers, and the increased memory consumption between invocations is no longer an issue. This solves my problem, which was caused by the logz.io logger integration.
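
For readers hitting the same symptom, here is a minimal sketch of flushing the attached logging handlers at the end of each invocation, in the spirit of the fix described above; the exact logz.io shipper configuration is not reproduced here:

```python
import logging

import azure.functions as func


def main(event: func.EventHubEvent):
    logging.info("Handling event of %d bytes", len(event.get_body()))
    try:
        pass  # ... ETL work ...
    finally:
        # Flush every handler so buffered records (and the memory backing
        # them) are released before the invocation returns.
        for handler in logging.getLogger().handlers:
            handler.flush()
```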

m3ck0 avatar Nov 04 '22 06:11 m3ck0

I'm experiencing this very same issue, and I use the standard logging library.

babaMar avatar Apr 05 '23 09:04 babaMar

It is not an AWS Lambda function. You have to be very gentle with it.

solomem avatar Jun 23 '23 12:06 solomem