ApplicationInsights-dotnet

ApplicationInsightsTraceListener with Autoflush setting causes way too much network-dependent flushing

Open · MattGal opened this issue 6 years ago • 11 comments

This commit: https://github.com/microsoft/ApplicationInsights-dotnet-logging/commit/1a31e7b59954bb8f3f00a8855750205a3e36b709

... when combined with a System.Diagnostics.Trace configuration where AutoFlush is set to true, caused our service to call Flush() on every Trace.Write* call, which rapidly made our Azure Cloud Service unusable.

We encountered this when upgrading the libraries from version 2.4 to 2.10. The AutoFlush setting ends up calling the telemetry client's Flush() on every call, leading to rapid loss of network functionality and/or writing so much telemetry to disk that it fills the disk and the service dies.

We're unblocked on the .NET Engineering side of things here, but it'd be useful if the listener tried to play nicely with this setting.

MattGal avatar Aug 23 '19 16:08 MattGal
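For illustration, here is a minimal sketch (not part of the original report) of the combination described above; the instrumentation key is a placeholder, and the same AutoFlush setting can equally come from an app.config `<trace autoflush="true">` element rather than code.

```csharp
// Hypothetical repro sketch of the problematic wiring. Instrumentation key is a placeholder.
using System.Diagnostics;
using Microsoft.ApplicationInsights.TraceListener;

class Program
{
    static void Main()
    {
        // This is the combination that causes the problem:
        Trace.AutoFlush = true; // flush every listener after every write
        Trace.Listeners.Add(
            new ApplicationInsightsTraceListener("00000000-0000-0000-0000-000000000000"));

        // With AutoFlush enabled, each of these calls also invokes Flush() on every
        // listener, and (after the linked commit) ApplicationInsightsTraceListener.Flush()
        // forwards to the telemetry client's Flush(), i.e. one forced flush per trace line.
        for (int i = 0; i < 1000; i++)
        {
            Trace.TraceInformation("message {0}", i);
        }
    }
}
```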

Adding this for context: https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics.trace.autoflush?view=netframework-4.8

TimothyMothra avatar Aug 23 '19 17:08 TimothyMothra
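For readers unfamiliar with the setting: per the linked docs, Trace.AutoFlush simply means Flush() is called on every registered listener after each write. A minimal illustration (not from the thread):

```csharp
using System.Diagnostics;

// With Trace.AutoFlush = true, this single call...
Trace.WriteLine("something happened");

// ...behaves like this pair does when AutoFlush is false:
Trace.WriteLine("something happened");
Trace.Flush(); // Flush() runs on every listener, including ApplicationInsightsTraceListener
```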

This was a pretty unexpected change in behavior, especially since the sample for using this trace listener includes setting this value to true. It will probably break a lot of production servers (like it did ours, for 5 hours, while we struggled to diagnose the problem with no logs available, since it was the logging framework itself that was misbehaving). I already opened an issue on the relevant sample that caused us to set autoFlush to true in the first place: https://github.com/MicrosoftDocs/azure-docs/issues/37662

ChadNedzlek avatar Aug 23 '19 18:08 ChadNedzlek

Thanks for reporting!

Yes, flushing on every item quickly exhausts the TransmissionSender (default capacity of 3 concurrent transmissions), and the SDK is then forced to buffer data heavily in memory or on disk. We'll investigate how to provide a proper fix.

cijothomas avatar Aug 24 '19 01:08 cijothomas
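Until there is a proper fix, one workaround consistent with the explanation above is simply to leave AutoFlush off and flush once at a point you control. A minimal sketch, not an official fix; the 5-second grace period is an assumption about the channel's send cadence, not a documented contract:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.ApplicationInsights.TraceListener;

// Workaround sketch: keep AutoFlush off so the telemetry channel can batch
// items on its own schedule, and flush explicitly on shutdown instead.
Trace.AutoFlush = false;
Trace.Listeners.Add(
    new ApplicationInsightsTraceListener("00000000-0000-0000-0000-000000000000"));

Trace.TraceInformation("normal traces are batched by the channel, not sent one-by-one");

// On shutdown, flush once and give the channel a moment to finish sending.
Trace.Flush();
Thread.Sleep(TimeSpan.FromSeconds(5)); // assumed grace period for the in-memory channel
```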

This issue is stale because it has been open 300 days with no activity. Remove stale label or comment or this will be closed in 7 days.

github-actions[bot] avatar Sep 18 '21 00:09 github-actions[bot]

@cijothomas have you had any chance to investigate more in the past couple years?

MattGal avatar Sep 20 '21 18:09 MattGal

No. The doc change was made to prevent people from accidentally enabling auto-flush. No investment or investigation has gone into the logging adapters for a long time, as most of the effort has been in supporting ILogger-based logging.

cijothomas avatar Sep 20 '21 19:09 cijothomas
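Since the investment has moved to ILogger, a rough sketch of the equivalent wiring with the Microsoft.Extensions.Logging.ApplicationInsights provider may be useful for anyone migrating off the trace listener. The instrumentation-key overload shown here is the one available around the time of this thread; newer releases prefer a connection-string based overload:

```csharp
using Microsoft.Extensions.Logging;

// Sketch only: assumes the Microsoft.Extensions.Logging.ApplicationInsights package.
using ILoggerFactory loggerFactory = LoggerFactory.Create(builder =>
{
    // Instrumentation key is a placeholder; newer SDKs favor connection strings.
    builder.AddApplicationInsights("00000000-0000-0000-0000-000000000000");
});

ILogger logger = loggerFactory.CreateLogger("Example");
logger.LogInformation("Batched and sent by the channel; no per-message Flush() involved.");
```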

This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.

github-actions[bot] avatar Jul 18 '22 00:07 github-actions[bot]

Another year, another reply to keep this issue alive, since as far as I can tell no one is claiming it's resolved.

MattGal avatar Jul 18 '22 16:07 MattGal

No work has been done on any of the logging adapters except ILogger for a long time.

cijothomas avatar Jul 18 '22 16:07 cijothomas

This issue is stale because it has been open 300 days with no activity. Remove stale label or this will be closed in 7 days. Commenting will instruct the bot to automatically remove the label.

github-actions[bot] avatar May 16 '23 00:05 github-actions[bot]

OK, I give up, I will let it be closed with no fix.

MattGal avatar May 16 '23 15:05 MattGal