XamlFileGenerator.BuildComplexPropertyValue is a large contributor to GC pause time in Visual Studio (5767 MB/sec of allocations at its worst)
Current behavior
GC pause time is the largest contributor to Visual Studio's unresponsiveness in the wild. We introduced new GC pause analysis that captures live traces on customer machines when GC pause time is high and assigns blame to the largest contributors.
This analysis has flagged XamlFileGenerator.BuildComplexPropertyValue as one of those contributors. Over the past 30 days, we have seen the following stats across 16.10, 16.11 and 17.0. These allocations occur almost entirely in Gen0.
| Blame | # Traces | MB Allocated/Sec (Avg) | MB Allocated/Sec (Max) |
|---|---|---|---|
| uno.ui.sourcegenerators.dll!Uno.UI.SourceGenerators.XamlGenerator.XamlFileGenerator.BuildComplexPropertyValue | 48 | 777.85 | 5767.0 |
Attached is a redacted view of the allocations from BuildComplexPropertyValue in one of the traces we captured. You can use PerfView to open it.
Trace: BuildComplexPropertyValue.perfView.xml.zip
Expected behavior
Reduce Gen0/short lived allocations using the trace as guidance.
How to reproduce it (as minimally and precisely as possible)
This is from live telemetry; we have no repro.
Workaround
No response
Works on UWP/WinUI
No response
Environment
No response
NuGet package version(s)
No response
Affected platforms
No response
IDE
Visual Studio 2022, Visual Studio 2019
IDE version
16.10, 16.11, 17.0
Relevant plugins
No response
Anything else we need to know?
No response
Thanks @davkean for the report. We'll take a look.
We're still wondering why this happens so often; it may be related to the fact that generators are not always cancelled on editor keystrokes (https://developercommunity.visualstudio.com/t/Intermediate-C-source-generators-are-no/1520754).
@jeromelaban https://developercommunity.visualstudio.com/t/Intermediate-C-source-generators-are-no/1520754 is currently closed with action against Uno. If that's not the case, we should probably get that reopened to resolve this. As I see more usage of Uno, we're starting to see it show up more and more in Visual Studio performance telemetry.
To be clear, this analysis currently only blames devenv usage. While in the future we'll start looking at satellite processes, the above analysis does not include the ServiceHub processes.
I agree, there's definitely an issue in Uno, as the generator is doing a lot of work during design-time updates. We'll make adjustments to avoid this (and have already made some to ensure that cancellation happens properly).
Moving this type of work in external processes can't happen soon enough for sure :)
We still have to address https://github.com/unoplatform/uno/issues/7506 by reducing the work we're doing during design-time builds, though cancellation support has improved in the 4.0 release, so this should reduce pressure a bit.
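As an illustration only (this is not Uno's actual generator code, and the type and file handling are made up), a minimal `ISourceGenerator` sketch of where a generator would typically check the host's cancellation token so a stale design-time pass can be abandoned cheaply on an editor keystroke:

```csharp
// Hypothetical sketch, not Uno's implementation: the point is to thread the
// host's CancellationToken through the expensive per-file work so the host
// can cancel a generation pass quickly when the user keeps typing.
using Microsoft.CodeAnalysis;

[Generator]
public sealed class ExampleXamlGenerator : ISourceGenerator
{
    public void Initialize(GeneratorInitializationContext context)
    {
        // No registrations needed for this sketch.
    }

    public void Execute(GeneratorExecutionContext context)
    {
        foreach (var additionalFile in context.AdditionalFiles)
        {
            // Bail out between files instead of finishing the whole pass;
            // this is what keeps a cancelled design-time build cheap.
            context.CancellationToken.ThrowIfCancellationRequested();

            var text = additionalFile.GetText(context.CancellationToken);
            // ... build and emit sources from the file here ...
        }
    }
}
```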
I think the top contributor to the allocations here is https://github.com/dotnet/roslyn/issues/64142
In Uno, GetTypeByMetadataName is called a lot, often with types that don't exist (partly by design), and in Roslyn the fail path of GetTypeByMetadataName allocates unnecessarily.
I'm looking into reducing GetTypeByMetadataName calls in Uno (specifically, those with invalid fully qualified names).
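For illustration, a minimal sketch of the kind of memoization that can cut down repeated failing lookups; this is an assumption about one possible approach, not Uno's code, and it presumes a per-`Compilation` cache (symbols must not be reused across compilations):

```csharp
// Hypothetical helper: cache GetTypeByMetadataName results so repeated probes
// for types that don't exist hit a dictionary instead of re-walking metadata
// (and re-allocating on Roslyn's fail path) each time.
using System.Collections.Generic;
using Microsoft.CodeAnalysis;

internal sealed class CachedTypeResolver
{
    private readonly Compilation _compilation;
    private readonly Dictionary<string, INamedTypeSymbol?> _cache = new();

    public CachedTypeResolver(Compilation compilation) => _compilation = compilation;

    public INamedTypeSymbol? FindType(string fullyQualifiedMetadataName)
    {
        // Cache both hits and misses: the misses are exactly the calls that
        // trigger the allocating fail path described above.
        if (!_cache.TryGetValue(fullyQualifiedMetadataName, out var symbol))
        {
            symbol = _compilation.GetTypeByMetadataName(fullyQualifiedMetadataName);
            _cache[fullyQualifiedMetadataName] = symbol;
        }

        return symbol;
    }
}
```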
@davkean We've made some great improvements to our generators so they run less often and consume less memory and CPU, but we're still guessing about how this plays out in the Roslyn processes maintained by VS.
Would you know of some documentation about how to diagnose memory issues for generators in the context of these processes? Thanks!
Since Roslyn fixed the unnecessary allocations (the fix from https://github.com/dotnet/roslyn/pull/65046 ships in 17.5 Preview 2) and we reduced our number of calls to GetTypeByMetadataName, I don't think the trace in this issue can be used for further optimizations.
Yes, we might still have performance issues in the generator, but at least the things I could clearly see in the trace above should be fixed now.
I don't see further actions to be taken here without getting a new trace. @davkean Would you be able to help with that? Thanks!
Feel free to close it out - I'll open another bug with more traces if this continues to appear in our GC pause data.
Thanks @davkean. More traces would be greatly appreciated.