Investigate memory leak
In ~12 days of uptime, SAMonitor ballooned to ~1 GB of system memory usage. That's not a crazy amount, and RAM is cheap, but considering it previously ran for over a month using under 400 MB, this must've been introduced by a somewhat recent change.
I imagine I can find the reason by reviewing my last 15 commits or so. Can't do that now, so writing this as a reminder to self.
Issue has not reproduced since. Assuming a fluke.
Over the last 46 days of continuous SAMonitor runtime, it ballooned to ~2 GB of system memory usage.
It wasn't causing issues, but it's well above its usual 200 MB. I must investigate whether this is .NET runtime resource caching or a genuine leak. I'll have to implement ways to track allocations.
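A minimal sketch of what that tracking could look like: periodically log the managed (GC) heap size next to the process working set, so it's obvious whether growth comes from managed allocations or from something outside the GC heap. The class and names below are illustrative, not SAMonitor's actual code.

```csharp
// Illustrative sketch only, not SAMonitor's actual code: periodically log the
// managed (GC) heap alongside the OS-level working set for the process.
using System;
using System.Diagnostics;
using System.Threading;

public static class MemoryLogger
{
    // Held in a static field so the timer isn't garbage collected.
    private static Timer? _timer;

    public static void Start(TimeSpan interval)
    {
        _timer = new Timer(_ =>
        {
            // Managed heap: memory the GC has allocated for objects that are
            // live or not yet collected.
            long gcHeap = GC.GetTotalMemory(forceFullCollection: false);

            // Working set: everything the OS counts against the process in RAM,
            // including native allocations, JIT'd code and GC bookkeeping.
            long workingSet = Process.GetCurrentProcess().WorkingSet64;

            Console.WriteLine(
                $"[mem] gcHeap={gcHeap / (1024 * 1024)} MB, " +
                $"workingSet={workingSet / (1024 * 1024)} MB");
        }, null, TimeSpan.Zero, interval);
    }
}
```

Calling something like `MemoryLogger.Start(TimeSpan.FromMinutes(10))` at startup would let the two numbers be correlated over days of uptime.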
Timer assemblies seem to be sticking around after execution.
After ~15 hours in production, GC heap size seems sane, but the working set has grown to ~250 MB (still reasonable).
After 4 days of uptime, the GC heap remains similar, but the working set has ballooned to 600 MB. Will investigate.
Addressing this again:
.NET provides a way to limit the maximum working set (or rather, to report that maximum to the operating system). However, Linux, the operating system hosting SAMonitor in production, does not allow this.
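For reference, the API I believe this refers to is Process.MaxWorkingSet; as far as I know, setting it on Linux simply throws PlatformNotSupportedException. A hedged sketch:

```csharp
// Hedged sketch: Process.MaxWorkingSet is, to my understanding, the knob in
// question. On Linux, where SAMonitor runs in production, setting it throws
// PlatformNotSupportedException, so this is effectively Windows-only.
using System;
using System.Diagnostics;

var proc = Process.GetCurrentProcess();
try
{
    // Ask the OS to cap our working set at ~512 MB (an arbitrary example value).
    proc.MaxWorkingSet = (IntPtr)(512L * 1024 * 1024);
}
catch (PlatformNotSupportedException)
{
    Console.WriteLine("Setting a working set limit isn't supported on this OS.");
}
```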
The next best way to avoid working set ballooning seems to be reducing allocations, so I've introduced a connection pool for MySQL; previously every operation spawned a new connection, and I'm not sure how .NET went about disposing of those. There's also a chance it improves performance somewhat by skipping unnecessary handshakes with MariaDB. A sketch of the pooled setup is below.
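Assuming MySqlConnector as the driver (SAMonitor may well use a different one, and the class, table and credentials below are made up for illustration), pooling is configured through the connection string; each operation keeps its `using` pattern, but Open() hands back a pooled physical connection instead of performing a fresh handshake with MariaDB.

```csharp
// Hedged sketch assuming the MySqlConnector library; names and values are
// illustrative, not SAMonitor's actual schema or configuration.
using System.Threading.Tasks;
using MySqlConnector;

public class ServerRepository
{
    // Pooling=true is the default; MaximumPoolSize bounds how many physical
    // connections can exist at once.
    private const string ConnString =
        "Server=localhost;Database=samonitor;User=monitor;Password=...;" +
        "Pooling=true;MinimumPoolSize=0;MaximumPoolSize=20;";

    public async Task<long> CountServersAsync()
    {
        // Disposing returns the connection to the pool rather than closing it.
        await using var conn = new MySqlConnection(ConnString);
        await conn.OpenAsync();

        await using var cmd = new MySqlCommand("SELECT COUNT(*) FROM servers", conn);
        return (long)(await cmd.ExecuteScalarAsync() ?? 0L);
    }
}
```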
The connection pool update definitely did something; as expected, fewer allocations mean less working set growth (after around 4-5 days, we're well below 600 MB).
This still seems excessive, however, so I'll keep looking into it.
Largely solved.