dashy
[BUG] Unraid container crashes after running for a short period, won't start back up.
Environment
Self-Hosted (Docker)
Version
2.0.8
Describe the problem
After the Docker container has been running for a few minutes, it crashes. On relaunch, the only contents of the log are:
- Building for production...
WARN A new version of sass-loader is available. Please upgrade for best experience.
error Command failed with signal "SIGSEGV".
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: "build-watch" exited with 1.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Relaunching does not work immediately; I have to wait a few minutes before the container will start without crashing. Once it does start, this is the log that follows: https://pastebin.com/G3HaVNPX
Additional info
No response
Please tick the boxes
- You are using a supported version of Dashy (check the first two digits of the version number)
- You've checked that this issue hasn't already been raised
- You've checked the docs and troubleshooting guide
- You agree to the code of conduct
Thanks for the ticket, and sorry about that. There was a similar issue a few days ago (#637), which looked like a memory leak, but I wasn't able to recreate it. After seeing yours, it does seem to be a memory leak, and it seems to only happen on Unraid servers. I believe it may be caused by the Alpine Node base image, but I have yet to be able to recreate it. Will keep you posted.
I don't know if this helps or not, but I've noticed that when initially starting the container, the memory usage jumps up to 1 GB, then eventually mellows out to about 250 MB. When making any change to the config file, it jumps to over 1 GB again.
By the way, at the time of testing this I am running Unraid version 6.10.0-rc5 (that wasn't noted in my original comment). I'm updating to rc8 to see if that makes any difference.
For reference, the version of Unraid that I'm using is 6.9.2.
I wish my container would mellow out at around 250 MB lol. Mine eventually consumes as much memory as it possibly can, and then the container shuts down.
So I may have some useful information.
My containers' tags are pretty much always just set to "latest" so that they pick up whatever the newest stable release is.
I set the container's tag back a couple of versions to see when the issue starts happening. It looks like 2.0.6 is the last version that works correctly, with no memory leak. I've had the container running for about 20 minutes now and it's still sitting right at around 330 MB.
As soon as I set the tag to "2.0.7" the memory leak starts happening again.
I'm no developer, but I have an idea of what it could be. This could be way off, as I have no idea what I'm talking about, but I'm just throwing out a theory lol.
For Dashy, I use the little green/red dot tool to check whether a service is online or not. I've never been able to get Home Assistant, Radarr, and Sonarr set up with it, so I disabled the online checker for those services. For reference, here is what my dashboard looks like in 2.0.6:
https://i.imgur.com/4n0XwWL.png (I've red-boxed the relevant parts).
Now look what it looks like in 2.0.7: https://i.imgur.com/LutpeyE.png
No changes were made to the configuration files, and yet it looks like the online checker is running for those three services. Could it be that the checker is constantly running and failing, and that's what's causing the memory leak?
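In case it's useful, this is roughly how I have the checker turned off for those three in my conf.yml (a rough sketch from memory, with placeholder URLs; as far as I understand it, statusCheck is the per-item toggle):

  appConfig:
    statusCheck: true              # checks left on globally for everything else
  sections:
    - name: Media
      items:
        - title: Sonarr
          url: http://192.168.1.100:8989   # placeholder address
          statusCheck: false               # online checker off for this item
        - title: Radarr
          url: http://192.168.1.100:7878
          statusCheck: false
        - title: Home Assistant
          url: http://192.168.1.100:8123
          statusCheck: false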
Again, just theorizing, but I thought I'd provide this info.
That's really helpful, thank you. It's weird, because there were no backend changes in that release, but there was a change in the upstream base image (node:16.13.2-alpine). If that was the cause, then the fix should just be to pin the Docker base image to a more stable version.
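For anyone who wants to experiment before a fix is released, the change would look something like this in the Dockerfile (a sketch only; the exact version or digest to pin to still needs to be confirmed):

  # Sketch only: pin the base image to an exact, known-good version rather than a
  # floating tag, so an upstream rebuild can't silently change the build environment.
  # The tag below is an example, not a confirmed-good version; pinning by digest
  # (e.g. node:16-alpine@sha256:<digest>) is stricter still.
  FROM node:16.13.0-alpine
  # ...the rest of the existing Dockerfile stays the same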
In the meantime, are you okay sticking with 2.0.6? I'll let you know here once I've got a fix ready.
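If it helps, pinning the tag looks roughly like this in a compose file (a sketch with example values; on Unraid the equivalent is setting the template's Repository field to lissy93/dashy:2.0.6 instead of latest):

  services:
    dashy:
      image: lissy93/dashy:2.0.6    # last version reported here without the leak
      restart: unless-stopped
      ports:
        - 4000:80                   # example port mapping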
That status check bug was also raised in #651, and I've got a fix ready for the next update :)
This issue has gone 6 weeks without an update. To keep the ticket open, please indicate that it is still relevant in a comment below. Otherwise it will be closed in 5 working days.
This issue was automatically closed because it has been stalled for over 6 weeks with no activity.
@jeremytodd1 @TheAlchemist606
Can one of you confirm, if this is still an issue?
Thanks!
This issue was automatically closed because it has been stalled for over 1 year with no activity.