error-pages
CPU spikes
Is there an existing issue for this?
- [X] I have searched the existing issues
- [X] And it has nothing to do with Traefik
Describe the bug
I noticed CPU spikes on one of my VMs and tracked it down with:
docker stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
Roughly every 5-10 seconds the error-pages container spikes to 8-16% CPU, while my other containers sit at about 1%.
NAME              CPU %     MEM USAGE / LIMIT
traefik           0.00%     24.01MiB / 1.929GiB
error-pages       17.05%    2.922MiB / 1.929GiB
adguardhome       0.02%     45.77MiB / 1.929GiB
portainer         0.00%     8.27MiB / 1.929GiB
dozzle            0.00%     13.95MiB / 1.929GiB
sp_watchtower     0.02%     2.418MiB / 1.929GiB
apache            0.01%     5.945MiB / 1.929GiB
sp_traefik        0.01%     9.016MiB / 1.929GiB
authelia          0.00%     21.08MiB / 1.929GiB
sp_portainer      0.01%     2.32MiB / 1.929GiB
whoami            0.00%     1MiB / 1.929GiB
traefik-bouncer   0.00%     18.5MiB / 1.929GiB
Honestly, I have no idea why; I closed all HTTP(S) connections to the server and still saw this issue.
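For reference, a simple way to sample just this one container over time is a rough sketch like the following; the one-second interval is arbitrary and the container name matches the output above:

while true; do
  echo "$(date +%T) $(docker stats --no-stream --format '{{.CPUPerc}}' error-pages)"
  sleep 1
done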
Steps to reproduce
No response
Configuration files
error-pages:
  <<: *common-keys-alwaysup # See EXTENSION FIELDS at the top
  image: tarampampam/error-pages:2.20.0 # Using the latest tag is highly discouraged. Please, use tags in X.Y.Z format
  container_name: error-pages
  command:
    - "--log-level=error"
    - "serve"
  networks:
    - traefik
  volumes:
    - ${APPDATA_DIR:?err}/error-pages/mytemplates:/opt/templates:ro # my templates
  environment:
    # TEMPLATE_PATH: /mytemplates/matrix.html
    TEMPLATE_NAME: matrix # set the error pages template
    SHOW_DETAILS: true
  labels:
    traefik.enable: true
    traefik.http.services.error-pages.loadbalancer.server.port: 8080
    com.centurylinklabs.watchtower.monitor-only: true
Relevant log output
No response
Anything else?
Maybe you can tell me what could cause this?
Thanks a lot, and I really like this project.
Oh, thanks for your issue! How can I reproduce this locally?
Well, I don't know; I even reduced my containers to a bare minimum and still noticed the same thing. Doesn't it spike on your server? At the moment I've excluded it from the core containers that run on every host, because a few of my hosts have poor performance, even though I'd like to have it everywhere.
Yeah, from time to time I test the performance and resource usage, and everything looks OK. In addition, I have tested that right now:
[Screenshot at 2023-02-24 11-30-30] https://user-images.githubusercontent.com/7326800/221118700-59f21697-9432-445c-8fc4-627725b4dcb1.png
The spikes that you can see (1-3% of one core) are OK for a Go-based application; it looks like the GC is running and the in-memory template cache is being cleaned (https://github.com/tarampampam/error-pages/blob/36673a49a4a2a0e6548046fa814fbc3c1eeef2b8/internal/tpl/render.go#L95-L106).
Should I dig deeper?
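As a side note, one way to check whether the spikes actually line up with GC runs is the Go runtime's built-in GC trace. This is only a sketch: it assumes the image accepts the same arguments as in the compose file above and that the container's stderr is visible via docker logs.

docker run --rm -e GODEBUG=gctrace=1 tarampampam/error-pages:2.20.0 --log-level=error serve
# the runtime prints one "gc N @...s ..." line per collection, which can be
# compared against the timestamps of the CPU spikes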
Does the GC really kick in every ~5 seconds? I think it's a bit strange that it spikes while zero connections are active: Apache is 100% idle and Traefik has a little activity, but nothing comparable to error-pages spiking to 8-16% every ~5 seconds.
Okay, I'll research this case deeper, stay tuned
Awesome. Thanks :)
Hello there again! After numerous updates to Go and various tests with other Go-based apps, I've come to this conclusion: the spikes appear to be caused by GC runs, and disabling the GC isn't an option for us, so there's no legitimate way to get rid of them. Unfortunately, I'm closing this issue, as there's nothing more I can do about it.
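For anyone landing here on a very constrained host: the Go runtime reads the standard GOGC environment variable, and on Go 1.19+ also GOMEMLIMIT, so GC frequency can in principle be tuned from the compose file. This is only a sketch, assuming the application does not override the runtime defaults, and whether it actually smooths out these spikes is untested.

error-pages:
  image: tarampampam/error-pages:2.20.0
  environment:
    GOGC: "200"          # trigger the GC less often than the default of 100 (value is an untested assumption)
    # GOMEMLIMIT: 64MiB  # optional soft memory limit for the runtime (Go 1.19+)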