zammad-docker-compose
Performance issue, zammad-railsserver does not free up RAM when requested to reduce
I am currently using Zammad version 5.3.1-6.
I am encountering issues regarding server downtime and response time.
I faced difficulties while attempting to upgrade Zammad to version 6, and I felt helpless during the upgrade process.
Therefore, I am seeking a solution to address this 504 problem. Could you please advise me on what steps I should take?
After running for a few hours, my server's memory usage reached 99%, despite having only 12 active agents.
The peculiar aspect is that even when there is no activity and no incoming requests, the memory usage does not decrease; instead, it continues to increase.
Mem: 125GB Disk: 1.8T
rails r "p User.joins(roles: :permissions).where(roles: { active: true }, permissions: { name: 'ticket.agent', active: true }).uniq.count"
257
rails r "p Sessions.list.count"
21
rails r "p User.count"
389808
rails r "p Overview.count"
125
rails r "p Group.count"
80
rails r 'p Delayed::Job.count'
904
Our Zammad railsserver configuration:

```yaml
version: '3'
services:
  zammad-railsserver:
    environment:
      - WEB_CONCURRENCY=8
      - MAX_THREADS=8
      - MIN_THREADS=2
      - ZAMMAD_SESSION_JOBS_CONCURRENT=8
      - ZAMMAD_WEB_CONCURRENCY=8
```
Please tell me how to fix this. I really need help from you guys.
(Sorry, English is my second language, so I may not have expressed myself correctly.)
Hello @levanluu. I'm not sure if we can help you here. Can you provide some more details please? Which processes are active, how much memory do they consume, any errors in the logs?
Hi @mgruner, we have no errors in the logs.
```
- WEB_CONCURRENCY=8
- MAX_THREADS=8
- MIN_THREADS=2
- ZAMMAD_SESSION_JOBS_CONCURRENT=8
- ZAMMAD_WEB_CONCURRENCY=8
```
We configured it this way so that 8 processes run in parallel to handle requests.
As I mentioned above, we are running the server with the following specs:
Mem: 125GB Disk: 1.8T
and zammad-railsserver keeps using more and more memory. Is there a way to release it when there are no incoming requests? It never frees up unless I restart it manually.
Zammad 5.3 is extremely old and may contain bugs or issues that are no longer present in newer releases. Upgrading to the current 6.2 is a must.
Also I'm sure you're talking about Elasticsearch eating 50% of your memory and not Zammad itself.
Can you try setting the web concurrency to 0, or a small value? For so few agents, I doubt that you need parallel processes next to the threading at all.
And yes, how much memory does elasticsearch take?
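For illustration, such a change could look roughly like this in the compose file. This is a sketch only, reusing the environment variable names from the configuration quoted above; the thread counts are assumed values, not figures from this thread.

```yaml
# Sketch only: mirrors the variable names from the config quoted earlier.
# With WEB_CONCURRENCY=0, Puma typically runs in single mode (no forked
# worker processes) and relies on threads alone; the thread counts below
# are illustrative.
services:
  zammad-railsserver:
    environment:
      - WEB_CONCURRENCY=0
      - ZAMMAD_WEB_CONCURRENCY=0
      - MIN_THREADS=4
      - MAX_THREADS=16
```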
Hi @MrGeneration, I created a ticket about my upgrade issue here. I tried the upgrade many times but it failed, and my system was down for several days during those attempts.
Can you try setting the web concurrency to 0, or a small value? For so few agents, I doubt that you need parallel processes next to the threading at all.
Yes, I tried this, and my system became slow and threw 504 errors continuously.
And yes, how much memory does elasticsearch take?
I allocated 16GB to Elasticsearch:
```yaml
zammad-elasticsearch:
  image: bitnami/elasticsearch:8.5.1
  environment:
    - discovery.type=single-node
    - "ES_JAVA_OPTS=-Xms16g -Xmx16g"
  restart: ${RESTART}
  volumes:
    - elasticsearch-data:/bitnami/elasticsearch/data
```
This is the state of the server right after I restarted it a few hours ago. After a while, zammad-railsserver_1 takes up all of the server's memory, while zammad-elasticsearch never exceeds 17GB.
I hope you understand my problem. Thank you very much for your attention to this issue.
Please provide the railsserver logfile entries containing ERROR, ideally from a time window in which the issue occurs.
I checked the logs and did not actually find any errors related to the container issue here.
The problem I'm having is that zammad-railsserver doesn't automatically free up memory when there are no incoming requests
When memory reaches the server's threshold, requests to it become unresponsive or respond slowly, throwing a 504 error.
This is the server information after one night of operation. Currently, all of our agents are inactive and there are only very few requests, but the memory is still held and cannot be freed.
The memory behaviour in Ruby is that memory which was once allocated is generally not returned to the operating system; it may only be reused for future allocations within the same process. That is normal behaviour.
However, memory usage seems to be unusually high on your railsserver. Here I could only recommend reducing the web concurrency (and thus the number of processes) step by step to see if it helps, perhaps while increasing the number of threads (which share the same process context).
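As a rough sketch of that step-by-step approach, assuming the same environment variables as in the quoted configuration, one intermediate step might look like this (the numbers are placeholders, not tested values):

```yaml
# Sketch: one intermediate step, halving the worker processes while
# raising the thread ceiling. Values are placeholders to illustrate
# the "fewer processes, more threads" idea.
services:
  zammad-railsserver:
    environment:
      - WEB_CONCURRENCY=4
      - ZAMMAD_WEB_CONCURRENCY=4
      - MIN_THREADS=4
      - MAX_THREADS=16
```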
But what I also find very strange is the high load on both the rails server and the database server. Where does this come from? To analyze this, you might want to monitor the actual network traffic, or the database requests.
Do you have any code customization in your Zammad, or in your docker compose stack?
The scheduler is also at 100% CPU. This looks strange as well. How many configured ticket overviews do you have in your system?
rails r "p User.joins(roles: :permissions).where(roles: { active: true }, permissions: { name: 'ticket.agent', active: true }).uniq.count"
257
rails r "p Sessions.list.count"
21
rails r "p User.count"
389808
rails r "p Overview.count"
125
rails r "p Group.count"
80
rails r 'p Delayed::Job.count'
904
These are the parameters you asked for, @mgruner.
Do you have any code customization in your Zammad, or in your docker compose stack?
I only increased PostgreSQL max_connections to 2000,
and the Elasticsearch memory limit is set to 16GB.
Other than that there are no changes
125 Overviews is a lot, and it depends on their configuration as well how costly they are. I would recommend to also reduce ZAMMAD_SESSION_JOBS_CONCURRENT. There is a chance of causing too much system load with too high values here. You could try setting it to 1 or a similar low value. The only risk here is that overviews in the browser might be slightly outdated.
This might solve the CPU issue of the scheduler, and maybe also the database load. But probably not the memory issue of the railsserver.
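A minimal sketch of that change, assuming the variable stays on the zammad-railsserver service where it already appears in the configuration quoted earlier:

```yaml
# Sketch: lower the session job concurrency as suggested above.
# Placed on zammad-railsserver only because that is where the variable
# appears in the configuration quoted earlier in this thread.
services:
  zammad-railsserver:
    environment:
      - ZAMMAD_SESSION_JOBS_CONCURRENT=1
```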
Any news on this @levanluu?