Free memory of leader instance drops to ~10% every 1-1.5 months
**Free memory of the tile38 leader goes down slowly throughout the month, eventually dropping to ~10%.** I am running tile38 with 1 leader and 2 followers. My EC2 instance runs out of memory after 1 to 1.5 months, and I have to restart the sentinel and tile38, which resolves the issue.
**To Reproduce** My system has 70K+ geofences (on the leader) registered with Kafka callback hooks.
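For reference, each geofence is registered roughly like the sketch below (a minimal example; the hook name, collection, coordinates, radius, and Kafka broker/topic are placeholders, not the real configuration):

```sh
# Register one geofence hook whose fence events are delivered to a Kafka topic.
# Hook name, collection, coordinates, radius, and broker/topic are placeholders.
tile38-cli SETHOOK geofence:1 kafka://kafka-broker:9092/geofence-events \
  NEARBY fleet FENCE POINT 33.5123 -112.2693 500
```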
**Expected behavior** The system should run without any issues, and free memory on the leader should not be exhausted.
**Operating System** OS: Linux (EC2 instance); Version: latest
**Additional context** Running with redis-sentinel in a leader/follower setup.
Grafana/Prometheus query for free memory: `(node_memory_MemFree_bytes{Service="TILE38"} + node_memory_Cached_bytes + node_memory_Buffers_bytes) / node_memory_MemTotal_bytes * 100`
Hi,
When you restart the sentinel and tile38, does the memory return completely to where you would expect it to be? Or does it still seem high after a restart? How many client connections does a single server have at one time? What version of Tile38? Also, is this related to issue #751?
When I restart the sentinel and tile38, memory returns to where I expect it to be. `"connected_clients": 25`, `"connected_slaves": 2`
INFO
{ "ok": true, "info": { "aof_current_rewrite_time_sec": 0, "aof_enabled": 1, "aof_last_rewrite_time_sec": 0, "aof_rewrite_in_progress": 0, "cluster_enabled": 0, "connected_clients": 25, "connected_slaves": 2, "expired_keys": 0, "redis_version": "0.0.0", "role": "master", "slave0": "ip=------,port=9851,state=online", "slave1": "ip=------,port=9851,state=online", "tile38_version": "0.0.0", "total_commands_processed": 2531398952, "total_connections_received": 22958, "total_messages_sent": 17096662, "uptime_in_seconds": 2936756, "used_cpu_sys": 143561, "used_cpu_sys_children": 0, "used_cpu_user": 529994, "used_cpu_user_children": 0, "used_memory": 938383136 }, "elapsed": "152.277µs" }
This issue is not related to the one I raised earlier; that one is resolved now.
After free memory comes down, threads take longer to write locations to the database through the client, eventually blocking threads and increasing CPU. A restart resolves these issues and memory returns to normal.
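One way to see the slowdown is to time a single location write against the leader; a rough sketch (host, collection, id, and coordinates are placeholders):

```sh
# Time one location write (a SET of a point) against the leader.
# Host, collection, id, and coordinates are placeholders.
time tile38-cli -h tile38-leader -p 9851 SET fleet truck:1 POINT 33.5123 -112.2693
```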