
High memory usage by eKuiper v2.2.1

Open · ankit-4129 opened this issue on Oct 30 '25 · 5 comments

Environment:

  • eKuiper version (e.g. 1.3.0): 2.2.1
  • Hardware configuration (e.g. lscpu):
Architecture:                x86_64
  CPU op-mode(s):            32-bit, 64-bit
  Address sizes:             39 bits physical, 48 bits virtual
  Byte Order:                Little Endian
CPU(s):                      4
  On-line CPU(s) list:       0-3
Vendor ID:                   GenuineIntel
  Model name:                Intel Atom(R) x6425E Processor @ 2.00GHz
    CPU family:              6
    Model:                   150
    Thread(s) per core:      1
    Core(s) per socket:      4
    Socket(s):               1
    Stepping:                1
    CPU(s) scaling MHz:      90%
    CPU max MHz:             3000.0000
    CPU min MHz:             800.0000
  • OS (e.g. cat /etc/os-release):
NAME="Oracle Linux Server"
VERSION="9.6"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="9.6"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Oracle Linux Server 9.6"
  • Others:

What happened and what you expected to happen: High memory usage by eKuiper with about 6000 rules.

2025-10-30T07:08:38.425199590+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="register rest endpoint for component prometheus" file="server/rest.go:235"
2025-10-30T07:08:38.425297557+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="start service pprof" file="server/server.go:278"
2025-10-30T07:08:38.425351159+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="start service prometheus" file="server/server.go:278"
2025-10-30T07:08:38.425398513+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="using ListenMode 'http'" file="server/server.go:76"
2025-10-30T07:08:38.425454337+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="Run pprof in 127.0.0.1:6060" file="server/pprof_init.go:45"
2025-10-30T07:08:38.425616008+00:00 stdout F time="2025-10-30T07:08:38Z" level=info msg="Serving kuiper (version - ) on port 20498, and restful api on http://0.0.0.0:59720." file="server/server.go:291"
2025-10-30T07:08:38.425616008+00:00 stdout F Serving kuiper (version - ) on port 20498, and restful api on http://0.0.0.0:59720.
2025-10-30T07:08:38.428911074+00:00 stderr F panic: listen tcp 0.0.0.0:59720: bind: address already in use
2025-10-30T07:08:38.428911074+00:00 stderr F 
2025-10-30T07:08:38.428911074+00:00 stderr F goroutine 163308 [running]:
2025-10-30T07:08:38.428911074+00:00 stderr F github.com/lf-edge/ekuiper/v2/internal/server.StartUp.func1()
2025-10-30T07:08:38.429024986+00:00 stderr F 	github.com/lf-edge/ekuiper/v2/internal/server/server.go:264 +0x12a
2025-10-30T07:08:38.429024986+00:00 stderr F created by github.com/lf-edge/ekuiper/v2/internal/server.StartUp in goroutine 1
2025-10-30T07:08:38.429024986+00:00 stderr F 	github.com/lf-edge/ekuiper/v2/internal/server/server.go:260 +0x143e

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

ankit-4129 · Oct 30 '25

That's a lot of rules. What is the current memory usage and what were you expecting? Could you run curl -o heap.pprof "http://localhost:6060/debug/pprof/heap" to fetch the heap profile for analysis? Thanks.
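
A minimal sketch for inspecting the profile locally (assuming a Go toolchain is installed; the pprof endpoint and port are the ones shown in your startup log):

# fetch the heap profile from the pprof server (127.0.0.1:6060 per the log)
curl -o heap.pprof "http://localhost:6060/debug/pprof/heap"

# print the top allocators from the saved profile
go tool pprof -top heap.pprof

# or open an interactive web view of the same profile
go tool pprof -http=:8080 heap.pprof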

ngjaying · Oct 31 '25

Our containers have a limit of ~800 MB, and we were expecting the memory usage to stay under that limit.

pprof overview: [screenshot attached]

Here is the attached heap profile: curl_pprof_output_31_oct.zip

gaurangomar · Oct 31 '25

Each rule allocates memory for its own internal structures and state, so the current consumption is a direct result of the high rule count. You could try to reduce the rule count; the sketch below shows how to enumerate rules and check per-rule metrics.
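
A minimal sketch using the eKuiper REST API (assuming the API port from your startup log, 59720, and a placeholder rule id):

# list all registered rules
curl "http://localhost:59720/rules"

# show runtime metrics for a single rule, including its buffer usage
curl "http://localhost:59720/rules/<rule_id>/status"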

ngjaying · Oct 31 '25

According to your heap profile, most of the per-rule memory usage is likely caused by bufferLength.

You can lower each rule's bufferLength, e.g. to 256; the default value is 1024. @gaurangomar
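
A minimal sketch of setting a smaller bufferLength when updating a rule over the REST API (the rule id, SQL, and action below are illustrative; the port is taken from your startup log):

curl -X PUT "http://localhost:59720/rules/rule1" \
  -d '{
    "id": "rule1",
    "sql": "SELECT * FROM demo",
    "actions": [{ "log": {} }],
    "options": { "bufferLength": 256 }
  }'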

Yisaer · Nov 03 '25

@ngjaying We have increased the container limit to 1.6 GB for the same number of rules and streams, but the memory is still increasing; it is currently consuming 1.55 GB.

Can you help me understand what changes we can make, or how much memory eKuiper should consume for 5800 rules?

gaurangomar · Nov 11 '25