
High Memory Usage/Memory not freed

[Open] blaines opened this issue 7 years ago • 11 comments

I'm running many instances of the container on many hosts, but on some hosts the memory begins to grow without being freed. Are there any recommendations to resolve this? Thanks!

100% on the graph is 128 MB.

[Screenshot: memory usage graph, 2016-09-15 1:00 PM]

docker inspect image:

        "Id": "sha256:6c7afda380b21c90161d54a1f6d407f17d015efc9fc0b7b8cab7c3fdb6d96513",
        "RepoTags": [
            "gliderlabs/logspout:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "2016-05-23T21:36:58.770911598Z",
        "Container": "79e44e12ada6373446e23ed6dec4e807533339c3346c2035baa2c37f0053ac75",

blaines · Sep 15 '16 20:09
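A minimal mitigation sketch, not from the original report: cap the container's memory and let Docker restart it at the limit, which bounds the impact while the leak is investigated. The image tag comes from the inspect output above; the syslog endpoint is a placeholder.

docker run -d --restart=always --memory 128m --memory-swap 128m \
             -v /var/run/docker.sock:/var/run/docker.sock \
             gliderlabs/logspout:latest \
             syslog://your-syslog-host:514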

We have the same issue, memory is constantly growing and it is not freed.

Halama · Jan 19 '17 09:01

Any news about this issue? We are using syslog+tls://logs.papertrailapp.com:***. Here are the stats for our logging container after 10 days:

CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O               BLOCK I/O           PIDS
60b1f70de474        0.14%               125.1 MiB / 128 MiB   97.72%              74.53 MB / 442.7 MB   8.249 MB / 0 B      0

Docker inspect:

docker inspect 60b1f70de474
[
    {
        "Created": "2017-03-02T11:42:13.240664727Z",
        "Path": "/bin/logspout",
        "Args": [
            "syslog+tls://logs.papertrailapp.com:***"
        ],
        "Image": "sha256:6c7afda380b21c90161d54a1f6d407f17d015efc9fc0b7b8cab7c3fdb6d96513"
    }

Halama · Mar 13 '17 10:03

@Halama no, there is not "any news" about this issue. Without knowing the differences between hosts that are hogging memory, all I can guess is that you are generating more logs than can be shipped by logspout.

josegonzalez · Mar 13 '17 11:03
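A rough way to test that guess, using the container ID from the stats above: take two docker stats snapshots ten minutes apart. If MEM USAGE keeps climbing while NET I/O barely moves, logs are being read faster than they can be shipped.

# snapshot memory and network counters
docker stats --no-stream 60b1f70de474
sleep 600
# compare: growing MEM USAGE with stalled NET I/O suggests queued, unshipped logs
docker stats --no-stream 60b1f70de474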

Thanks @josegonzalez. We are experiencing the same behaviour on all of our hosts running the same containers. These are the stats for today:

CONTAINER           CPU %               MEM USAGE / LIMIT   MEM %               NET I/O              BLOCK I/O           PIDS
60b1f70de474        0.12%               128 MiB / 128 MiB   99.98%              79.5 MB / 470.9 MB   65.49 GB / 0 B      0

We are running on Amazon ECS-Optimized Amazon Linux AMI 2016.09.e

docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   7392c3b/1.12.6
 Built:        Fri Jan  6 22:16:21 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   7392c3b/1.12.6
 Built:        Fri Jan  6 22:16:21 2017
 OS/Arch:      linux/amd64

Please let me know what we should provide to help debug the issue. Thanks, Martin

Halama · Mar 14 '17 09:03

Does anyone associated with this issue have any data to show that it's logspout that's using this memory? I use logspout on multiple AWS deployments and am not seeing this.

I think we just need more info here

michaelshobbs · Mar 30 '17 23:03
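One way to gather that data (a sketch: the cgroup path assumes the stock cgroup v1 layout on these Amazon Linux hosts, and the container name is a placeholder): compare the logspout process RSS with the cgroup's own accounting, which separates anonymous memory from page cache.

# resolve the container ID and the logspout process PID on the host
CID=$(docker inspect --format '{{.Id}}' <logspout-container>)
PID=$(docker inspect --format '{{.State.Pid}}' <logspout-container>)
# process RSS as seen by the host (KiB)
ps -o pid,rss,comm -p "$PID"
# cgroup view: anonymous memory (rss) vs page cache (cache)
grep -E '^(rss|cache) ' /sys/fs/cgroup/memory/docker/"$CID"/memory.stat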

I haven't seen this... using logspout:master in AWS+rancher, via syslog+tcp, and it doesn't seem to take more than 11 MB or so of memory at any given time.

I'd ask what's different about those specific hosts compared to the ones that don't have that problem?

gaieges · Mar 30 '17 23:03

Hi, we are now running commit https://github.com/gliderlabs/logspout/commit/26966592a2872c61c741934c91759d1da90ab545 and it looks like garbage collection started working; the containers are stable and logging.

System: Amazon ECS-Optimized Amazon Linux AMI 2016.09.g

Docker version:

Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   7392c3b/1.12.6
 Built:        Tue Mar  7 20:34:04 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   7392c3b/1.12.6
 Built:        Tue Mar  7 20:34:04 2017
 OS/Arch:      linux/amd64

Docker command:

docker run --restart=always --memory 128m --memory-swap 128m \
             -v=/var/run/docker.sock:/var/run/docker.sock \
             -e SYSLOG_HOSTNAME=$(hostname) -e INACTIVITY_TIMEOUT=1m \
             147946154733.dkr.ecr.us-east-1.amazonaws.com/keboola/logspout:latest \
             syslog://logs.papertrailapp.com:XXX

Metrics collected by Datadog for two logspout containers: [screenshot of Datadog memory metrics]

Halama · Mar 31 '17 07:03
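If GC behaviour comes into question again, a low-effort check is to set GODEBUG=gctrace=1 (a standard Go runtime variable, not a logspout option) on the same command and watch the container's own output for gc trace lines:

docker run --restart=always --memory 128m --memory-swap 128m \
             -v=/var/run/docker.sock:/var/run/docker.sock \
             -e GODEBUG=gctrace=1 \
             -e SYSLOG_HOSTNAME=$(hostname) -e INACTIVITY_TIMEOUT=1m \
             147946154733.dkr.ecr.us-east-1.amazonaws.com/keboola/logspout:latest \
             syslog://logs.papertrailapp.com:XXX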

I have now noticed that the containers were restarted, so it wasn't garbage collection after all; the --restart flag kicked in: [screenshot of container restarts]

Halama · Mar 31 '17 07:03
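A quick way to tell restarts apart from GC on any given host (the container name is a placeholder; RestartCount and State are standard docker inspect fields):

docker inspect --format 'restarts={{.RestartCount}} started={{.State.StartedAt}}' <logspout-container>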

Just to add a +1 - we see this problem regularly on a random selection of our ECS cluster fleet. We can't find any commonality to the instances affected.

We run two logspout containers per node - one with a cloudwatch logs output plugin, one with logentries output. Both container types have the problem.

Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Tue Jan 10 20:20:01 2017
 OS/Arch:      linux/amd64

Version is # logspout v3.2-dev-custom by gliderlabs from 2016-11-14, with the memory limit set to 256 MB.

The version we had deployed prior to November had much more frequent OOMs, but the newer one still hits them.

james-masson · Apr 05 '17 14:04

Looks like we are experiencing this issue as well. We are running logspout as a global service within a Docker Swarm, and see memory use growing to maximum before the containers die and restart themselves. Version is gliderlabs/logspout:v3.2.4 and output is syslog://logstash

Here's the pattern in our Grafana metrics:

[Screenshot: Grafana memory usage graph, 2018-05-14 1:08 PM]

We run the same thing in a staging and a production environment and they both display the same behavior.

Wondering if anyone has any more information on this or tips on how to dig a little deeper for a root cause?

dviator · May 14 '18 18:05
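One place to start digging in a Swarm setup, as a sketch that assumes the service is literally named logspout: list where the tasks have died and how (exit code 137 in the task error usually means the kernel OOM-killed it), then watch memory growth on a suspect node.

# which nodes' tasks have failed, and with what error
docker service ps logspout --no-trunc --format '{{.Node}}\t{{.CurrentState}}\t{{.Error}}'
# then, on a suspect node: is the growth steady or bursty?
docker stats --no-stream $(docker ps -q --filter name=logspout)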

I don't regularly see this issue, but I've just found an instance where the logspout process was using 2 GB of memory.

Here are the latest contents of the console:

07/06/2022 11:37:28  2022/06/07 10:37:27 logstash: could not write:write udp y.y.y.y:56990->x.x.x.x:5007: write: connection refused
07/06/2022 11:38:07  2022/06/07 10:38:07 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:38:07  2022/06/07 10:38:07 # adapters: tcp tls udp logstash multiline raw
07/06/2022 11:38:07  2022/06/07 10:38:07 # options :
07/06/2022 11:38:07  2022/06/07 10:38:07 persist:/mnt/routes
07/06/2022 11:38:07  2022/06/07 10:38:07 # jobs    : pump routes http[routes,health,logs]:80
07/06/2022 11:38:07  2022/06/07 10:38:07 # routes  :
07/06/2022 11:38:07  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:38:07  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:38:22  2022/06/07 10:38:22 logstash: could not write:write udp y.y.y.y:37443->x.x.x.x:5007: write: connection refused
07/06/2022 11:40:03  2022/06/07 10:40:02 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:40:03  2022/06/07 10:40:02 # adapters: multiline raw syslog tcp tls udp logstash
07/06/2022 11:40:03  2022/06/07 10:40:02 # options :
07/06/2022 11:40:03  2022/06/07 10:40:02 persist:/mnt/routes
07/06/2022 11:40:03  2022/06/07 10:40:03 # jobs    : http[routes,health,logs]:80 pump routes
07/06/2022 11:40:03  2022/06/07 10:40:03 # routes  :
07/06/2022 11:40:03  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:40:03  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:40:29  2022/06/07 10:40:29 logstash: could not write:write udp y.y.y.y:44397->x.x.x.x:5007: write: connection refused
07/06/2022 11:40:50  2022/06/07 10:40:50 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:40:50  2022/06/07 10:40:50 # adapters: syslog tcp tls udp logstash multiline raw
07/06/2022 11:40:50  2022/06/07 10:40:50 # options :
07/06/2022 11:40:50  2022/06/07 10:40:50 persist:/mnt/routes
07/06/2022 11:40:50  2022/06/07 10:40:50 # jobs    : pump routes http[health,logs,routes]:80
07/06/2022 11:40:50  2022/06/07 10:40:50 # routes  :
07/06/2022 11:40:50  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:40:50  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:41:03  2022/06/07 10:41:03 logstash: could not write:write udp y.y.y.y:33760->x.x.x.x:5007: write: connection refused
07/06/2022 11:41:18  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:41:18  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:41:18  2022/06/07 10:41:17 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:41:18  2022/06/07 10:41:17 # adapters: logstash multiline raw syslog tcp tls udp
07/06/2022 11:41:18  2022/06/07 10:41:17 # options :
07/06/2022 11:41:18  2022/06/07 10:41:17 persist:/mnt/routes
07/06/2022 11:41:18  2022/06/07 10:41:17 # jobs    : http[health,logs,routes]:80 pump routes
07/06/2022 11:41:18  2022/06/07 10:41:17 # routes  :
07/06/2022 11:41:34  2022/06/07 10:41:33 logstash: could not write:write udp y.y.y.y:51667->x.x.x.x:5007: write: connection refused
07/06/2022 11:41:49  2022/06/07 10:41:49 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:41:49  2022/06/07 10:41:49 # adapters: tls udp logstash multiline raw syslog tcp
07/06/2022 11:41:49  2022/06/07 10:41:49 # options :
07/06/2022 11:41:49  2022/06/07 10:41:49 persist:/mnt/routes
07/06/2022 11:41:49  2022/06/07 10:41:49 # jobs    : routes http[health,logs,routes]:80 pump
07/06/2022 11:41:49  2022/06/07 10:41:49 # routes  :
07/06/2022 11:41:49  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:41:49  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:41:57  2022/06/07 10:41:57 logstash: could not write:write udp y.y.y.y:35555->x.x.x.x:5007: write: connection refused
07/06/2022 11:42:17  #   ADAPTER		ADDRESS				CONTAINERS	SOURCES	OPTIONS
07/06/2022 11:42:17  #   multiline+logstash	my.host.name.com:5007				map[]
07/06/2022 11:42:18  2022/06/07 10:42:17 # logspout v3.2.6-custom by gliderlabs
07/06/2022 11:42:18  2022/06/07 10:42:17 # adapters: logstash multiline raw syslog tcp tls udp
07/06/2022 11:42:18  2022/06/07 10:42:17 # options :
07/06/2022 11:42:18  2022/06/07 10:42:17 persist:/mnt/routes
07/06/2022 11:42:18  2022/06/07 10:42:17 # jobs    : routes http[health,logs,routes]:80 pump
07/06/2022 11:42:18  2022/06/07 10:42:17 # routes  :

mjaggard · Oct 29 '22 07:10
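Two quick checks suggested by that log, both sketches rather than anything confirmed in the thread: the repeated "connection refused" on udp/5007 means nothing is answering on the logstash host, and Docker records whether the 2 GB container has been OOM-killed.

# on the logstash host: is anything actually listening on udp/5007?
ss -lun | grep 5007
# on the logspout host (container name is a placeholder):
docker inspect --format 'oom_killed={{.State.OOMKilled}} restarts={{.RestartCount}}' <logspout-container>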