Lots of "INFO reaped unknown pid " in log
I just upgraded my docker-gitlab to version 12.5.2, and I found a lot of `INFO reaped unknown pid <pid-number>` messages in the GitLab log.
2019-12-25 03:18:24,637 INFO reaped unknown pid 44831
2019-12-25 03:18:27,127 INFO reaped unknown pid 44863
2019-12-25 03:18:29,721 INFO reaped unknown pid 44876
2019-12-25 03:18:30,251 INFO reaped unknown pid 44889
2019-12-25 03:18:32,941 INFO reaped unknown pid 44902
...
The messages show up when running `docker logs -f <docker-gitlab-container-id>`.
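If the noise itself is the main concern, the lines can be filtered out while following the logs (a workaround sketch, not a fix; `<docker-gitlab-container-id>` is the same placeholder as above):

```sh
# Follow the container logs but hide the supervisord "reaped unknown pid" noise.
# 2>&1 merges stderr into stdout so grep sees both streams.
docker logs -f <docker-gitlab-container-id> 2>&1 | grep -v 'reaped unknown pid'
```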
Does anyone know if this is a normal phenomenon or not?
Same story for 12.5.5
I had them before; now, on a custom 12.7.0 image, they're gone.
ARG VERSION=12.7.0
ENV GITLAB_VERSION=${VERSION} \
RUBY_VERSION=2.6 \
GOLANG_VERSION=1.12.14 \
GITLAB_SHELL_VERSION=11.0.0 \
GITLAB_WORKHORSE_VERSION=8.19.0 \
GITLAB_PAGES_VERSION=1.14.0 \
GITALY_SERVER_VERSION=1.83.0 \
GITLAB_USER="git" \
GITLAB_HOME="/home/git" \
GITLAB_LOG_DIR="/var/log/gitlab" \
GITLAB_CACHE_DIR="/etc/docker-gitlab" \
RAILS_ENV=production \
NODE_ENV=production
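For reference, a sketch of how such a custom image would be built (assuming the modified Dockerfile with the `ARG VERSION` shown above sits in the current directory; the tag name is just an example):

```sh
# Build the custom image, pinning the GitLab version through the build argument.
docker build --build-arg VERSION=12.7.0 -t local/gitlab:12.7.0 .
```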
I upgraded to 12.7.6 or 12.7.7, and there are a lot of `INFO reaped unknown pid xxx` messages in the logs; GitLab constantly stops responding and nginx returns 502 or 504.
Every time I try to open a project home page, `INFO reaped unknown pid xxx` appears and the page request takes a very long time to complete; I found `Completed 200 OK in 40171ms (Views: 40109.4ms | ActiveRecord: 23.1ms | Elasticsearch: 0.0ms)` in production.log.
After upgrading to 12.8.8, the problem disappears.
@windtail no, it's not. I upgraded from 12.5.5 to 12.8.8 like 4 days ago and I still have those messages:
gitlab_1 | 2020-04-05 01:06:22,491 INFO reaped unknown pid 4158
gitlab_1 | 2020-04-05 01:08:57,248 INFO reaped unknown pid 4181
gitlab_1 | 2020-04-05 01:11:29,177 INFO reaped unknown pid 5402
gitlab_1 | 2020-04-05 01:14:05,246 INFO reaped unknown pid 7071
gitlab_1 | 2020-04-05 01:16:43,281 INFO reaped unknown pid 7108
I checked the image (pulled it again), so I have the latest 12.8.8 tag image. It must be something else here.
@Scukerman I just found out the problem is still there, but opening a project home page is much faster than on 12.7.7.
Same issue for 12.9.2
This problem seems to be related to SSH. Since I am facing this, I am unable to pull or push via the SSH protocol. First seen with the jump from 11.11.0 to 12.6.4 and still happening on 12.9.5.
In my case it turned out to be the OneDrive problem described here.
- I generated a completely new 4096-bit RSA key.
- I changed the location where I store the keys (public and private).
- I changed my git client and PuTTY to use the new key.
After this, everything worked.
I've just upgraded (in steps) from 12.9 to 13.5 and I also have a lot of these, every couple of minutes... Running PostgreSQL 11 and Redis 6, the problem is still there :-/
~~The other problem I have is that pushes are not updating the merge requests; some asynchronous jobs seem not to work well.~~ I had updated Postgres but not Redis, which is required to be >4.
In my PostgreSQL logs, I found this, which may give a hint:
`duplicate key value violates unique constraint "namespace_aggregation_schedules_pkey"`
-> Well, this happened only once; probably not a game changer.
Looks like sshd keeps restarting.
Attach a shell to your container, then:
cd /var/log/gitlab/supervisor && tail -f sshd.log
You will see "Invalid user" entries from attempts to log in through SSH.
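The same can be done from the host in one step (a sketch; `<docker-gitlab-container-id>` is a placeholder for your container name or ID):

```sh
# Follow the supervisord sshd log directly from the host.
docker exec -it <docker-gitlab-container-id> tail -f /var/log/gitlab/supervisor/sshd.log
```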
image: sameersbn/gitlab:12.7.0
Same issue.
We also have the same issue with the image built from source. What I found is that the processes killed by supervisord look like the following:
root 8873 1 0 Aug23 ? 06:53:57 sshd: /usr/sbin/sshd -D -E /var/log/gitlab/supervisor/sshd.log [listener] 2 of 30-100 startups
root 2094606 8873 1 13:16 ? 00:00:00 sshd: git [priv]
git 2094627 2094606 0 13:16 ? 00:00:00 sshd: git@notty
git 2094697 2094627 0 13:16 ? 00:00:00 /home/git/gitlab-shell/bin/gitlab-shell key-27
2022-10-28 13:31:37,181 INFO reaped unknown pid 2094697
But no one is reporting issues related to accessing git via SSH...
It just fills the docker logs.
I managed to reproduce it with the following steps:
- Get the latest docker-compose.yml
- `export GITLAB_SECRETS_DB_KEY_BASE=`
- Start with `docker-compose up`
- Connect to localhost:10080
- Change the root password
- Log in as root with the new password
- Add your SSH public key
- Clone a repository using the SSH URL (you can use the default empty gitlab-instance-xxxxx/Monitoring.git)
- In the repository directory, run some other git command (git fetch); see the sketch after this list
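A sketch of the git-over-SSH part of those steps (this assumes the compose file publishes the container's SSH port 22 on host port 10022, as in the project's sample docker-compose.yml, and that the key added in the UI is your default SSH key):

```sh
# Clone over SSH through the published port, then run any further git command;
# each SSH-based command produces one "reaped unknown pid" line in docker logs.
git clone ssh://git@localhost:10022/gitlab-instance-xxxxx/Monitoring.git
cd Monitoring
git fetch
```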
Result: the GitLab docker container logs the following:
gitlab_1 | 2022-11-04 15:08:30,695 INFO reaped unknown pid 1365
gitlab_1 | 2022-11-04 15:23:45,595 INFO reaped unknown pid 1425
gitlab_1 | 2022-11-04 15:23:50,113 INFO reaped unknown pid 1442
and here is the /var/log/gitlab/supervisor/sshd.log
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
Received signal 15; terminating.
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
Connection from 172.23.0.1 port 57880 on 172.23.0.4 port 22 rdomain ""
Failed publickey for git from 172.23.0.1 port 57880 ssh2: RSA SHA256:5D5xRi6zQJhiYIsANqPvrxFU011FnTlIBCzPsLErXtQ
Failed publickey for git from 172.23.0.1 port 57880 ssh2: ED25519 SHA256:wmziM4nVwqsZ7W+xgxUzphhDqAV2iHA1jcPxRX5JMcc
Accepted key RSA SHA256:oEm0537YkIQqAihSWGqYfe24n7WJgFYExb9vBEMFF9c found at /home/git/.ssh/authorized_keys:1
Postponed publickey for git from 172.23.0.1 port 57880 ssh2 [preauth]
Accepted key RSA SHA256:oEm0537YkIQqAihSWGqYfe24n7WJgFYExb9vBEMFF9c found at /home/git/.ssh/authorized_keys:1
Accepted publickey for git from 172.23.0.1 port 57880 ssh2: RSA SHA256:oEm0537YkIQqAihSWGqYfe24n7WJgFYExb9vBEMFF9c
User child is on pid 1365
Starting session: forced-command (key-option) '/home/git/gitlab-shell/bin/gitlab-shell key-1' for git from 172.23.0.1 port 57880 id 0
Close session: user git from 172.23.0.1 port 57880 id 0
Received disconnect from 172.23.0.1 port 57880:11: disconnected by user
Disconnected from user git 172.23.0.1 port 57880
Connection from 172.23.0.1 port 49892 on 172.23.0.4 port 22 rdomain ""
On a production instance, this generates a lot of such INFO messages and floods the logs.
Note: every git command over SSH (git fetch, git pull, etc.) generates such a log entry.
According to this comment (https://github.com/Supervisor/supervisor/issues/840#issuecomment-256521004), supervisord did not kill the child process; it just noticed that it vanished...
So we would need a directive to tell supervisord not to log such INFO messages... Is changing the log level a solution, then?
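Changing the log level might indeed hide these lines. A minimal sketch of that idea, assuming the image keeps its supervisord configuration at the stock Debian path /etc/supervisor/supervisord.conf and already has a loglevel line there (neither of which I have verified for this image):

```dockerfile
FROM sameersbn/gitlab:15.10.1
# Raise supervisord's log level from the default "info" to "warn" so the
# "reaped unknown pid" INFO lines are no longer written. The grep makes the
# build fail loudly if no loglevel line was found to rewrite.
RUN sed -ri 's/^(\s*)loglevel\s*=.*/\1loglevel=warn/' /etc/supervisor/supervisord.conf \
 && grep -q 'loglevel=warn' /etc/supervisor/supervisord.conf
```

This only silences the messages; per the supervisor issue linked above, the reaping itself appears harmless.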
same issue appeared after upgrading to 15.9.3
> same issue appeared after upgrading to 15.9.3

I also have these logs.
I tried to fix the issue with reference to docker-library/rabbitmq#453 and tini; see my changes in vizee/docker-gitlab@2bd005e. It looks like it's working well, but I'm not sure about potential side effects.
A simple way to try tini:
FROM sameersbn/gitlab:15.10.1
RUN set -ex \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y tini
ENTRYPOINT ["/usr/bin/tini", "--", "/sbin/entrypoint.sh"]
CMD ["app:start"]
I have upgraded to 13.12.4 and tried @vizee's solution; I'm still getting the same logs when I try to pull or push:
INFO reaped unknown pid
and
Permission denied (publickey)
Any solution for this?
The same problem here, and it leads to an HTTP 500 page. A downgrade is not possible in a swarm (currently running v16.5.1).