ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
Self-Hosted Version
24.4.0.dev
CPU Architecture
x86_64
Docker Version
26.0.0
Docker Compose Version
2.25.0
Steps to Reproduce
- Clone the project from GitHub: https://github.com/getsentry
- Run ./install.sh in the cloned folder
- Once the containers are up, check the clickhouse container logs
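For clarity, the steps above as a shell transcript (a sketch; the issue links only the GitHub org, so the exact repo name getsentry/self-hosted is an assumption):

```sh
# Hypothetical reproduction, assuming the repo is getsentry/self-hosted:
git clone https://github.com/getsentry/self-hosted.git
cd self-hosted
./install.sh                       # sets up the full Docker Compose stack
docker compose up -d               # start all containers
docker compose logs -f clickhouse  # watch for the Poco::Exception errors
```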
Expected Result
The clickhouse container should work without throwing any errors in the logs, and CPU consumption should be normal.
Actual Result
clickhouse-1 | 2024.03.25 15:38:16.970267 [ 46 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1 |
clickhouse-1 | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1 | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1 | 2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x101540cd in /usr/bin/clickhouse
clickhouse-1 | 3. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6fd5 in /usr/bin/clickhouse
clickhouse-1 | 4. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1 | 5. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1 | 6. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1 | 7. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1 | 8. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1 | 9. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1 | 10. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1 | (version 21.8.13.1.altinitystable (altinity build))
clickhouse-1 | 2024.03.25 15:38:17.081968 [ 513 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1 |
clickhouse-1 | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1 | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1 | 2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
clickhouse-1 | 3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1 | 4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1 | 5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1 | 6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1 | 7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1 | 8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1 | 9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1 | (version 21.8.13.1.altinitystable (altinity build))
clickhouse-1 | 2024.03.25 15:38:17.749096 [ 513 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, e.displayText() = Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
clickhouse-1 |
clickhouse-1 | 0. Poco::Net::SocketImpl::error(int, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x13c4ee8e in /usr/bin/clickhouse
clickhouse-1 | 1. Poco::Net::SocketImpl::peerAddress() @ 0x13c510d6 in /usr/bin/clickhouse
clickhouse-1 | 2. DB::HTTPServerRequest::HTTPServerRequest(std::__1::shared_ptr<DB::Context const>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x110e6f0b in /usr/bin/clickhouse
clickhouse-1 | 3. DB::HTTPServerConnection::run() @ 0x110e5d6e in /usr/bin/clickhouse
clickhouse-1 | 4. Poco::Net::TCPServerConnection::start() @ 0x13c5614f in /usr/bin/clickhouse
clickhouse-1 | 5. Poco::Net::TCPServerDispatcher::run() @ 0x13c57bda in /usr/bin/clickhouse
clickhouse-1 | 6. Poco::PooledThread::run() @ 0x13d89e59 in /usr/bin/clickhouse
clickhouse-1 | 7. Poco::ThreadImpl::runnableEntry(void*) @ 0x13d860ea in /usr/bin/clickhouse
clickhouse-1 | 8. start_thread @ 0x9609 in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so
clickhouse-1 | 9. clone @ 0x122293 in /usr/lib/x86_64-linux-gnu/libc-2.31.so
clickhouse-1 | (version 21.8.13.1.altinitystable (altinity build))
Event ID
No response
Seeing the same thing on 24.3.0. It is unclear when and how it started, but it was not there when we set up the instance initially, nor after upgrading to 24.3.0.
It is also unclear if it has any actual impact on functionality.
@csvan I suspect that clickhouse is causing spikes in CPU usage; the server's CPU usage has not been stable.
Looking at our internal graphs, I have not noticed any significant deviations in CPU usage.
Do you think having too many projects can cause CPU spikes? I have a total of 67 projects on Sentry, and 23 of them are actively used for monitoring.
I also came across this, but I additionally saw the following in the logs early on while booting:
clickhouse-1 | 2024.03.26 13:59:42.424894 [ 44 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, e.displayText() = DNS error: EAI: Address family for hostname not supported (version 21.8.13.1.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
which makes sense as there is no IPv6 in our docker.
I've added a <listen_host>0.0.0.0</listen_host> to clickhouse/config.xml and rebuilt things.
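For reference, a sketch of that change as shell commands, assuming ./clickhouse/config.xml is the override file mounted into the container (as in getsentry/self-hosted) and that its root element is <yandex>, as in ClickHouse 21.8-era configs:

```sh
# Insert an IPv4-only listen_host just before the closing root tag
# (the <yandex> root element is an assumption; newer configs use <clickhouse>):
sed -i 's|</yandex>|    <listen_host>0.0.0.0</listen_host>\n</yandex>|' clickhouse/config.xml
# Restart so ClickHouse picks up the bind-mounted config:
docker compose restart clickhouse
```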
I've now gotten another error from clickhouse (which has scrolled out of my terminal's history unfortunately) about not being able to bind to several ports, but some prodding with nsenter and ss tells me that it was able to bind, and tcpdumping confirms that requests are being made and processed.
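Roughly the kind of check described there, sketched out (the container name is taken from a later comment in this thread and may differ per setup):

```sh
# Find the container's init PID, then list TCP listeners in its network namespace:
PID=$(docker inspect -f '{{.State.Pid}}' sentry-self-hosted-clickhouse-1)
sudo nsenter -t "$PID" -n ss -ltn   # expect ClickHouse on 8123 (HTTP) and 9000 (native)
```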
Note that I'm not seeing any CPU spikes either.
(all of this on 24.3.0)
A duplicate of this error is at https://github.com/getsentry/self-hosted/issues/2876. Have you tried updating to a nightly build past the PR listed there?
@azaslavsky For now I have just rolled back to 24.1.0, and clickhouse stopped throwing the error, but I'm still seeing a lot of CPU spikes on the server.
@mahesh1b to answer this: No, having too many projects doesn't cause CPU spikes. I have 100+ projects with only 8 core CPU and the average CPU usage is around 19% - 24%
@jap There's IPv6 support in Docker though, but it's not the default yet: https://docs.docker.com/config/daemon/ipv6/
Wild guess, but try changing every rust-consumer entry in the docker-compose.yml file to just consumer, and see if that solves the problem.
@aldy505 For now I have rolled back to version 24.1.0 as it was production. Should I try this on version 23.4.0?
It's up to you; using consumer works, though, as it's not a deprecated command. But if you're not facing any issues using consumer instead of rust-consumer, we might need to reconsider some things about the usage of Rust consumers.
@aldy505 I have set up a new Sentry server with version 23.4.0; I will try it and let you know.
I am a bit confused whether changing to consumer will resolve the clickhouse error or the CPU usage.
Thanks.
Just upgraded from 23.9.1 to 24.3.0 and am seeing this connection error. Also events are not being processed by the instance - it seems very broken. I followed the instructions in https://github.com/getsentry/self-hosted/issues/2876#issuecomment-2018114050 to stop using the rust-consumer and add the billing worker and it seems to have fixed the issues for now.
Replacing rust-consumer with consumer in the docker-compose.yml file resolved the errors; I no longer see the error in the clickhouse container.
But I am still not sure why the CPU usage is so unstable. I am using a t3a.2xlarge instance.
I wouldn't say your CPU usage curve looks unusual tbh. There is a lot going on in Sentry, and a straight line is not to be expected.
I understand @csvan, but right now I am using a t3a.2xlarge instance, which I feel is more than enough to run things smoothly. We had an old Sentry server on version 23.2.0 with 4 vCPUs and 16 GB memory, and we didn't have any CPU issues with it, so I am a bit doubtful. @csvan do you think any feature in Sentry might be causing it?
Thank you everyone for all the help, really appreciate all the comments.
We also faced the main error in this issue and resolved it by going back to the non-rust consumers and switching to nightly. Without the switch to nightly, Clickhouse still complained.
I was able to solve the problem by switching to the non-rust consumer. I am using version 24.3.0.
Transferring this bug to the snuba team, since it seems like a rust consumer issue.
That's crazy that this issue is still present. I've just upgraded Sentry from 24.4.1 and again I had to replace rust-consumer with consumer. If this change helps, why do you still use rust-consumer in the docker-compose file?
Same issue here with Sentry version 2.4.5. The workaround works fine, but why is it not fixed in the code repository, so that this manual change isn't needed?
btw for everyone else experiencing the Poco::Exception issue now, here is how to fix it (a non-interactive alternative is sketched below):
- Run at the root folder: vim docker-compose.yml
- Then run this vim global replace command (you can add a c at the end if you want to confirm each change): :%s/rust-consumer/consumer/g
- Save the file and then run: docker compose up -d
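For anyone who'd rather not do this in vim, a sketch of a non-interactive equivalent, assuming it is run from the self-hosted root folder where docker-compose.yml lives:

```sh
# Same replacement as the vim command above, done with sed
# (GNU sed syntax; on macOS/BSD sed use: sed -i '' 's/.../.../g'):
sed -i 's/rust-consumer/consumer/g' docker-compose.yml
# Recreate the affected services:
docker compose up -d
```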
Hello @lynnagara, sorry for the ping. Do you have any timeline regarding this issue?
In our production environment, which is configured and scaled a bit differently from self-hosted, we get far better throughput from Rust consumers than from equivalent Python consumers (as would be expected). There's no timeline, but we expect at some point to deprecate all Python consumers from Snuba.
Can you or other people who have responded to this issue make it more clear what the problem is?
The original message just contains some error logs from the clickhouse container in a self-hosted environment. Taken alone, I wouldn't assume those are anything more than (recoverable) transient networking issues. That container is now at least one major version behind the lowest major version we support (22.8, soon to move to 23.3).
I'd like to close this issue out, or narrow down the problem (if CPU usage is too high, then on which containers?)
EDIT: I do still see the log messages, just at a way lower frequency than before. They're still annoying, but at least they don't destroy the logs or fill up my disk anymore.
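For anyone reporting CPU numbers in this thread, a quick sketch of how to see which containers are actually burning CPU, using plain docker stats:

```sh
# Snapshot CPU and memory per container, highest CPU first
# (\t in the format string is expanded to a tab by the docker CLI):
docker stats --no-stream --format '{{.CPUPerc}}\t{{.MemUsage}}\t{{.Name}}' \
  | sort -nr | head -20
```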
@onewland For what it's worth, I'm no longer seeing the log messages with self-hosted version 24.5.1 and the Snuba rust consumer.
(I also don't see any abnormal CPU usage, but I don't believe I ever have.)
In case it's interesting, I'm running this fork of the self-hosted project, which adds a few environment variables and runs against an external Postgres instance: https://github.com/folio-as/sentry-self-hosted/tree/24.5.1-folio-rust-consumer-2. I don't think my fork affects this issue at all.
I did recently make one change to my setup, though, which I think may be related: Some containers (I can't remember which, unfortunately) were failing to start due to low memory, so I "upgraded" my VM from the recommended 16 GB to 32 GB. (This also added 2 vCPUs).
With 32 GB of memory available so it managed to start properly, the full Docker Compose stack now runs comfortably at around 14 GB of resident memory on my Debian 11 (bullseye) VM.
This makes it seem to me like the log errors from Clickhouse are, in fact, exposing a real issue – and that 16 GB of memory is just not enough to start the full Compose stack anymore.
It might be helpful if someone else in this thread checked if any of their containers are failing to start, so we could narrow down the root cause of the issue. @lcsvcn, for example, or @christopherowen?
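If anyone wants to run that check, a minimal sketch:

```sh
# List all containers in the compose project, including stopped ones:
docker compose ps -a
# Or across all of Docker, only the ones that died or keep restarting:
docker ps -a --filter status=exited --filter status=restarting
```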
I run 24.5.1 on an 8-core 32 GB VM and am still being absolutely spammed by these logs, so I am not sure the VM size is related.
I see. Well, it was worth a shot, thanks!
Any update on that? The same problems after upgrading Sentry to 24.6.0.
Replacing rust-consumer with consumer remains the current workaround. That's sad tbh.
I just upgraded my self-hosted stack to 24.6.0, and I'm still seeing the error messages a whole bunch 😞
I had the same; changing rust-consumer to consumer in docker-compose.yml helped.
Same issue here. https://github.com/getsentry/snuba/issues/5707#issuecomment-2145031588 fixed it.
The Clickhouse log was full of:
2024.06.13 15:27:25.034661 [ 18085 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
0. Poco::Net::SocketImpl::error(int, String const&) @ 0x0000000015b3dbf2 in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x0000000015b40376 in /usr/bin/clickhouse
2. DB::HTTPServerRequest::HTTPServerRequest(std::shared_ptr<DB::IHTTPContext>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x0000000013154417 in /usr/bin/clickhouse
3. DB::HTTPServerConnection::run() @ 0x0000000013152ba4 in /usr/bin/clickhouse
4. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
5. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
6. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
7. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
8. ? @ 0x00007f3b5b13f609 in ?
9. ? @ 0x00007f3b5b064353 in ?
(version 23.8.11.29.altinitystable (altinity build))
2024.06.13 15:27:25.491262 [ 18085 ] {} <Error> ServerErrorHandler: Poco::Exception. Code: 1000, e.code() = 107, Net Exception: Socket is not connected, Stack trace (when copying this message, always include the lines below):
0. Poco::Net::SocketImpl::error(int, String const&) @ 0x0000000015b3dbf2 in /usr/bin/clickhouse
1. Poco::Net::SocketImpl::peerAddress() @ 0x0000000015b40376 in /usr/bin/clickhouse
2. DB::ReadBufferFromPocoSocket::ReadBufferFromPocoSocket(Poco::Net::Socket&, unsigned long) @ 0x000000000c896cc6 in /usr/bin/clickhouse
3. DB::HTTPServerRequest::HTTPServerRequest(std::shared_ptr<DB::IHTTPContext>, DB::HTTPServerResponse&, Poco::Net::HTTPServerSession&) @ 0x000000001315451b in /usr/bin/clickhouse
4. DB::HTTPServerConnection::run() @ 0x0000000013152ba4 in /usr/bin/clickhouse
5. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
6. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
7. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
8. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
9. ? @ 0x00007f3b5b13f609 in ?
10. ? @ 0x00007f3b5b064353 in ?
(version 23.8.11.29.altinitystable (altinity build))
root@sentry01(dc1.prd):~/getsentry/self-hosted (git:(24.6.0))# docker exec -it sentry-self-hosted-clickhouse-1 /bin/bash
root@7271d67d55dc:/# ls -alh /var/log/clickhouse-server/
total 423M
drwxrwxrwx 2 clickhouse clickhouse 23 Jun 13 04:35 .
drwxr-xr-x 5 root root 12 Apr 30 03:07 ..
-rw-r----- 1 clickhouse clickhouse 996M Jun 22 14:56 clickhouse-server.err.log
-rw-r----- 1 clickhouse clickhouse 20M Jun 13 04:35 clickhouse-server.err.log.0.gz
-rw-r----- 1 clickhouse clickhouse 20M Jun 3 06:20 clickhouse-server.err.log.1.gz
-rw-r----- 1 clickhouse clickhouse 19M May 23 15:06 clickhouse-server.err.log.2.gz
-rw-r----- 1 clickhouse clickhouse 19M May 22 03:53 clickhouse-server.err.log.3.gz
-rw-r----- 1 clickhouse clickhouse 19M May 20 21:42 clickhouse-server.err.log.4.gz
-rw-r----- 1 clickhouse clickhouse 19M May 19 15:43 clickhouse-server.err.log.5.gz
-rw-r----- 1 clickhouse clickhouse 19M May 18 10:50 clickhouse-server.err.log.6.gz
-rw-r----- 1 clickhouse clickhouse 19M May 17 05:01 clickhouse-server.err.log.7.gz
-rw-r----- 1 clickhouse clickhouse 19M May 15 23:16 clickhouse-server.err.log.8.gz
-rw-r----- 1 clickhouse clickhouse 996M Jun 22 14:56 clickhouse-server.log
-rw-r----- 1 clickhouse clickhouse 20M Jun 13 04:35 clickhouse-server.log.0.gz
-rw-r----- 1 clickhouse clickhouse 20M Jun 3 06:20 clickhouse-server.log.1.gz
-rw-r----- 1 clickhouse clickhouse 19M May 23 15:06 clickhouse-server.log.2.gz
-rw-r----- 1 clickhouse clickhouse 19M May 22 03:53 clickhouse-server.log.3.gz
-rw-r----- 1 clickhouse clickhouse 19M May 20 21:42 clickhouse-server.log.4.gz
-rw-r----- 1 clickhouse clickhouse 19M May 19 15:43 clickhouse-server.log.5.gz
-rw-r----- 1 clickhouse clickhouse 19M May 18 10:50 clickhouse-server.log.6.gz
-rw-r----- 1 clickhouse clickhouse 19M May 17 05:01 clickhouse-server.log.7.gz
-rw-r----- 1 clickhouse clickhouse 19M May 15 23:16 clickhouse-server.log.8.gz
-rw-r----- 1 clickhouse clickhouse 19M May 14 17:27 clickhouse-server.log.9.gz
root@7271d67d55dc:/#
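Given the two ~1 GB uncompressed log files shown above, a stopgap to reclaim disk space until the error spam is fixed: truncate the active logs in place rather than deleting them, since the server keeps the file handles open (container name as in the session above; assumes coreutils truncate is present in the image):

```sh
# Empty the active log files without restarting ClickHouse:
docker exec sentry-self-hosted-clickhouse-1 \
  truncate -s 0 /var/log/clickhouse-server/clickhouse-server.err.log
docker exec sentry-self-hosted-clickhouse-1 \
  truncate -s 0 /var/log/clickhouse-server/clickhouse-server.log
```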