HTTPS sink SSL Alert number 80

Open gsalisbury opened this issue 2 years ago • 16 comments

A note for the community

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Problem

I have Vector configured on CentOS 7 hosts sending logs to a central Vector instance via an HTTP sink/source pair using TLS with client certificates.

The sending Vector instances regularly log errors like this:

Apr 14 23:25:33 myhost vector[44196]: 2022-04-14T13:25:33.257534Z  WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=4}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
From https://datatracker.ietf.org/doc/html/rfc5246#appendix-A.3, alert number 80 is an internal error.

Both sender and receiver are running the same Vector version:
vector 0.20.0 (x86_64-unknown-linux-gnu 2a706a3 2022-02-11)
Both sender and receiver hosts are CentOS 7 with OpenSSL 1.0.2k-fips 26 Jan 2017.


Configuration

```text
---sender---
type = "http"
inputs = [
  "journald",
]
compression = "gzip"
encoding = "ndjson"
tls.ca_file = "/etc/ssl/certs/ca.pem"
tls.crt_file = "/etc/ssl/certs/myhost.pem"
tls.enabled = true
tls.key_file = "/etc/ssl/private_keys/myhost.pem"
tls.verify_certificate = true
uri = "https://receiver:501"

---receiver---
type = "http"
address = "10.0.200.15:501"
framing.method = "newline_delimited"
tls.ca_file = "/etc/ssl/certs/ca.pem"
tls.crt_file = "/etc/ssl/certs/receiver.pem"
tls.enabled = true
tls.key_file = "/etc/ssl/private_keys/receiver.pem"
tls.verify_certificate = true
```

Version

vector 0.20.0 (x86_64-unknown-linux-gnu 2a706a3 2022-02-11)

Debug Output

2022-04-14T14:25:32.600128Z TRACE vector: Beep.
2022-04-14T14:25:33.254121Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}: vector::sinks::util::sink: Service ready; Sending batch.
2022-04-14T14:25:33.254187Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}: vector::sinks::util::sink: Submitting service request. in_flight_requests=1
2022-04-14T14:25:33.255217Z DEBUG sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}:http: vector::internal_events::http_client: Sending HTTP request. uri=https://receiver.example.com:501/ method=POST version=HTTP/1.1 headers={"content-type": "application/x-ndjson", "content-encoding": "gzip", "user-agent": "Vector/0.20.0 (x86_64-unknown-linux-gnu 2a706a3 2022-02-11)", "accept-encoding": "identity"} body=[1728 bytes]
2022-04-14T14:25:33.256303Z DEBUG sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}:http: vector::internal_events::http_client: HTTP error. error=error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
2022-04-14T14:25:33.256367Z  WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
2022-04-14T14:25:33.256385Z DEBUG sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector::sinks::util::retries: Retrying request. delay_ms=1000
2022-04-14T14:25:33.599960Z TRACE vector: Beep.
2022-04-14T14:25:34.258814Z DEBUG sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}:http: vector::internal_events::http_client: Sending HTTP request. uri=https://receiver.example.com:501/ method=POST version=HTTP/1.1 headers={"content-type": "application/x-ndjson", "content-encoding": "gzip", "user-agent": "Vector/0.20.0 (x86_64-unknown-linux-gnu 2a706a3 2022-02-11)", "accept-encoding": "identity"} body=[1728 bytes]
2022-04-14T14:25:34.289470Z DEBUG sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}:http: vector::internal_events::http_client: HTTP response. status=200 OK version=HTTP/1.1 headers={"content-length": "0", "date": "Thu, 14 Apr 2022 14:25:33 GMT"} body=[empty]
2022-04-14T14:25:34.289546Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector::internal_events::common: Bytes sent. byte_size=1728 protocol=https endpoint=https://receiver.example.com:501/
2022-04-14T14:25:34.289573Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector::internal_events::adaptive_concurrency: Changed concurrency. concurrency=1 reached_limit=true had_back_pressure=false current_rtt=Some(1.035237648s) past_rtt=231.674877ms past_rtt_deviation=246.289787ms
2022-04-14T14:25:34.289604Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector::sinks::util::sink: Response successful. response=Response { status: 200, version: HTTP/1.1, headers: {"content-length": "0", "date": "Thu, 14 Apr 2022 14:25:33 GMT"}, body: b"" }
2022-04-14T14:25:34.289623Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=2}: vector_common::internal_event::events_sent: Events sent. count=16 byte_size=43208
2022-04-14T14:25:34.289654Z TRACE sink{component_kind="sink" component_id=http component_type=http component_name=http}: vector::sinks::util::sink: Acking events. acking_num=16
2022-04-14T14:25:34.599514Z TRACE vector: Beep.
2022-04-14T14:25:35.599929Z TRACE vector: Beep.

Example Data

Apr 14 23:25:33 myhost vector[44196]: 2022-04-14T13:25:33.257534Z WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=4}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
Apr 14 23:35:29 myhost vector[44196]: 2022-04-14T13:35:29.419515Z WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=5}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
Apr 14 23:40:29 myhost vector[44196]: 2022-04-14T13:40:29.429999Z WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=7}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
Apr 14 23:45:29 myhost vector[44196]: 2022-04-14T13:45:29.587987Z WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=8}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80
Apr 14 23:53:02 myhost vector[44196]: 2022-04-14T13:53:02.398311Z WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=9}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1544:SSL alert number 80

Additional Context

The receiver's certificate has multiple Subject Alternative Names, one of which is the hostname used by the senders. The certificate's Subject is CN=other.example.com, while the sender Vector agents use receiver.example.com as the HTTP sink hostname.
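
For reference, the Subject and SAN list can be checked with something like this (path taken from the receiver config above):

```sh
# Print the Subject and the Subject Alternative Names of the receiver's certificate.
openssl x509 -in /etc/ssl/certs/receiver.pem -noout -subject
openssl x509 -in /etc/ssl/certs/receiver.pem -noout -text | grep -A1 'Subject Alternative Name'
```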

References

No response

gsalisbury avatar Apr 14 '22 14:04 gsalisbury

Hi @gsalisbury !

Thanks for this report.

To help narrow down the issue, I'm curious if you are able to use curl to send data successfully to the listening Vector instance using the same certificate and CA.
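
Something along these lines is what I have in mind (hostname and file paths taken from your sender config; the payload is just a placeholder):

```sh
# Send a single NDJSON event to the receiving Vector over HTTPS using the same
# client certificate, key, and CA as the Vector http sink (placeholder payload).
curl -v https://receiver:501 \
  --cacert /etc/ssl/certs/ca.pem \
  --cert /etc/ssl/certs/myhost.pem \
  --key /etc/ssl/private_keys/myhost.pem \
  -H 'Content-Type: application/x-ndjson' \
  --data-binary '{"message":"tls test"}'
```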

Do you have a load balancer in the middle? If so, could you provide details about its configuration?

jszwedko avatar Apr 14 '22 16:04 jszwedko

Hi @jszwedko, yep, sorry, I should have mentioned: this is happening on directly connected Vector instances, with no load balancer.

curl can send data successfully using the same client certificate and CA.

Note that Vector does also connect successfully when it retries, so data is still being delivered, but there are many times when it errors on the first attempt. Other times it connects and transmits successfully on the first attempt.

gsalisbury avatar Apr 16 '22 11:04 gsalisbury

The code "SSL alert number 80" references an internal error not related to the content of the connection data or certificates. This is possibly happening within openssl and not in Vector. Does the receiving Vector instance report any errors or warnings when this happens?

bruceg avatar Apr 18 '22 16:04 bruceg

There are no errors or warnings on the receiving Vector instance when this happens. I have also run the receiving Vector with the -vvv option and nothing is logged.

gsalisbury avatar Apr 18 '22 22:04 gsalisbury

Here is a packet capture from the receiver host, listening on 10.0.200.15:501.

From 13:50:02.470557 to 13:50:02.471503 is the connection with the SSL Alert number 80 error logged on the vector client.

From 13:50:03.473587 is the successful retry; I've only included the handshake and the first data packets.

13:50:02.470557 IP 10.0.200.14.6303 > 10.0.200.15.501: Flags [S], seq 1064564864, win 29200, options [mss 1460,sackOK,TS val 222228123 ecr 0,nop,wscale 7], length 0
13:50:02.470596 IP 10.0.200.15.501 > 10.0.200.14.6303: Flags [S.], seq 45064217, ack 1064564865, win 28960, options [mss 1460,sackOK,TS val 222029730 ecr 222228123,nop,wscale 10], length 0
13:50:02.470797 IP 10.0.200.14.6303 > 10.0.200.15.501: Flags [.], seq 1, ack 1, win 229, options [nop,nop,TS val 222228123 ecr 222029730], length 0
13:50:02.471103 IP 10.0.200.14.6303 > 10.0.200.15.501: Flags [P.], seq 1:1933, ack 1, win 229, options [nop,nop,TS val 222228123 ecr 222029730], length 1932
13:50:02.471127 IP 10.0.200.15.501 > 10.0.200.14.6303: Flags [.], seq 1, ack 1933, win 33, options [nop,nop,TS val 222029730 ecr 222228123], length 0
13:50:02.471268 IP 10.0.200.15.501 > 10.0.200.14.6303: Flags [P.], seq 1:8, ack 1933, win 33, options [nop,nop,TS val 222029731 ecr 222228123], length 7
13:50:02.471313 IP 10.0.200.15.501 > 10.0.200.14.6303: Flags [F.], seq 8, ack 1933, win 33, options [nop,nop,TS val 222029731 ecr 222228123], length 0
13:50:02.471388 IP 10.0.200.14.6303 > 10.0.200.15.501: Flags [.], seq 1933, ack 8, win 229, options [nop,nop,TS val 222228123 ecr 222029731], length 0
13:50:02.471491 IP 10.0.200.14.6303 > 10.0.200.15.501: Flags [F.], seq 1933, ack 9, win 229, options [nop,nop,TS val 222228124 ecr 222029731], length 0
13:50:02.471503 IP 10.0.200.15.501 > 10.0.200.14.6303: Flags [.], seq 9, ack 1934, win 33, options [nop,nop,TS val 222029731 ecr 222228124], length 0
13:50:03.473587 IP 10.0.200.14.6307 > 10.0.200.15.501: Flags [S], seq 1900168771, win 29200, options [mss 1460,sackOK,TS val 222229126 ecr 0,nop,wscale 7], length 0
13:50:03.473610 IP 10.0.200.15.501 > 10.0.200.14.6307: Flags [S.], seq 744277305, ack 1900168772, win 28960, options [mss 1460,sackOK,TS val 222030733 ecr 222229126,nop,wscale 10], length 0
13:50:03.473741 IP 10.0.200.14.6307 > 10.0.200.15.501: Flags [.], seq 1, ack 1, win 229, options [nop,nop,TS val 222229126 ecr 222030733], length 0
13:50:03.474520 IP 10.0.200.14.6307 > 10.0.200.15.501: Flags [P.], seq 1:518, ack 1, win 229, options [nop,nop,TS val 222229127 ecr 222030733], length 517
13:50:03.474532 IP 10.0.200.15.501 > 10.0.200.14.6307: Flags [.], seq 1, ack 518, win 30, options [nop,nop,TS val 222030734 ecr 222229127], length 0
13:50:03.483075 IP 10.0.200.15.501 > 10.0.200.14.6307: Flags [P.], seq 1:2218, ack 518, win 30, options [nop,nop,TS val 222030742 ecr 222229127], length 2217
13:50:03.483295 IP 10.0.200.14.6307 > 10.0.200.15.501: Flags [.], seq 518, ack 2218, win 263, options [nop,nop,TS val 222229135 ecr 222030742], length 0
13:50:03.496576 IP 10.0.200.14.6307 > 10.0.200.15.501: Flags [P.], seq 518:2600, ack 2218, win 263, options [nop,nop,TS val 222229149 ecr 222030742], length 2082
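
For reference, a capture like the above can be collected on the receiver with something along these lines (the interface name is just an example):

```sh
# Capture traffic on the receiver's listening port; -nn disables name and port
# resolution so the output matches the trace above.
tcpdump -i eth0 -nn 'tcp port 501 and host 10.0.200.14'
```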

gsalisbury avatar Apr 19 '22 04:04 gsalisbury

This is still a problem on version 0.28.1.

It would be great if this could be looked into further. I am currently working around it by using stunnel to terminate the TLS connections from downstream Vector log shippers on our Vector aggregator hosts.

gsalisbury avatar Mar 21 '23 02:03 gsalisbury

Hi @gsalisbury, apologies for the delayed response; this issue fell through the cracks. Is this still happening if you use the latest Vector version?

pront avatar Aug 18 '23 17:08 pront

Hi @pront,

It is still happening with a recent (near-latest) Vector version:

[email protected] ~ # vector --version
vector 0.31.0 (x86_64-unknown-linux-gnu 0f13b22 2023-07-06 13:52:34.591204470)

[email protected] ~ # cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

[email protected] ~ # openssl version
OpenSSL 1.0.2k-fips  26 Jan 2017

[email protected] ~ # journalctl --no-pager -u vector --since='24 hours ago' | grep 'tlsv1 alert' | wc -l
64

[email protected] ~ # journalctl --no-pager -u vector --since='24 hours ago' | grep 'tlsv1 alert' | tail -n1
Aug 21 22:35:31 host.example.com vector[28792]: 2023-08-21T12:35:31.954857Z  WARN sink{component_kind="sink" component_id=http component_type=http component_name=http}:request{request_id=16460}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1563:SSL alert number 80 internal_log_rate_limit=true

[email protected] ~ # cat /etc/vector/sinks/http.toml
type = "http"
inputs = [
  "journald",
]
compression = "gzip"
encoding.codec = "json"
framing.method = "newline_delimited"
method = "post"
tls.ca_file = "/etc/ssl/certs/ca.pem"
tls.crt_file = "/etc/ssl/certs/cert.pem"
tls.key_file = "/etc/ssl/private_keys/key.pem"
tls.verify_certificate = true
uri = "https://logs.example.com:501"

gsalisbury avatar Aug 22 '23 02:08 gsalisbury

I am unable to reproduce this issue. I'm using CentOS 7 Docker images with self-signed certificates, and I get no errors.

@gsalisbury, could you perhaps indicate how you are generating your certificates? It's possible the problem comes down to a particular configuration there.
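
For comparison, a minimal self-signed CA plus server certificate can be generated along these lines (names, paths, and lifetimes are placeholders, not necessarily what either of us is using):

```sh
# Minimal sketch: a throwaway CA and a server certificate carrying a SAN
# (requires bash for the process substitution used for the SAN extension).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.pem -subj "/CN=test-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=receiver.example.com"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial \
  -days 365 -out server.pem \
  -extfile <(printf "subjectAltName=DNS:receiver.example.com")
```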

StephenWakely avatar Sep 14 '23 12:09 StephenWakely

Hi @StephenWakely, thank you for taking a look at this.

We are using our Puppet infrastructure certificates as they already exist on all our nodes.

Using stunnel to terminate TLS works fine with the same certificates; I was using this as a test/workaround to avoid the noise from Vector intermittently failing to connect. The setup was: vector shipper -> plaintext http sink -> local stunnel -> remote stunnel on the collector node -> plaintext http vector source.
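
The client-side stunnel configuration is roughly of this shape (ports, hostnames, and paths here are examples rather than our exact values):

```sh
# Rough sketch of the client-side stunnel setup: Vector's http sink points at
# 127.0.0.1:10501 in plain HTTP, and stunnel wraps it in TLS with the client
# certificate before forwarding to the collector.
cat > /etc/stunnel/vector-client.conf <<'EOF'
[vector-http]
client = yes
accept = 127.0.0.1:10501
connect = collector.example.com:501
CAfile = /etc/ssl/certs/ca.pem
cert = /etc/ssl/certs/myhost.pem
key = /etc/ssl/private_keys/myhost.pem
verify = 2
EOF
stunnel /etc/stunnel/vector-client.conf
```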

gsalisbury avatar Sep 25 '23 03:09 gsalisbury

Hi @gsalisbury, I too have not been able to reproduce this yet.

One thing of note: OpenSSL 1.0.2 is EOL. Does this issue reproduce for you with OpenSSL 3?

neuronull avatar Oct 24 '23 18:10 neuronull

Hello,

I'm facing the same bug with the latest version.

vector 0.33.0 (x86_64-unknown-linux-gnu 89605fb 2023-09-27 14:18:24.180809939)
OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)

My sink configuration is the following.

sinks:
  http_out:
    type: http
    inputs:
      - my_filter
    encoding:
      codec: json
    uri: "https://remote-http-endpoint.com:24224"
    tls:
      ca_file: /etc/vector/tls/ca.crt
      crt_file: /etc/vector/tls/http.crt
      key_file: /etc/vector/tls/http.key
      verify_certificate: true
      verify_hostname: true

I was able to collect the following debug logs.

2023-10-25T12:21:17.824839Z DEBUG hyper::proto::h1::io: parsed 5 headers
2023-10-25T12:21:17.824912Z DEBUG hyper::proto::h1::conn: incoming body is empty
2023-10-25T12:21:17.826909Z DEBUG sink{component_kind="sink" component_id=metrics_out component_type=prometheus_exporter component_name=metrics_out}: vector::internal_events::prometheus: Request to prometheus server complete. status_code=200 OK
2023-10-25T12:21:17.827203Z DEBUG hyper::proto::h1::io: flushed 91670 bytes
2023-10-25T12:21:22.447462Z DEBUG sink{component_kind="sink" component_id=metrics_out component_type=prometheus_exporter component_name=metrics_out}: vector::utilization: utilization=0.000000041862824020523594
2023-10-25T12:21:22.447573Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}: vector::utilization: utilization=0.000000042052472447409205
2023-10-25T12:21:23.913599Z DEBUG hyper::proto::h1::decode: incoming chunked header: 0x132 (306 bytes)
2023-10-25T12:21:24.916096Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: vector::internal_events::http_client: Sending HTTP request. uri=https://remote-http-endpoint.com:24224/ method=POST version=HTTP/1.1 headers={"content-type": "application/json", "accept-encoding": "gzip", "user-agent": "Vector/0.33.0 (x86_64-unknown-linux-gnu 89605fb 2023-09-27 14:18:24.180809939)"} body=[602 bytes]
2023-10-25T12:21:24.916517Z DEBUG hyper::client::connect::dns: resolving host="remote-http-endpoint.com"
2023-10-25T12:21:24.919174Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: hyper::client::connect::http: connecting to 10.47.3.18:24224
2023-10-25T12:21:24.919959Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: hyper::client::connect::http: connected to 10.47.3.18:24224
2023-10-25T12:21:24.928495Z  WARN sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: vector::internal_events::http_client: HTTP error. error=error trying to connect: error:0A000438:SSL routines:(unknown function):tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1586:SSL alert number 80 error_type="request_failed" stage="processing" internal_log_rate_limit=true
2023-10-25T12:21:24.928592Z  WARN sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}: vector::sinks::util::retries: Retrying after error. error=Failed to make HTTP(S) request: error trying to connect: error:0A000438:SSL routines:(unknown function):tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1586:SSL alert number 80 internal_log_rate_limit=true
2023-10-25T12:21:24.928609Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}: vector::sinks::util::retries: Retrying request. delay_ms=1000
2023-10-25T12:21:25.929816Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: vector::internal_events::http_client: Sending HTTP request. uri=https://remote-http-endpoint.com:24224/ method=POST version=HTTP/1.1 headers={"content-type": "application/json", "accept-encoding": "gzip", "user-agent": "Vector/0.33.0 (x86_64-unknown-linux-gnu 89605fb 2023-09-27 14:18:24.180809939)"} body=[602 bytes]
2023-10-25T12:21:25.930120Z DEBUG hyper::client::connect::dns: resolving host="remote-http-endpoint.com"
2023-10-25T12:21:25.932932Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: hyper::client::connect::http: connecting to 10.47.3.18:24224
2023-10-25T12:21:25.933554Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: hyper::client::connect::http: connected to 10.47.3.18:24224
2023-10-25T12:21:25.955748Z DEBUG hyper::proto::h1::io: flushed 868 bytes
2023-10-25T12:21:25.959910Z DEBUG hyper::proto::h1::io: parsed 3 headers
2023-10-25T12:21:25.959981Z DEBUG hyper::proto::h1::conn: incoming body is empty
2023-10-25T12:21:25.960054Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: hyper::client::pool: pooling idle connection for ("https", remote-http-endpoint.com:24224)
2023-10-25T12:21:25.960082Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=5}:http: vector::internal_events::http_client: HTTP response. status=200 OK version=HTTP/1.1 headers={"content-type": "text/plain", "connection": "Keep-Alive", "content-length": "0"} body=[empty]
2023-10-25T12:21:26.331618Z DEBUG hyper::proto::h1::decode: incoming chunked header: 0x14F (335 bytes)
2023-10-25T12:21:26.336517Z DEBUG hyper::proto::h1::decode: incoming chunked header: 0x122 (290 bytes)
2023-10-25T12:21:27.333572Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=6}:http: vector::internal_events::http_client: Sending HTTP request. uri=https://remote-http-endpoint.com:24224/ method=POST version=HTTP/1.1 headers={"content-type": "application/json", "accept-encoding": "gzip", "user-agent": "Vector/0.33.0 (x86_64-unknown-linux-gnu 89605fb 2023-09-27 14:18:24.180809939)"} body=[1214 bytes]
2023-10-25T12:21:27.333674Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=6}:http: hyper::client::pool: reuse idle connection for ("https", remote-http-endpoint.com:24224)
2023-10-25T12:21:27.333879Z DEBUG hyper::proto::h1::io: flushed 1481 bytes
2023-10-25T12:21:27.339279Z DEBUG hyper::proto::h1::io: parsed 3 headers
2023-10-25T12:21:27.339312Z DEBUG hyper::proto::h1::conn: incoming body is empty
2023-10-25T12:21:27.339363Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=6}:http: hyper::client::pool: pooling idle connection for ("https", remote-http-endpoint.com:24224)
2023-10-25T12:21:27.339389Z DEBUG sink{component_kind="sink" component_id=http_out component_type=http component_name=http_out}:request{request_id=6}:http: vector::internal_events::http_client: HTTP response. status=200 OK version=HTTP/1.1 headers={"content-type": "text/plain", "connection": "Keep-Alive", "content-length": "0"} body=[empty]

The remote HTTP server is a Fluentd HTTP input plugin. Don't hesitate to ask if you need more logs or details, if that can help.

Thanks a lot!

aerialls avatar Oct 25 '23 12:10 aerialls

Hey @aerialls, thanks very much for adding your use case and data points!

It's helpful to know you are running with OpenSSL 3.

I've been reading up on this. Basically, if you do a web search for

0A000438:SSL routines:(unknown function):tlsv1 alert internal error:ssl/record/rec_layer_s3.c:1586:SSL alert number 80

there are a number of threads discussing this issue.

The most relevant one I could find, and the one that could potentially describe what we are seeing here, is:

https://github.com/openssl/openssl/issues/19518

which basically states that this issue can happen when there is a discrepancy between the client's and server's OpenSSL versions, in particular when the server runs an old OpenSSL version, and that the error we see is an internal error reported by the server (which matches the analysis from the Vector logs).

In particular, the original report of this Vector issue was that both client and server were openssl 1.0.2k-fips (old version).

In your case, the client is a newer openssl 3 version. But what do we know of the server?

In my local repro attempts I've so far been unable to reproduce it with both client and server on OpenSSL 3.

So if the theory holds, then this is likely not something Vector can really do anything about.

To help validate the theory I think these might be good next steps:

  1. Do you see the same error when you use curl against that remote Fluentd plugin server? In some of the web searches I've done, folks were able to reproduce this issue with curl (a rough sketch of what I mean follows this list).

  2. I will see if I can try a local Vector <-> Vector reproduction, with the client on OpenSSL 3 and the server on an older 1.x version.
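
For step 1, something like this is what I have in mind (hostname and paths taken from your sink configuration above; the payload is a placeholder):

```sh
# Post a single JSON event to the Fluentd HTTP input using the same client
# certificate, key, and CA as the Vector http sink (placeholder payload).
curl -v https://remote-http-endpoint.com:24224 \
  --cacert /etc/vector/tls/ca.crt \
  --cert /etc/vector/tls/http.crt \
  --key /etc/vector/tls/http.key \
  -H 'Content-Type: application/json' \
  --data-binary '{"message":"tls test"}'
```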

neuronull avatar Oct 26 '23 21:10 neuronull

Results of my local experiment: I still haven't been able to reproduce it.

It could still be something with the certificates, but I think the client/server OpenSSL version difference could also be a factor.

My setup was:

Client: vector 0.33.0, openssl 3.1.3
Server: vector 0.33.0, openssl 1.1.1

sender.toml

[sources.input]
type = "socket"
address = "0.0.0.0:1234"
mode = "tcp"
path = "/tmp/vector.sock"

[sinks.output]
type = "http"
inputs = [ "input" ]
compression = "gzip"
encoding.codec = "json"
framing.method = "newline_delimited"
tls.ca_file = "tests/data/ca/certs/ca.cert.pem"
tls.crt_file = "tests/data/ca/intermediate_client/certs/localhost-chain.cert.pem"
tls.key_file = "tests/data/ca/intermediate_client/private/localhost.key.pem"
tls.verify_certificate = true
uri = "http://0.0.0.0:5002"

receiver.toml

[sources.input]
type = "http_server"
address = "0.0.0.0:5002"
tls.ca_file = "tests/data/ca/certs/ca.cert.pem"
tls.crt_file = "tests/data/ca/intermediate_server/certs/localhost-chain.cert.pem"
tls.key_file = "tests/data/ca/intermediate_server/private/localhost.key.pem"
tls.verify_certificate = true
framing.method = "newline_delimited"
decoding.codec = "json"

[sinks.output]
type = "console"
inputs = [ "input" ]
encoding.codec = "json"

neuronull avatar Oct 27 '23 21:10 neuronull

Same thing with the vector sink. It appears randomly across different OSes and also disappears randomly after the next Vector or system upgrade. I think it's related to OpenSSL, but openssl s_client works well.

Reonaydo avatar Feb 07 '24 20:02 Reonaydo

I can reproduce the same issue with the vector sink:

[sinks.vectorInput]
  type        = "vector"
  inputs      = ["defaultChecks", "catchAbortAndError"]
  address     = "${VECTOR_INPUT}:6001" # a bug requires hard-coding the port here: https://github.com/vectordotdev/helm-charts/issues/348
  compression = false
  healthcheck = true
  version     = "2"

  [sinks.vectorInput.tls]
    enabled            = true
    ca_file            = "/etc/${VECTOR_USER_CERT}/ca.crt"
    crt_file           = "/etc/${VECTOR_USER_CERT}/tls.crt"
    key_file           = "/etc/${VECTOR_USER_CERT}/tls.key"
    key_pass           = ""
    verify_certificate = true
    verify_hostname    = false

vector source:

[sources.vector]
  type    = "vector"
  address = "0.0.0.0:6001"
  version = "2"
  [sources.vector.tls]
    enabled            = true
    ca_file            = "/etc/vector-${VECTOR_USER}-tls/ca.crt"
    crt_file           = "/etc/vector-${VECTOR_USER}-tls/tls.crt"
    key_file           = "/etc/vector-${VECTOR_USER}-tls/tls.key"
    verify_certificate = true

both containers are running with:

vector 0.35.0 (x86_64-unknown-linux-musl e57c0c0 2024-01-08 14:42:10.103908779)
libcrypto3-3.1.4-r1 x86_64 {openssl} (Apache-2.0) [installed]
libssl3-3.1.4-r1 x86_64 {openssl} (Apache-2.0) [installed]

the errors:

2024-04-10T08:15:19.505533Z  WARN sink{component_kind="sink" component_id=vectorInput component_type=vector}:request{request_id=828}: vector::sinks::util::retries: Retrying after error. error=Request failed: status: Unavailable, message: "error trying to connect: error:0A000438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:ssl/record/rec_layer_s3.c:861:SSL alert number 80", details: [], metadata: MetadataMap { headers: {} } internal_log_rate_limit=true

A Traefik proxy is in between, but I see the same error with a direct connection between the Vector agent and aggregator.

Some events are still going through, but due to the high volume it is really challenging to troubleshoot which ones are dropped.

P.S. It would be helpful if this issue could be re-qualified as affecting multiple sinks/sources rather than only "sink: http".

DimDroll avatar Apr 10 '24 08:04 DimDroll