Bad certificates error on running logstash
Kibana Build details:
VERSION: 8.15.1
BUILD: 76534
COMMIT: f66ec5b0ddd990d103489c47ca1bcb97dc50bc6b
Preconditions:
- 8.15.1 Self-managed environment should be available.
Steps to reproduce:
- Create certs using below command:
elasticsearch-certutil ca --pem
elasticsearch-certutil cert --name logstash --ca-cert C:\elk\elasticsearch\ca\ca.crt --ca-key C:\elk\elasticsearch\ca\ca.key --dns <public-dns> --ip <public-ip> --pem
elasticsearch-certutil cert --name client --ca-cert C:\elk\elasticsearch\ca\ca.crt --ca-key C:\elk\elasticsearch\ca\ca.key --dns <public-dns> --ip <public-ip> --pem
- Convert the logstash key to PKCS#8 format:
openssl pkcs8 -inform PEM -in logstash.key -topk8 -nocrypt -outform PEM -out logstash.pkcs8.key
- Use elasticsearch/config/http_ca.crt as the cacert and move it to C:\elk\logstash\config\.
- Update elastic-agent-pipeline.conf to:
input {
elastic_agent {
port => 5044
ssl => true
ssl_certificate_authorities => ["C:\elk\elasticsearch\ca\ca.crt"]
ssl_certificate => "C:\elk\elasticsearch\logstash\logstash.crt"
ssl_key => "C:\elk\elasticsearch\logstash\logstash.pkcs8.key"
ssl_verify_mode => "force_peer"
}
}
output {
elasticsearch {
hosts => "<elasticsearchhost>"
api_key => "<api_key>"
data_stream => true
ssl => true
cacert => "C:\elk\logstash\config\http_ca.crt"
}
}
- Update pipelines.yml to:
- pipeline.id: elastic-agent-pipeline
path.config: "C:\elk\logstash\config\elastic-agent-pipeline.conf"
- Run logstash using:
logstash -f C:\elk\logstash\config\elastic-agent-pipeline.conf
- Observe that a certificate error is visible.
Expected Result: Logstash should run without any certificate errors.
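As a sanity check for the steps above, an equivalent chain can be rebuilt and inspected with plain openssl (a sketch only; the file names and SAN values are illustrative stand-ins for the elasticsearch-certutil output, not the exact files from this reproduction):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# 1. Throwaway CA (stand-in for: elasticsearch-certutil ca --pem)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1

# 2. Server cert with DNS and IP SANs (stand-in for the logstash cert)
openssl req -newkey rsa:2048 -nodes -keyout logstash.key -out logstash.csr \
  -subj "/CN=logstash"
openssl x509 -req -in logstash.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out logstash.crt -days 1 \
  -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost")

# 3. Convert the key to PKCS#8, as in the reproduction steps
openssl pkcs8 -inform PEM -in logstash.key -topk8 -nocrypt -outform PEM \
  -out logstash.pkcs8.key

# 4. Checks that would catch a broken chain, a missing SAN, or a
#    key that was never actually converted:
openssl verify -CAfile ca.crt logstash.crt
openssl x509 -in logstash.crt -noout -ext subjectAltName
head -1 logstash.pkcs8.key   # a PKCS#8 key starts with "-----BEGIN PRIVATE KEY-----"
```

If any of these three checks fails on the real files, the TLS handshake is expected to fail before Logstash configuration even matters.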
Logs: Logs.txt
Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)
@muskangulati-qasource Please review.
Secondary review for this ticket is done!
Logstash doesn't like something about the certs; transferring to the Logstash team, as they're more likely to have ideas about the possible sources of the bad_certificate exception here.
Hi Team,
We have revalidated this issue on an 8.16.0 BC1 Kibana cloud environment and found it is still reproducible.
Observations:
- Bad certificates error on running logstash
Build details:
VERSION: 8.16.0 BC1
BUILD: 79314
COMMIT: 5575428dd3aef69366cddb4ccf07a2a26d30ce48
Please let us know if we are missing anything here. Thanks!!
The netty errors come from the elastic_agent input, so we can disregard the rest of the stack for this issue and concentrate on Agent connecting to Logstash.
Received fatal alert: bad_certificate means Logstash received a message from its peer indicating that the peer did not accept the certificate Logstash sent.
Following the sequence of steps, I was able to start Logstash with the certificates generated by elasticsearch-certutil, and then use openssl s_client to perform the TLS handshake with Logstash successfully.
If you're able to reproduce this locally, maybe we can Zoom and look at the details? It could be something like a wrong SAN or CN in the certificate.
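For reference, the s_client handshake check described above can be sketched end-to-end against a throwaway listener (everything here is illustrative: the port, cert subjects, and file names are stand-ins, and openssl s_server plays the role of the Logstash elastic_agent input):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
port=15044   # stand-in for the elastic_agent input port (5044 in the issue)

# Throwaway CA + server cert, standing in for the certutil-generated chain
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=localhost"
openssl x509 -req -in srv.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out srv.crt -days 1 -extfile <(printf "subjectAltName=DNS:localhost")

# TLS listener playing the role of Logstash
openssl s_server -accept "$port" -cert srv.crt -key srv.key -quiet &
srv=$!
sleep 1

# Handshake the way a connecting peer (e.g. Agent) would;
# "Verify return code: 0 (ok)" means the server certificate was
# accepted against the CA.
echo | openssl s_client -connect "localhost:$port" -CAfile ca.crt \
  > handshake.log 2>&1 || true
kill "$srv" 2>/dev/null || true

grep "Verify return code" handshake.log
```

Running the same s_client command against the real Logstash endpoint, with the real ca.crt, shows directly whether the peer-side verification succeeds or which certificate in the chain it rejects.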
Hi @jsvd
Thank you for confirming. We have revalidated this on a fresh AWS Windows Server 2022 setup with the ports opened to Anywhere IPv4, and had the observations below:
- We added ports 5601, 9200, 8220, and 5044 with Anywhere IPv4 access in the Security Groups of that VM.
- Set up self-managed 8.16.0.
- Created certs and set up Logstash.
- We connected the agent and it successfully loaded the data through logstash output.
Earlier we were trying with access restricted to specific IPs, which could have caused the issue.
Could you please confirm whether we can use this configuration to resolve the bad-certificate errors?
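Since restricted security-group access turned out to be the culprit here, a plain TCP reachability probe is a cheap first check before digging into certificates. A minimal sketch (the host and port are stand-ins for the real Logstash endpoint, and a throwaway Python HTTP server stands in for the listener so the sketch is self-contained):

```shell
set -e
host=127.0.0.1   # stand-in for the Logstash host
port=15055       # stand-in for the Agent/Beats port (5044 in the issue)

# Throwaway listener so this sketch has something to probe
# (assumes python3 is available)
python3 -m http.server "$port" --bind "$host" >/dev/null 2>&1 &
srv=$!
sleep 1

# bash /dev/tcp probe: succeeds only if a TCP connection can be opened,
# i.e. no firewall / security-group rule is dropping the traffic.
if timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port"; then
  status="reachable"
else
  status="blocked or closed"
fi
echo "port $port $status" | tee probe.result

kill "$srv" 2>/dev/null || true
```

If the probe reports the port as blocked, the resulting connection resets can surface in logs as TLS-level errors even though no certificate is at fault.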
Screenshots:
https://github.com/user-attachments/assets/b09d5bae-b0f5-4cd4-8566-82f7a34588b3
As the issue is resolved, we are closing and marking this as QA:Validated.
Please let us know if anything else is required from our end. Thanks!!