cloud-on-k8s
After modifying the eck-operator-managed Elasticsearch configuration to change the communication mode from HTTP to HTTPS, some pods cannot restart, so the updated pods cannot join the existing cluster
Bug Report
What did you do?
In an Elasticsearch cluster with 3 master nodes and 2 data nodes, I modified the Elasticsearch CRD configuration to change the communication mode from HTTP to HTTPS.
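For context, this kind of switch is usually done through the http.tls.selfSignedCertificate setting on the Elasticsearch resource; a minimal sketch of the before/after, not the reporter's exact manifest:

# Before: self-signed certificate disabled, the HTTP layer speaks plain HTTP
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true

# After: flag removed or set to false, ECK switches the HTTP layer to HTTPS
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: false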
What did you expect to see?
I expected the pods to restart so that the configuration change would take effect.
What did you see instead? Under which circumstances?
The pods do not restart, cluster communication is still over HTTP, and the eck-operator reports the cluster stuck in the ApplyingChanges phase.
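For reference, the phase can be checked directly on the Elasticsearch resource; a minimal sketch, assuming a cluster named testing in the current namespace:

$ kubectl get elasticsearch testing        # the PHASE column shows ApplyingChanges while a change is pending
$ kubectl describe elasticsearch testing   # status and events can help explain why the rollout is blocked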
Environment
- OS: linux x86_64
- ECK version: 2.3.0
- Elasticsearch version: 7.10.1
- Kubernetes version ($ kubectl version): v1.20.15
- Resource definition: (not provided)
- Logs: (not provided)
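For reference, operator logs like the ones requested above can usually be collected with something like the following, assuming ECK was installed into the default elastic-system namespace (adjust the namespace if the operator runs elsewhere):

$ kubectl -n elastic-system logs statefulset/elastic-operator
$ kubectl -n elastic-system get pods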
@lulihahaha I am unable to replicate this behavior. I started ECK version 2.3.0 and applied the following manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: testing
spec:
  version: 7.10.1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
  nodeSets:
  - name: masters
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard-rwo
    config:
      node.roles: ["master"]
      node.store.allow_mmap: false
  - name: data
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard-rwo
    config:
      node.roles: ["data"]
      node.store.allow_mmap: false
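A sketch of how a manifest like this would typically be applied and the cluster health checked (the file name testing-es.yaml is an assumption):

$ kubectl apply -f testing-es.yaml
$ kubectl get elasticsearch testing   # wait until HEALTH is green and PHASE is Ready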
I waited for the cluster to become healthy, and adjusted to this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: testing
spec:
  version: 7.10.1
  # http:
  #   tls:
  #     selfSignedCertificate:
  #       disabled: true
  nodeSets:
  - name: masters
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard-rwo
    config:
      node.roles: ["master"]
      node.store.allow_mmap: false
  - name: data
    count: 2
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data # Do not change this name unless you set up a volume mount for the data path.
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: standard-rwo
    config:
      node.roles: ["data"]
      node.store.allow_mmap: false
I watched all the pods in the data nodeSet restart and become healthy, then watched masters-2, masters-1, and masters-0 restart in that order and become healthy.
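For reference, one way to watch a rolling restart like this, assuming the cluster name testing from the manifests above and the standard label ECK puts on Elasticsearch pods:

$ kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=testing -w
$ kubectl get elasticsearch testing -w   # PHASE returns to Ready once the rollout completes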
As noted in the issue template ("if relevant insert the resource definition"), if you could please add the exact manifest you are using, we could help further. Thanks.
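If it helps, the manifest as currently applied can be pulled straight from the cluster; <cluster-name> and <namespace> below are placeholders:

$ kubectl get elasticsearch <cluster-name> -n <namespace> -o yaml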
On a side note, I would highly discourage running production Kubernetes workloads in the kube-system namespace, as that is typically reserved for applications that are mission-critical for Kubernetes itself to function.
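For what it's worth, moving such a workload out of kube-system only requires a dedicated namespace; a sketch, with the namespace name elastic-apps as an assumption:

$ kubectl create namespace elastic-apps

# and in the Elasticsearch manifest:
metadata:
  name: testing
  namespace: elastic-apps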
Closing due to inactivity - feel free to reopen if needed.