[BUG] Opensearch does not get deployed

Open · sfisli opened this issue 8 months ago • 13 comments

What is the bug?

The OpenSearch cluster does not get deployed.

What is the expected behavior?

OpenSearch cluster up and running (nodes and dashboard).

What is your host/environment?

Bare Metal Kubernetes v1.26.7

Do you have any additional context?

Operator-values.yaml:

nameOverride: ""
fullnameOverride: ""
domain: monitoring

nodeSelector: {}
tolerations: []
securityContext:
  runAsNonRoot: true
manager:
  securityContext:
    allowPrivilegeEscalation: false
  extraEnv: []
  resources:
    limits:
      cpu: 200m
      memory: 500Mi
    requests:
      cpu: 100m
      memory: 350Mi

  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 8081
    periodSeconds: 15
    successThreshold: 1
    timeoutSeconds: 3
    initialDelaySeconds: 10

  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /readyz
      port: 8081
    periodSeconds: 15
    successThreshold: 1
    timeoutSeconds: 3
    initialDelaySeconds: 10

  # Set this to false to disable the experimental parallel recovery in case you are experiencing problems
  parallelRecoveryEnabled: true

  image:
    repository: opensearchproject/opensearch-operator
    ## tag defaults to the appVersion from Chart.yaml; to override, set e.g. tag: "v1.1"
    tag: ""
    pullPolicy: "Always"

  ## Optional array of imagePullSecrets containing private registry credentials
  imagePullSecrets: []
  # - name: secretName

  dnsBase: cluster.local

  # Log level of the operator. Possible values: debug, info, warn, error
  loglevel: info

  # If a watchNamespace is specified, the manager's cache will be restricted to
  # watch objects in the desired namespace. Default is to watch all namespaces.
  watchNamespace:

# Install the Custom Resource Definitions with Helm
installCRDs: true

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Override the service account name. Defaults to opensearch-operator-controller-manager
  name: ""

kubeRbacProxy:
  enable: true
  securityContext:
    # allowPrivilegeEscalation: false
  resources:
    limits:
      cpu: 50m
      memory: 50Mi
    requests:
      cpu: 25m
      memory: 25Mi

  livenessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10443
      scheme: HTTPS
    periodSeconds: 15
    successThreshold: 1
    timeoutSeconds: 3
    initialDelaySeconds: 10

  readinessProbe:
    failureThreshold: 3
    httpGet:
      path: /healthz
      port: 10443
      scheme: HTTPS
    periodSeconds: 15
    successThreshold: 1
    timeoutSeconds: 3
    initialDelaySeconds: 10

  image:
    repository: "gcr.io/kubebuilder/kube-rbac-proxy"
    tag: "v0.15.0"

opensearch-cluster.yaml:


apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: adeiz-opensearch-cluster
  namespace: monitoring
spec:
  security:
     tls:
       transport:
         generate: true
         perNode: true
       http:
          generate: true
     config:
       adminCredentialsSecret: # these are the admin credentials for the Operator to use
         name: admin-credentials-secret
       securityConfigSecret:  # this is the whole security configuration for OpenSearch
         name: securityconfig-secret
  general:
    setVMMaxMapCount: true
    serviceName: adeiz-opensearch-cluster
    version: 2.13.0
  dashboards:
    opensearchCredentialsSecret:
      name: admin-credentials-secret
    enable: true
    tls:
      enable: true
      generate: true
    version: 2.13.0
    replicas: 1
    resources:
      requests:
         memory: "512Mi"
         cpu: "200m"
      limits:
         memory: "512Mi"
         cpu: "200m"
  nodePools:
    - component: nodes
      replicas: 2
      diskSize: "10Gi"
      nodeSelector:
      resources:
         requests:
            memory: "2Gi"
            cpu: "1000m"
         limits:
           # memory: "2Gi"
            #cpu: "500m"
      roles:
        - "cluster_manager"
        - "data"

security-config.yaml:

Waiting to connect to the cluster
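
(The actual contents of security-config.yaml are not included above.) For context, the operator docs describe the referenced securityconfig-secret as carrying the security plugin configuration files as individual keys. A rough, hypothetical outline of that shape, not the poster's file:

apiVersion: v1
kind: Secret
metadata:
  name: securityconfig-secret
  namespace: monitoring
type: Opaque
stringData:
  internal_users.yml: |
    # must define the admin user whose password hash matches the password
    # stored in admin-credentials-secret
    ...
  config.yml: |
    ...
  roles.yml: |
    ...
  roles_mapping.yml: |
    ...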

Operator logs:

For more information, please go to https://github.com/brancz/kube-rbac-proxy/issues/187

===============================================

I0614 14:31:32.316748       1 kube-rbac-proxy.go:284] Valid token audiences: 
I0614 14:31:32.316848       1 kube-rbac-proxy.go:378] Generating self signed cert as no cert is provided
I0614 14:31:42.815130       1 kube-rbac-proxy.go:442] Starting TCP socket on 0.0.0.0:8443
I0614 14:31:42.815173       1 kube-rbac-proxy.go:490] Starting TCP socket on 0.0.0.0:10443
I0614 14:31:42.815651       1 kube-rbac-proxy.go:497] Listening securely on 0.0.0.0:10443 for proxy endpoints
I0614 14:31:42.815727       1 kube-rbac-proxy.go:449] Listening securely on 0.0.0.0:8443

sfisli · Jun 14 '24 14:06