
ECK operator gets 401 Unauthorized when trying to set up Fleet Server

Open · legoguy1000 opened this issue 2 years ago · 22 comments

Bug Report

What did you do? See https://discuss.elastic.co/t/elastic-agent-fleet-setup-unauthorized/317406

We are deploying an ECK cluster on a bare-metal k8s cluster. These clusters are short-lived and are rebuilt many times, so this is not a permanent environment. The issue is very inconsistent, but often when we deploy, Elasticsearch and Kibana come up via ECK with no issues, yet when I try to deploy Fleet Server, the pod for the Fleet Server agent is never created. When the operator calls the Kibana API to set up Fleet, it returns the errors below: first a series of 401s, then eventually timeouts.

I have found that most of the time, if I delete the ECK operator pod, the Fleet Server pod is eventually created once the operator pod is recreated. I don't have any issues with the regular agents deployed via ECK once the Fleet Server is up.

Also, this issue seems to be far more prevalent when I have ECK use a local offline Docker registry with no internet access, but I don't know if that is just a coincidence.

What did you expect to see? The Fleet Server pod is created without issues.

What did you see instead? Under which circumstances?

Environment

  • ECK version:

2.4.0 and 2.5.0 (current)

  • Kubernetes information:

k8s BareMetal v1.23

  • Resource definition:
---
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: default
spec:
  version: 7.17.6
  kibanaRef:
    name: <kibana>
  elasticsearchRefs:
  - name: <ES>
  http:
    service:
      spec:
        type: LoadBalancer
        ports:
        - name: https
          port: 443
          targetPort: 8220
          protocol: TCP
    tls:
      certificate:
        secretName: fleet-server-certificate
  mode: fleet
  fleetServerEnabled: true
  policyID: eck-fleet-server
  deployment:
    replicas: 1
    podTemplate:
      spec:
        securityContext:
          runAsUser: 0
  • Logs:
{"log.level":"error","@timestamp":"2022-11-04T18:51:42.042Z","log.logger":"manager.eck-operator","message":"Reconciler error","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","controller":"agent-controller","object":{"name":"fleet-server","namespace":"default"},"namespace":"default","name":"fleet-server","reconcileID":"952e80a4-c600-432f-8f8c-f7b7034b5bf2","error":"failed to request https://chimera-kb-http.default.svc:5601/api/fleet/setup, status is 401)","errorCauses":[{"error":"failed to request https://chimera-kb-http.default.svc:5601/api/fleet/setup, status is 401)"}],"error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}
{"log.level":"info","@timestamp":"2022-11-04T18:51:42.839Z","log.logger":"license-controller","message":"Starting reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"44","namespace":"default","es_name":"chimera"}
{"log.level":"info","@timestamp":"2022-11-04T18:51:42.840Z","log.logger":"license-controller","message":"Ending reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"44","namespace":"default","es_name":"chimera","took":0.001236766}
{"log.level":"info","@timestamp":"2022-11-04T18:51:52.284Z","log.logger":"agent-controller","message":"Starting reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"43","namespace":"default","agent_name":"fleet-server"}
{"log.level":"info","@timestamp":"2022-11-04T18:51:52.284Z","log.logger":"agent-controller","message":"Updating resource","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"43","namespace":"default","agent_name":"fleet-server","kind":"Service","namespace":"default","name":"fleet-server-agent-http"}
{"log.level":"info","@timestamp":"2022-11-04T18:51:52.327Z","log.logger":"agent-controller","message":"Ending reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"43","namespace":"default","agent_name":"fleet-server","took":0.042924978}
{"log.level":"error","@timestamp":"2022-11-04T18:51:52.327Z","log.logger":"manager.eck-operator","message":"Reconciler error","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","controller":"agent-controller","object":{"name":"fleet-server","namespace":"default"},"namespace":"default","name":"fleet-server","reconcileID":"f469c1fa-9716-4f5d-aed2-eaec9bb19530","error":"failed to request https://chimera-kb-http.default.svc:5601/api/fleet/setup, status is 401)","errorCauses":[{"error":"failed to request https://chimera-kb-http.default.svc:5601/api/fleet/setup, status is 401)"}],"error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}
{"log.level":"info","@timestamp":"2022-11-04T18:52:12.808Z","log.logger":"agent-controller","message":"Starting reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"44","namespace":"default","agent_name":"fleet-server"}
{"log.level":"info","@timestamp":"2022-11-04T18:52:12.808Z","log.logger":"agent-controller","message":"Updating resource","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"44","namespace":"default","agent_name":"fleet-server","kind":"Service","namespace":"default","name":"fleet-server-agent-http"}
{"log.level":"info","@timestamp":"2022-11-04T18:53:12.815Z","log.logger":"agent-controller","message":"Ending reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"44","namespace":"default","agent_name":"fleet-server","took":60.006738768}
{"log.level":"error","@timestamp":"2022-11-04T18:53:12.815Z","log.logger":"manager.eck-operator","message":"Reconciler error","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","controller":"agent-controller","object":{"name":"fleet-server","namespace":"default"},"namespace":"default","name":"fleet-server","reconcileID":"f89aee9d-a422-4c11-80a0-0e6eeabb9e76","error":"Post \"https://chimera-kb-http.default.svc:5601/api/fleet/setup\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)","errorCauses":[{"error":"Post \"https://chimera-kb-http.default.svc:5601/api/fleet/setup\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"}],"error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}
{"log.level":"info","@timestamp":"2022-11-04T18:53:53.775Z","log.logger":"agent-controller","message":"Starting reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"45","namespace":"default","agent_name":"fleet-server"}
{"log.level":"info","@timestamp":"2022-11-04T18:53:53.776Z","log.logger":"agent-controller","message":"Updating resource","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"45","namespace":"default","agent_name":"fleet-server","kind":"Service","namespace":"default","name":"fleet-server-agent-http"}
{"log.level":"info","@timestamp":"2022-11-04T18:54:53.788Z","log.logger":"agent-controller","message":"Ending reconciliation run","service.version":"2.5.0+642f9ecd","service.type":"eck","ecs.version":"1.4.0","iteration":"45","namespace":"default","agent_name":"fleet-server","took":60.012719521}

legoguy1000 · Nov 04 '22 19:11

Did you ever figure this out? I'm seeing the same issue.

taxilian · Feb 06 '23 07:02

I'm not sure what fixed it for me, but I:

  • a) upgraded the operator from 2.5.0 to 2.6.1
  • b) moved the Fleet deployment from another namespace into the same namespace as ELK (found hints in the docs that separate namespaces are not well supported)
  • c) kept deleting/recreating the fleet-server Pod

Finally it somehow worked.

ghost · Feb 06 '23 07:02

I believe it's still an issue; however, our workaround is to just delete the ECK operator pod after deploying the Agent resource. After the operator pod is recreated, the Fleet Server and then the regular agents are created without issue. Once I upgrade to 2.6.x, I'll have to see if it's still something we have to do.
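For reference, that workaround is just forcing a fresh reconciliation by restarting the operator. A minimal sketch, assuming the default install manifests (operator StatefulSet in the elastic-system namespace with the control-plane=elastic-operator label):

# Delete the operator pod; the StatefulSet recreates it and reconciliation starts over.
kubectl delete pod -n elastic-system -l control-plane=elastic-operator
# Watch the new operator pod come up, then re-check whether the Fleet Server pod gets created.
kubectl get pods -n elastic-system -w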

legoguy1000 · Feb 06 '23 10:02

Huh; I finally found I had some bad ServiceAccount definitions (wrong namespace). I fixed those, this issue went away, and the Fleet Server and agents all started up, but Kibana doesn't seem to know that there is a Fleet Server. This is probably all just because of stuff I don't understand, though, so I'll keep tinkering.

taxilian · Feb 06 '23 17:02

I"m testing upgrade to 2.6.1 and it seems to have resolved the issue

legoguy1000 · Feb 09 '23 19:02

I have been on 2.6.1 the whole time; it does seem to resolve eventually, but I see it each time. I did have to roll back to an older Fleet Server version (8.5.3 instead of 8.6.1) to get it to come up, though.

taxilian · Feb 09 '23 20:02

I just use the RBAC configs straight from the YAML from Elastic, so I don't seem to have an issue with the ServiceAccount stuff. I don't know. I'm still seeing the 401s in the logs, but it doesn't seem to be preventing Fleet Server from coming up.

legoguy1000 · Feb 09 '23 20:02

https://github.com/elastic/cloud-on-k8s/issues/6331 https://github.com/elastic/elastic-agent-autodiscover/issues/41

Fleet/Agent on 8.6.x is a known issue, and it is being worked on by the Agent team.

When testing Fleet with ECK, the Fleet pod will restart a couple of times on new installations while Kibana and Elasticsearch become fully healthy, but eventually this should succeed. If you continue to see issues once the above two issues are resolved, please feel free to re-open this issue. Thanks.
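That restart behaviour is easiest to follow by watching the resources and the agent pod come up; a small sketch, assuming everything lives in the default namespace:

# HEALTH/AVAILABLE columns for the ECK-managed resources.
kubectl get elasticsearch,kibana,agent -n default
# Watch the Fleet Server pod restart until Kibana and Elasticsearch are fully up.
kubectl get pods -n default -l common.k8s.elastic.co/type=agent -w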

naemono · Feb 15 '23 15:02

This issue was present with 8.5.1 and ECK 2.5 as well, and I don't see how the issues you linked relate to this one. This issue is with the ECK operator, not with the agents themselves; there is no agent, that's the whole issue.

legoguy1000 · Feb 15 '23 16:02

@legoguy1000 Perhaps I misunderstood the issue. I'll re-open and do some testing and update when I have more information.

naemono · Feb 15 '23 16:02

@legoguy1000 Can we get your full Kibana manifest/YAML to try and reproduce this, please?

Also, is there anything special about this certificate?

    tls:
      certificate:
        secretName: fleet-server-certificate

Also, are you bringing up ES/Kibana/Fleet all at the same time, or ES/Kibana first, then Fleet at a later date?

naemono · Feb 15 '23 22:02

We use Ansible to deploy everything. First Elasticsearch is deployed and we wait until the cluster is green (a sketch of that wait appears below, after the certificate manifest). Then Kibana, then Logstash (via a regular k8s Deployment). Then we deploy Fleet Server, and once Fleet Server is green, we deploy an agent DaemonSet. Our Kibana Ansible template:

---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: {{ cluster_name }}
spec:
  version: {{ elastic_ver }}
  count: {{ kibana_nodes }}
  elasticsearchRef:
    name: {{ cluster_name }}
    serviceName: elasticsearch-coord
  http:
    tls:
      certificate:
        secretName: kibana-certificate
  secureSettings:
  - secretName: kibana-key-secret-settings
  - secretName: kibana-alert-secret-settings
  config:
    server.publicBaseUrl: https://kibana.{{ domain }}
    uiSettings:
      overrides:
        "doc_table:legacy": true
        "theme:darkMode": true
    telemetry.optIn: false
    telemetry.allowChangingOptInStatus: false
    monitoring.ui.container.elasticsearch.enabled: false
    monitoring.ui.ccs.enabled: false
    xpack.reporting.enabled: true
    elasticsearch.requestTimeout: 100000
    elasticsearch.shardTimeout: 0
    monitoring.kibana.collection.interval: 30000
    xpack.fleet.agents.elasticsearch.hosts: ["https://elasticsearch-ingest.default.svc.cluster.local:9200"]
    xpack.fleet.agents.fleet_server.hosts:
      - "https://fleet-server-agent-http.default.svc"
{% if kit_external_dns != '' %}
      - "https://{{ external_dns }}:6062"
{% endif %}
{% if kit_external_ip != '' and kit_external_dns == '' %}
      - "https://{{ external_ip }}:6062"
{% endif %}
{% if configure_for_offline %}
    xpack.fleet.registryUrl: "http://package-registry.default.svc:8080"
{% endif %}
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: pfsense
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        is_managed: true
        namespace: default
        unenroll_timeout: 3600
        monitoring_enabled: []
        #   - logs
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Default Agent
        id: eck-agent
        namespace: default
        monitoring_enabled: []
        #   - logs
        #   - metrics
        unenroll_timeout: 1800
        is_default: true
        package_policies: []
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: NEWSFEED_ENABLED
          value: "false"
        - name: NODE_OPTIONS
          value: "--max-old-space-size={{ (kibana_memory * 1024 / 2) | int }}"
        - name: SERVER_MAXPAYLOAD
          value: "2097152"
        resources:
          requests:
            memory: {{ kibana_memory }}Gi
            cpu: {{ kibana_cpu }}
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels:
              common.k8s.elastic.co/type: "kibana"

The certificate is just a plain server certificate issued by cert-manager via a self-signed internal CA:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fleet-server
  namespace: default
spec:
  # Secret names are always required.
  secretName: fleet-server-certificate
  duration: {{ certmanager.default_cert_length }}
  renewBefore: {{ certmanager.default_cert_renewal }}
  commonName: fleet-server
  subject:
   organizations:
   - "{{ domain }}"
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - server auth
  dnsNames:
  - fleet-server
  - fleet-server.{{ domain }}
  - fleet-server.{{ fqdn }}
  - fleet-server-agent-http.default.svc
{% if external_dns != '' %}
  - {{ external_dns }}
{% endif %}
{% if external_ip != '' %}
  ipAddresses:
    - {{ external_ip }}
{% endif %}
  issuerRef:
    name: "{{ certmanager.ca_issuer }}"
    kind: Issuer
    group: cert-manager.io
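As a side note on the "wait until the cluster is green" step in the deployment order above: ECK reports cluster health on the Elasticsearch resource's status, so the wait can be scripted. A minimal sketch, assuming a cluster named chimera in the default namespace (in the Ansible setup this would come from cluster_name):

# Poll the Elasticsearch resource until ECK reports the cluster green, then move on to Kibana.
until [ "$(kubectl get elasticsearch chimera -n default -o jsonpath='{.status.health}')" = "green" ]; do
  sleep 10
done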

legoguy1000 · Feb 16 '23 01:02

@legoguy1000 So I tested this again, and here's what I saw:

  1. Laid down the ES manifest, and it became green:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: testing
spec:
  version: 8.6.0
  nodeSets:
    - name: masters
      count: 3
      config:
        node.roles: ["master", "data"]
        node.store.allow_mmap: false
      podTemplate:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: 1000
            fsGroup: 1000
  2. Laid down the Kibana manifest, and it became healthy (tried to mirror yours as much as possible):
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 8.6.0
  count: 1
  config:
    uiSettings:
      overrides:
        "doc_table:legacy": true
        "theme:darkMode": true
    telemetry.optIn: false
    telemetry.allowChangingOptInStatus: false
    monitoring.ui.container.elasticsearch.enabled: false
    monitoring.ui.ccs.enabled: false
    xpack.reporting.enabled: true
    elasticsearch.requestTimeout: 100000
    elasticsearch.shardTimeout: 0
    monitoring.kibana.collection.interval: 30000
    xpack.fleet.agents.elasticsearch.host: "https://testing-es-http.default.svc:9200"
    xpack.fleet.agents.fleet_server.hosts: ["https://fleet-server-agent-http.default.svc:8220"]
    xpack.fleet.packages:
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: fleet_server
        version: latest
      - name: pfsense
        version: latest
    xpack.fleet.agentPolicies:
      - name: Fleet Server on ECK policy
        id: eck-fleet-server
        is_default_fleet_server: true
        is_managed: true
        namespace: default
        unenroll_timeout: 3600
        monitoring_enabled: []
        #   - logs
        package_policies:
        - name: fleet_server-1
          id: fleet_server-1
          package:
            name: fleet_server
      - name: Default Agent
        id: eck-agent
        namespace: default
        monitoring_enabled: []
        #   - logs
        #   - metrics
        unenroll_timeout: 1800
        is_default: true
        package_policies: []
  elasticsearchRef:
    name: testing
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
        - name: NEWSFEED_ENABLED
          value: "false"
        - name: SERVER_MAXPAYLOAD
          value: "2097152"
  3. Laid down the Fleet Server manifest:
apiVersion: agent.k8s.elastic.co/v1alpha1
kind: Agent
metadata:
  name: fleet-server
  namespace: default
spec:
  version: 8.6.0
  kibanaRef:
    name: kibana
  elasticsearchRefs:
    - name: testing
  mode: fleet
  fleetServerEnabled: true
  deployment:
    replicas: 1
    podTemplate:
      spec:
        serviceAccountName: fleet-server
        automountServiceAccountToken: true
        securityContext:
          runAsUser: 0
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fleet-server
rules:
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - nodes
    verbs:
      - get
      - watch
      - list
  - apiGroups: ["coordination.k8s.io"]
    resources:
      - leases
    verbs:
      - get
      - create
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fleet-server
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fleet-server
subjects:
  - kind: ServiceAccount
    name: fleet-server
    namespace: default
roleRef:
  kind: ClusterRole
  name: fleet-server
  apiGroup: rbac.authorization.k8s.io

Upon laying down the Fleet Server manifest, I see the 401 errors in the operator logs:

{"log.level":"error","@timestamp":"2023-02-20T15:25:40.959Z","log.logger":"manager.eck-operator","message":"Reconciler error","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","controller":"agent-controller","object":{"name":"fleet-server","namespace":"default"},"namespace":"default","name":"fleet-server","reconcileID":"9042b248-a3a6-451f-959f-b9a8bc798937","error":"failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)","errorCauses":[{"error":"failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)"}],"error.stack_trace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:234"}
{"log.level":"debug","@timestamp":"2023-02-20T15:25:40.959Z","log.logger":"manager.eck-operator.events","message":"Reconciliation error: failed to request https://kibana-kb-http.default.svc:5601/api/fleet/setup, status is 401)","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","type":"Warning","object":{"kind":"Agent","namespace":"default","name":"fleet-server","uid":"655e4eee-d49b-48bf-b8f3-f0d4f2ef7917","apiVersion":"agent.k8s.elastic.co/v1alpha1","resourceVersion":"329734023"},"reason":"ReconciliationError"}

This is expected, as some "association" credentials are being reconciled to the ES instance, which takes some time; in my case, the setup eventually succeeds:

{"log.level":"debug","@timestamp":"2023-02-20T15:26:00.337Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"POST","url":"https://kibana-kb-http.default.svc:5601/api/fleet/setup"}
{"log.level":"debug","@timestamp":"2023-02-20T15:26:01.195Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"GET","url":"https://kibana-kb-http.default.svc:5601/api/fleet/agent_policies?perPage=20&page=1"}
{"log.level":"debug","@timestamp":"2023-02-20T15:26:01.254Z","log.logger":"agent-controller","message":"Fleet API HTTP request","service.version":"2.6.0+a35bb187","service.type":"eck","ecs.version":"1.4.0","iteration":"19","namespace":"default","agent_name":"fleet-server","method":"GET","url":"https://kibana-kb-http.default.svc:5601/api/fleet/enrollment_api_keys?perPage=20&page=1"}

NOTE that it does take a couple of minutes for the agent pod to show up in the namespace, but it eventually does show up and becomes healthy without any intervention from me.

And I see the agent is green:

❯ kc get agent -n default
NAME           HEALTH   AVAILABLE   EXPECTED   VERSION   AGE
fleet-server   green    1           1          8.6.0     17m

And the pod is running:

❯ kc get pod -n default -l common.k8s.elastic.co/type=agent
NAME                                  READY   STATUS    RESTARTS   AGE
fleet-server-agent-75c649c684-x9hfh   1/1     Running   0          17m
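The "association" credentials mentioned above end up as Secrets in the resource namespace, so a rough way to confirm they have been reconciled (exact secret names are operator internals, hence the grep) is:

# Association/user secrets the operator creates for the Agent's Kibana/Elasticsearch links.
kubectl get secrets -n default | grep -i fleet-server
# The Agent resource's events also show when the ReconciliationError warnings stop.
kubectl describe agent fleet-server -n default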

Now I am curious about the actual values in this block when things fail for you, as you're behind a cloud load balancer:

    xpack.fleet.agents.fleet_server.hosts:
      - "https://fleet-server-agent-http.default.svc"
{% if kit_external_dns != '' %}
      - "https://{{ external_dns }}:6062"
{% endif %}
{% if kit_external_ip != '' and kit_external_dns == '' %}
      - "https://{{ external_ip }}:6062"
{% endif %}
{% if configure_for_offline %}

Could you possibly run the eck-diagnostics tool when things are in this state so we can get a full view of the state of things?
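A minimal sketch of invoking it, assuming the operator runs in elastic-system and the Elastic resources in default (flags vary between releases, so check eck-diagnostics --help):

# Collects operator and resource diagnostics into an archive in the current directory.
eck-diagnostics --operator-namespaces elastic-system --resources-namespaces default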

naemono · Feb 20 '23 15:02

That may be difficult, because once I upgraded to ECK 2.6.1 I haven't seen the issue anymore. I'm able to deploy Fleet Server and it comes up within a minute or so without any action on my part. The template values are just internal and external IPs for the various agents inside and outside the k8s cluster.

legoguy1000 · Feb 20 '23 16:02

I got the same error, even after upgrading my ECK operator from 2.5.0 to 2.6.1!

pochingliu131 · Feb 22 '23 10:02

I got the same error, even after upgrading my ECK operator from 2.5.0 to 2.6.1!

Finally, I found that it was a security setting in my Kibana config; if I remove that setting, the error no longer occurs.

pochingliu131 · Feb 23 '23 05:02

I got the same error, even after upgrading my ECK operator from 2.5.0 to 2.6.1!

Finally, I found that it was a security setting in my Kibana config; if I remove that setting, the error no longer occurs.

What setting?

legoguy1000 · Feb 23 '23 11:02

Ran my above manifests with ECK operator 2.5.0 and had the exact same results: it eventually worked and everything became healthy with no interaction from me. Will eventually close this if no more configuration details come to light from @totoroliu0131.

naemono · Feb 24 '23 15:02

I am still seeing this issue with ECK operator 2.6.2 and ELK 8.6.2. I am hosting on OpenShift 4.11. I saw ECK was bumped to 2.7.0, but it is not available yet on our OpenShift instance; maybe it is not published as certified yet.

gbschenkel · Apr 05 '23 15:04

@gbschenkel 2.7.0 is now available. There was an issue with the certified release on the Red Hat side of things. If you have a similar issue to this, please give full details about your configuration, including fully reproducible manifests so we can replicate the problem.

Thank you

naemono · Apr 06 '23 20:04

Hi all, I have a similar 401 issue. I have deployed Elasticsearch and Kibana (8.9.1) in my K8s cluster, and after that I'm trying to install the custom resource operator, following this doc: https://betterprogramming.pub/managing-elasticsearch-resources-in-kubernetes-39b697908f4e

helm install eck-cr eck-custom-resources/eck-custom-resources-operator: this line created the operator pod and it's healthy.

But when I create this index-template YAML and apply it, I get a 401 unauthorized error.

I changed the Elasticsearch URL details in the values.yaml file and updated the secrets as well. I have cross-checked that the secret I used is correct, but it's still not working. Any suggestions?
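One hedged way to rule out the credentials themselves is to test them directly against Elasticsearch; the names below are placeholders for whatever Secret and Service your values.yaml points at:

# Extract the password referenced in values.yaml and check that it actually authenticates.
PW=$(kubectl get secret <es-credentials-secret> -n <namespace> \
  -o go-template='{{.data.password | base64decode}}')
curl -sk -u "<username>:${PW}" "https://<elasticsearch-service>:9200/_security/_authenticate"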

Esakki1211 · Oct 31 '23 23:10