nats-streaming-operator

Failed to start: discovered another streaming server with cluster ID "example-stan"

Open veerapatyok opened this issue 5 years ago • 25 comments

I got an error when I deployed a NatsStreamingCluster:

[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"

I use GKE

Full log:

[1] 2019/12/26 07:16:45.747712 [INF] STREAM: ServerID: JTmPHIR4BFp2ZuAWkekcIl
[1] 2019/12/26 07:16:45.747715 [INF] STREAM: Go version: go1.11.13
[1] 2019/12/26 07:16:45.747717 [INF] STREAM: Git commit: [910d6e1]
[1] 2019/12/26 07:16:45.760913 [INF] STREAM: Recovering the state...
[1] 2019/12/26 07:16:45.761073 [INF] STREAM: No recovered state
[1] 2019/12/26 07:16:45.762399 [INF] STREAM: Shutting down.
[1] 2019/12/26 07:16:45.762521 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"

veerapatyok avatar Dec 26 '19 07:12 veerapatyok

I'm getting the same error, on a dev k3d cluster.

makkus avatar Jan 08 '20 23:01 makkus

I'm also seeing this error when installing via the instructions here: https://github.com/nats-io/nats-streaming-operator#deploying-a-nats-streaming-cluster

timjkelly avatar Jan 13 '20 15:01 timjkelly

It's not working any more. Same error.

bfalese-navent avatar Jan 16 '20 18:01 bfalese-navent

Stuck with the same problem: only one replica of the NATS Streaming pod is working. All the others exit with the same error.

[FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "example-stan"

dannylesnik avatar Jan 22 '20 09:01 dannylesnik

Having the same issue

kelvin-yue-scmp avatar Jan 31 '20 02:01 kelvin-yue-scmp

Same

maertu avatar Feb 04 '20 15:02 maertu

I have a temporary solution: I created nats-streaming-cluster.yaml and added the following inside the file:

config:
    debug: true

nats-streaming-cluster.yaml

---
apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "example-stan"
spec:
  # Number of nodes in the cluster
  size: 3

  # NATS Streaming Server image to use, by default
  # the operator will use a stable version
  #
  image: "nats-streaming:latest"

  # Service to which NATS Streaming Cluster nodes will connect.
  #
  natsSvc: "example-nats"

  config:
    debug: true

veerapatyok avatar Feb 04 '20 16:02 veerapatyok

I switched to KubeMQ.

veerapatyok avatar Mar 17 '20 06:03 veerapatyok

Any update on this issue? Same behaviour on EKS. If I keep retrying it eventually works; however, when a pod restarts it starts happening again.

hasanovkhalid avatar Apr 20 '20 17:04 hasanovkhalid

the same issue for me

sneerin avatar May 19 '20 03:05 sneerin

After trying the config above I get the error: [FTL] STREAM: Failed to start: failed to join Raft group example-stan. I am able to create a working nats+stan configuration by using the statefulsets here: https://docs.nats.io/nats-on-kubernetes/minimal-setup#ha-setup-using-statefulsets

lundbird avatar May 21 '20 21:05 lundbird

Same problem, and adding "debug: true" worked for me once, but behaved unpredictably on the next few attempts (I had to delete and apply the cluster a few times).

For my configuration I suspect it may be a timing issue, with NATS Streaming racing my Envoy proxy sidecar (I have Istio installed in my cluster), and that by adding "debug: true" NATS Streaming takes a bit longer to boot up, giving Envoy enough time to be ready. It's a bit tricky to debug, as the images are based on scratch with no real way to inject a sleep into the image cmd.
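If the sidecar race is the cause, one option is a hedged sketch assuming Istio 1.7 or later: the mesh-wide holdApplicationUntilProxyStarts setting makes sidecar-injected pods wait for Envoy before the app container starts, which sidesteps needing a sleep in the image.

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      # Hold the application container until the Envoy sidecar is ready.
      holdApplicationUntilProxyStarts: true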

Am I the only one using Istio, or is this a common theme?

drshade avatar Jun 17 '20 13:06 drshade

I have the same issue, can anyone help?

[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: ZfhJYXPEJEzpUKNLHWlD0F
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Shutting down.
[1] [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "stan-service"

lanox avatar Jul 23 '20 09:07 lanox

Same issue here.

Describing the pods created by the nats-streaming-operator, I see the command-line args setting the cluster ID as follows:

$ kubectl describe pod -n mynamespace stan-cluster-2

Name:         stan-cluster-2
Containers:
  stan:
    Image:         nats-streaming:0.18.0
    Command:
      /nats-streaming-server
      -cluster_id
      stan-cluster
      -nats_server
      nats://nats-cluster:4222
      -m
      8222
      -store
      file
      -dir
      store
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1

Pod 1 runs OK. Then pods 2 and 3 try to run with the same cluster ID and fail because it's already in use (by pod 1).

What is the correct way for the nats-streaming-operator to assign cluster IDs to the cluster servers? Is there some config I'm missing here?

PS: I'm not mounting any volumes in the pod spec yet.

hbobenicio avatar Jul 23 '20 19:07 hbobenicio

Maybe this line is a clue to what's happening: https://github.com/nats-io/nats-streaming-operator/blob/079120fc31b6c10d041c4f594d9d4bd9d78ededa/internal/operator/controller.go#L379

isn't it supposed to be pod.Name or something?

hbobenicio avatar Jul 23 '20 19:07 hbobenicio

I downloaded the code, changed o.Name to pod.Name, and added some logs to compare both values. I built the image with Docker and redeployed the operator in my minikube... this is what follows:

$ kubectl logs -n poc nats-streaming-operator-5d4777f476-2wf7n

time="2020-07-23T20:25:22Z" level=info msg="cluster name: stan-cluster" # this is the o.Name
time="2020-07-23T20:25:22Z" level=info msg="pod name: stan-cluster-2" # this is the pod.Name

Now the cluster ID is correctly set for the pods:

$ kubectl logs -n poc stan-cluster-2 # stan-cluster-2 is the correct cluster-id!

[1] 2020/07/23 20:27:22.126726 [INF] STREAM: Starting nats-streaming-server[stan-cluster-2] version 0.18.0 

and all servers are ready.

hbobenicio avatar Jul 23 '20 20:07 hbobenicio

@hbobenicio that is what I have done and it seems to work, although I am not sure how to validate that all 3 nodes are functioning correctly.

I can see 3 nodes being connected, but that is about it.

Is there a way to check which node is receiving?

lanox avatar Jul 23 '20 21:07 lanox

@lanox there are some ways to test it... a quick test would be to run nats-box on the cluster and send/receive some messages, or maybe write a small test app and run it on your cluster. Try checking the logs, and try some chaos testing as a last resort.
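A minimal smoke test along those lines, assuming the example-nats service and the "example-stan" cluster ID used earlier in this thread, and assuming your nats-box image ships the stan-sub/stan-pub example clients (adjust names to your deployment):

# Start an interactive nats-box pod in the cluster.
kubectl run -i --rm --tty nats-box --image=synadia/nats-box --restart=Never

# Inside nats-box, subscribe in one session...
stan-sub -s nats://example-nats:4222 -c example-stan -id test-sub foo

# ...and publish from another; the subscriber should receive "hello".
stan-pub -s nats://example-nats:4222 -c example-stan foo hello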

Good to know that it worked for you too

hbobenicio avatar Jul 24 '20 00:07 hbobenicio

My bad... I mixed up the concept of cluster_id (o.Name is actually correct) with cluster_node_id. The bug is somewhere else, below here:

https://github.com/nats-io/nats-streaming-operator/blob/079120fc31b6c10d041c4f594d9d4bd9d78ededa/internal/operator/controller.go#L399 https://github.com/nats-io/nats-streaming-operator/blob/079120fc31b6c10d041c4f594d9d4bd9d78ededa/internal/operator/controller.go#L402 https://github.com/nats-io/nats-streaming-operator/blob/079120fc31b6c10d041c4f594d9d4bd9d78ededa/internal/operator/controller.go#L404

My YAML describing the NatsStreamingCluster doesn't have a config entry, so isClustered fails and the cluster_node_id never gets set.
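In other words, the failure mode seems to be roughly this (a hypothetical Go sketch, not the operator's actual code; the type and function names are illustrative):

// stanArgs sketches how a pod's command line could end up without the
// per-node -cluster_node_id flag when the manifest has no config entry.
type stanSpec struct {
    Size   int32
    Config *struct{ Debug bool }
}

func stanArgs(spec stanSpec, clusterName, podName string) []string {
    args := []string{"-cluster_id", clusterName}
    // Without a `config` entry, Config is nil, so even with Size > 1 the
    // clustered flags are never appended and every pod starts with only the
    // shared cluster ID, tripping the "discovered another streaming server" check.
    if spec.Config != nil && spec.Size > 1 {
        args = append(args, "-clustered", "-cluster_node_id", podName)
    }
    return args
}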

hbobenicio avatar Jul 24 '20 01:07 hbobenicio

Thanks for looking into this. I think it looked like it was working because it was running as individual nodes rather than as clustered nodes? Hence I was saying I am not sure if it worked as it's supposed to; however, I could be wrong.

lanox avatar Jul 24 '20 01:07 lanox

@hbobenicio so this is what fixed the problem for me: I added ft_group_name: "production-cluster" in my config section, which told the streaming operator that it is running in fault-tolerance mode, with one single node active while the other 2 are in standby mode.
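For reference, a hedged sketch of a full manifest with that setting, reusing the names from this thread; the exact key under config should be verified against your operator/CRD version (the comment above spells it ft_group_name):

apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "stan-service"
spec:
  size: 3
  image: "nats-streaming:0.16.2"
  natsSvc: "nats-service"
  config:
    # Key spelling is an assumption taken from the comment above; check your CRD.
    ft_group_name: "production-cluster"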

This is what I did to test.

 kubectl get pods
NAME                                       READY   STATUS    RESTARTS   AGE
nats-operator-58644766bf-hpx9p             1/1     Running   1          24h
nats-service-1                             1/1     Running   0          15m
nats-service-2                             1/1     Running   0          15m
nats-service-3                             1/1     Running   0          15m
nats-streaming-operator-56d59c9846-l6qlm   1/1     Running   0          52m
stan-service-1                             1/1     Running   1          15m
stan-service-2                             1/1     Running   0          15m
stan-service-3                             1/1     Running   0          15m

then

kubectl logs stan-service-1 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: ZwZuUeXKPK3Y7OjI7R1hLd
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode
[1] [INF] STREAM: Server is active
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Message store is FILE
[1] [INF] STREAM: Store location: store
[1] [INF] STREAM: ---------- Store Limits ----------
[1] [INF] STREAM: Channels:            unlimited
[1] [INF] STREAM: --------- Channels Limits --------
[1] [INF] STREAM:   Subscriptions:     unlimited
[1] [INF] STREAM:   Messages     :     unlimited
[1] [INF] STREAM:   Bytes        :     unlimited
[1] [INF] STREAM:   Age          :        1h0m0s
[1] [INF] STREAM:   Inactivity   :     unlimited *
[1] [INF] STREAM: ----------------------------------
[1] [INF] STREAM: Streaming Server is ready

Then I deleted stan-service-1

Then I checked which of the other nodes had become the active server:

 kubectl logs stan-service-2 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: BlQLxnAFPv7yf7uaWdXsa9
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode
kubectl logs stan-service-3 -c stan
[1] [INF] STREAM: Starting nats-streaming-server[stan-service] version 0.16.2
[1] [INF] STREAM: ServerID: B3niCweLpvzSewgx3mUsJ9
[1] [INF] STREAM: Go version: go1.11.13
[1] [INF] STREAM: Git commit: [910d6e1]
[1] [INF] STREAM: Starting in standby mode
[1] [INF] STREAM: Server is active
[1] [INF] STREAM: Recovering the state...
[1] [INF] STREAM: No recovered state
[1] [INF] STREAM: Message store is FILE
[1] [INF] STREAM: Store location: store
[1] [INF] STREAM: ---------- Store Limits ----------
[1] [INF] STREAM: Channels:            unlimited
[1] [INF] STREAM: --------- Channels Limits --------
[1] [INF] STREAM:   Subscriptions:     unlimited
[1] [INF] STREAM:   Messages     :     unlimited
[1] [INF] STREAM:   Bytes        :     unlimited
[1] [INF] STREAM:   Age          :        1h0m0s
[1] [INF] STREAM:   Inactivity   :     unlimited *
[1] [INF] STREAM: ----------------------------------
[1] [INF] STREAM: Streaming Server is ready

and stan-service-1 is showing standby.

I think the documentation needs to be updated as well as the example deployments.

lanox avatar Jul 24 '20 05:07 lanox

My bad... I mixed up the concept of cluster_id (o.Name is actually correct) with cluster_node_id. The bug is somewhere else, below here:

https://github.com/nats-io/nats-streaming-operator/blob/079120fc31b6c10d041c4f594d9d4bd9d78ededa/internal/operator/controller.go#L404

My YAML describing the NatsStreamingCluster doesn't have a config entry, so isClustered fails and the cluster_node_id never gets set.

Oh, and it seems you can only run it in cluster mode or FT mode, but not both together.

lanox avatar Jul 24 '20 05:07 lanox

Yeah, they are mutually exclusive modes. My use case is for cluster mode.

I think those mode checks could be improved, or, if the config object in the spec is really necessary, a validation that reports when it is missing would be a better error. But I still think the best approach is for it to work even without the config entry.

So, until the fix is made, this is the workaround:

If you have a YAML without a config entry, just put an empty config entry like this:

apiVersion: "streaming.nats.io/v1alpha1"
kind: "NatsStreamingCluster"
metadata:
  name: "my-stan-cluster"
  namespace: ${NAMESPACE}
spec:
  size: ${CLUSTER_SIZE}
  image: "nats-streaming:0.18.0"
  natsSvc: ${NATS_CLUSTER_NAME}

  # Here... without a config entry, isClustered is false even with spec.Size > 1.
  # Just put an empty config
  config: {}

hbobenicio avatar Jul 24 '20 11:07 hbobenicio

@hbobenicio @wallyqs Hello. I'm getting the same error using the STAN Helm chart:

[1] 2020/09/11 15:51:26.922551 [INF] STREAM: Starting nats-streaming-server[stan] version 0.18.0
[1] 2020/09/11 15:51:26.922673 [INF] STREAM: ServerID: PWeRnm2bTpcMaHZatM8MdC
[1] 2020/09/11 15:51:26.922678 [INF] STREAM: Go version: go1.14.4
[1] 2020/09/11 15:51:26.922681 [INF] STREAM: Git commit: [026e3a6]
[1] 2020/09/11 15:51:26.951206 [INF] STREAM: Recovering the state...
[1] 2020/09/11 15:51:26.953525 [INF] STREAM: Recovered 0 channel(s)
[1] 2020/09/11 15:51:26.961610 [INF] STREAM: Shutting down.
[1] 2020/09/11 15:51:26.962248 [FTL] STREAM: Failed to start: discovered another streaming server with cluster ID "stan"

My values:

stan:
  replicas: 3

  nats:
    url: nats://nats.nats:4222

  store:
    ...:

  cluster:
    enabled: true

  sql:
    ...:

sergeyshaykhullin avatar Sep 11 '20 15:09 sergeyshaykhullin

Hi @sergeyshaykhullin, I think this is an error from the Helm charts? Btw, I think the problem is that it's missing the ft definition: https://github.com/nats-io/k8s/tree/master/helm/charts/stan#fault-tolerance-mode

  ft:
    group: "stan"

wallyqs avatar Sep 11 '20 16:09 wallyqs