mongo-k8s-sidecar

Mongo error because node found twice

Open DatzAtWork opened this issue 6 years ago • 4 comments

jira-mongo-0 is found twice: once as jira-mongo-0:27017 and once as jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017. This causes a mongo error.
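MongoDB refuses any replica set configuration in which two different member host strings resolve to the same mongod, and that is exactly the situation here: inside the namespace, the bare pod name jira-mongo-0 is simply the first DNS label of the pod's FQDN. A minimal sketch of that collision (sameNode is a hypothetical helper for illustration, not part of the sidecar):

```javascript
// Hypothetical helper: decide whether two member host strings would
// resolve to the same pod inside the cluster. Within a Kubernetes
// namespace, a bare pod name is the first DNS label of the pod's FQDN.
function sameNode(hostA, hostB) {
  const [nameA, portA = '27017'] = hostA.split(':');
  const [nameB, portB = '27017'] = hostB.split(':');
  if (portA !== portB) return false;
  // Compare only the first DNS label of each name.
  return nameA.split('.')[0] === nameB.split('.')[0];
}

console.log(sameNode(
  'jira-mongo-0:27017',
  'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017'
));
// true -> mongod raises NewReplicaSetConfigurationIncompatible (code 103)
```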

Here are the logs of the sidecar on jira-mongo-0. In the beginning, when only one pod is found:

$ kubectl.exe logs jira-mongo-0 mongo-sidecar -f

: [email protected] start /opt/cvallance/mongo-k8s-sidecar
: forever src/index.js

warn:    --minUptime not set. Defaulting to: 1000ms
warn:    --spinSleepTime not set. Your script will exit if it does not stay up for at least 1000ms
Using mongo port: 27017
Starting up mongo-k8s-sidecar
The cluster domain 'kubernetes.local' was successfully verified.
Addresses to add:     [ 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530703849 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530703885 },
     signature: { hash: [Binary], keyId: 0 } } }

Later, when the second pod is found, it says:

Addresses to add:     [ 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017',
  'jira-mongo-1.jira-mongo.fapoms-training.svc.kubernetes.local:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1,
       host: 'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017' },
     { _id: 2,
       host: 'jira-mongo-1.jira-mongo.fapoms-training.svc.kubernetes.local:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530704647 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1530704647 },
     signature: { hash: [Binary], keyId: 0 } } }
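
One way to recover (a sketch, assuming the stable DNS name shown in the logs above is the one to keep and the bare hostname is the stale entry) is to patch member 0's host in the output of rs.conf() and feed the result to rs.reconfig(cfg, { force: true }) in the mongo shell. The patching step itself is plain JavaScript; patchMemberHost is a hypothetical helper:

```javascript
// Sketch: rewrite one member's host before forcing a reconfig.
function patchMemberHost(cfg, memberId, newHost) {
  // Deep-copy so the original config object is left untouched.
  const patched = JSON.parse(JSON.stringify(cfg));
  const member = patched.members.find(m => m._id === memberId);
  if (!member) throw new Error(`no member with _id ${memberId}`);
  member.host = newHost;
  // replSetReconfig expects a version higher than the current one
  // (recent shells may bump it for you).
  patched.version += 1;
  return patched;
}

// Shape taken from the rs.conf() output in the logs above (trimmed).
const cfg = {
  _id: 'rs0',
  version: 1,
  members: [{ _id: 0, host: 'jira-mongo-0:27017' }]
};

const fixed = patchMemberHost(
  cfg,
  0,
  'jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017'
);
console.log(fixed.members[0].host);
// In the mongo shell: rs.reconfig(fixed, { force: true })
```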

DatzAtWork avatar Jul 04 '18 11:07 DatzAtWork

The same issue occurs when KUBERNETES_MONGO_SERVICE_NAME is not set:

Addresses to add:     [ '10.6.26.135:27017', '10.6.26.136:27017' ]
Addresses to remove:  []
replSetReconfig { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
  members:
   [ { _id: 0,
       host: 'jira-mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1, host: '10.6.26.135:27017' },
     { _id: 2, host: '10.6.26.136:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: -1,
     catchUpTakeoverDelayMillis: 30000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5b3c9ff15749b1964879d023 } }
Error in workloop { MongoError: The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:180:13)
    at addChunk (_stream_readable.js:269:12)
    at readableAddChunk (_stream_readable.js:256:11)
    at Socket.Readable.push (_stream_readable.js:213:10)
    at TCP.onread (net.js:578:20)
  name: 'MongoError',
  message: 'The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg: 'The hosts jira-mongo-0:27017 and 10.6.26.135:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible',
  operationTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1531122214 },
  '$clusterTime':
   { clusterTime: Timestamp { _bsontype: 'Timestamp', low_: 1, high_: 1531122214 },
     signature: { hash: [Binary], keyId: 0 } } }

DatzAtWork avatar Jul 09 '18 07:07 DatzAtWork

Same with me. Is there any solution for this? I think the problem is here: https://github.com/cvallance/mongo-k8s-sidecar/blob/770b1cd4f772aa105c6303398478bc1e52f27e80/src/lib/mongo.js#L80

neeraj9194 avatar Oct 29 '18 07:10 neeraj9194

Turns out it was my mistake all along. I was using a PersistentVolumeClaim template and a PersistentVolume that were not deleted when the StatefulSet was replaced, so the pod picked up the replica set "rs0" from an earlier deployment with the wrong hostname. If you don't see initial setup entries like the ones below in the logs, it might be picking up an old configuration.

The cluster domain 'cluster.local' was successfully verified.
Pod has been elected for replica set initialization
initReplSet 10.16.0.109:27017
initial rsConfig is { _id: 'rs0',
  version: 1,
  protocolVersion: 1,
......

neeraj9194 avatar Oct 30 '18 10:10 neeraj9194

I can reproduce the issue on a new minikube installation. I had to add a clusterrolebinding to avoid permission issues in the sidecars:

kubectl create clusterrolebinding default-admin --clusterrole cluster-admin --serviceaccount=default:default

Alternatively, https://github.com/cvallance/mongo-k8s-sidecar/pull/86 would fix this. However, after adding the binding and following the instructions, I get 3 mongo pods with sidecars, and the sidecars all throw errors like:

       host: 'mongo-0:27017',
       arbiterOnly: false,
       buildIndexes: true,
       hidden: false,
       priority: 1,
       tags: {},
       slaveDelay: 0,
       votes: 1 },
     { _id: 1, host: '172.17.0.12:27017' },
     { _id: 2, host: '172.17.0.13:27017' },
     { _id: 3, host: '172.17.0.14:27017' } ],
  settings:
   { chainingAllowed: true,
     heartbeatIntervalMillis: 2000,
     heartbeatTimeoutSecs: 10,
     electionTimeoutMillis: 10000,
     catchUpTimeoutMillis: 60000,
     getLastErrorModes: {},
     getLastErrorDefaults: { w: 1, wtimeout: 0 },
     replicaSetId: 5c02af9574c38afd25d6604f } }
Error in workloop { MongoError: The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0
    at Function.MongoError.create (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/error.js:31:11)
    at /opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:497:72
    at authenticateStragglers (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:443:16)
    at Connection.messageHandler (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:477:5)
    at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:333:22)
    at Socket.emit (events.js:182:13)
    at addChunk (_stream_readable.js:287:12)
    at readableAddChunk (_stream_readable.js:268:11)
    at Socket.Readable.push (_stream_readable.js:223:10)
    at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:122:17)
  name: 'MongoError',
  message:
   'The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0',
  ok: 0,
  errmsg:
   'The hosts mongo-0:27017 and 172.17.0.12:27017 all map to this node in new configuration version 2 for replica set rs0',
  code: 103,
  codeName: 'NewReplicaSetConfigurationIncompatible' } 

The issue seems to arise because mongo-0 does not identify itself by its IP when found via the Kubernetes client; this is described in the README:

... make sure that:

    the names of the mongo nodes are their IPs
    the names of the mongo nodes are their stable network IDs (for more info see the link above)

In the logs above this is not the case.
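
For reference, the stable network ID a StatefulSet pod gets from a headless service has the shape <pod>.<service>.<namespace>.svc.<cluster-domain>, and the sidecar can only match members against it when KUBERNETES_MONGO_SERVICE_NAME points at that service. A sketch of the construction, with names taken from the logs earlier in this thread:

```javascript
// Sketch: build the stable network ID a StatefulSet pod gets from a
// headless service: <pod>.<service>.<namespace>.svc.<cluster-domain>.
// All values below are illustrative, taken from the logs in this thread.
function stableNetworkId(pod, service, namespace, clusterDomain, port) {
  return `${pod}.${service}.${namespace}.svc.${clusterDomain}:${port}`;
}

console.log(stableNetworkId(
  'jira-mongo-0', 'jira-mongo', 'fapoms-training', 'kubernetes.local', 27017
));
// jira-mongo-0.jira-mongo.fapoms-training.svc.kubernetes.local:27017
```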

Skeen avatar Dec 01 '18 20:12 Skeen