First replica set member is not adding to existing replica set after failure
Because of the hard-coded member ID (0), after a failure of the first replica set member, the pod cannot add itself back to the existing replica set formed by the other replicas (1+). Instead, the first pod initializes a new replica set.
# Initialize replica set only if we're the first member
if [ "${MEMBER_ID}" = '0' ]; then
initiate "${MEMBER_HOST}"
else
add_member "${MEMBER_HOST}"
fi
The error can be simulated by deleting the first pod and its corresponding PVC. It would be better to determine beforehand whether a replica set already exists (see the sketch below).
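For reference, a hypothetical reproduction, assuming the replica set runs as a StatefulSet named mongodb with a volume claim template named data (adjust the names to your deployment):

# Delete the first pod's PVC; the pvc-protection finalizer keeps it
# around until the pod releases it.
kubectl delete pvc data-mongodb-0 --wait=false
# Delete the first pod; the StatefulSet recreates it with an empty volume.
kubectl delete pod mongodb-0
# On restart MEMBER_ID is 0 again, so the start script runs initiate()
# and creates a second, independent replica set.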
Pull request #305 solves the problem.
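A minimal sketch of such an up-front check (not necessarily the approach taken in #305), assuming the other members' hostnames are available in a MONGODB_REPLICA_HOSTS variable and the mongo shell is on the PATH; replset_exists is a hypothetical helper:

# Hypothetical helper: probe the other members to see whether an
# initialized replica set is already running.
replset_exists() {
  local host
  for host in ${MONGODB_REPLICA_HOSTS}; do
    # rs.status().ok prints 1 on a member of an initialized replica set.
    if [ "$(mongo --host "${host}" --quiet --eval 'rs.status().ok' 2>/dev/null)" = "1" ]; then
      return 0
    fi
  done
  return 1
}

# Join an existing set whenever one is found, regardless of member ID;
# only initiate when no set exists yet.
if replset_exists; then
  add_member "${MEMBER_HOST}"
else
  initiate "${MEMBER_HOST}"
fi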
The error can be simulated by deleting the first pod and its corresponding PVC.
@lehmeyer I understand that deleting the PVC will cause the issue. But did you hit the problem when restarting the primary in production usage (without manually deleting a volume)?
Kubernetes should remount the volume to the restarted pod, so mongod in the container should successfully reconnect to the replica set.
The problem occurs when the data of the first member (member ID 0) no longer exists. A split-brain situation then arises, with two independent replica sets that cannot be joined.
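The split brain can be seen by asking each side for its member list; a hedged illustration, assuming the usual StatefulSet DNS names (mongodb-0.mongodb and so on):

# Member 0 answers with a fresh single-member set:
mongo --host mongodb-0.mongodb --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name) })'
# The remaining members still answer with the original set (members 1+),
# and neither side will accept the other:
mongo --host mongodb-1.mongodb --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name) })'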
The problem occurs when the data of the first member (member ID 0) no longer exists.
I agree that it might be an issue, although I don't know how this situation can happen.
But I think it's quite an edge case, and it might be a bug in Kubernetes. If it's worth fixing, then #305 looks good.
The mongodb container is no longer maintained in this org. Closing.