Failed to add devices in glusterd2
Observed behavior
Failed to add devices in glusterd2 containers
Expected/desired behavior
Device addition should succeed
Details on how to reproduce (minimal and precise)
- Deploy the Kubernetes cluster
- Deploy the glusterd2 StatefulSets
- Deploy the etcd cluster
- Create the services
- Add devices to glusterd2 (a sketch of these steps follows below)
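For context, a minimal sketch of the deployment sequence above, assuming hypothetical manifest file names and a gcs namespace; the actual manifests used are not part of this report:

# Hypothetical manifest names; substitute the manifests actually used.
kubectl create namespace gcs
kubectl apply -n gcs -f glusterd2-statefulset.yaml   # glusterd2 StatefulSet
kubectl apply -n gcs -f etcd-cluster.yaml            # external etcd cluster
kubectl apply -n gcs -f glusterd2-services.yaml      # headless/client services
# Once the pods are Running, add a device to a peer:
glustercli device add <peer-id> /dev/vdb --endpoints=http://gluster-kube3-0.glusterd2.gcs:24007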
Information about the environment:
- Glusterd2 version used (e.g. v4.1.0 or master): v4.1.0-202.gitea0c111
- Operating system used:
- Glusterd2 compiled from sources, as a package (rpm/deb), or container: container
- Using External ETCD: (yes/no, if yes ETCD version): yes
- If container, which container image: docker.io/gluster/glusterd2-nightly:20180920
- Using kubernetes, openshift, or direct install: kubernetes
- If kubernetes/openshift, is gluster running inside kubernetes/openshift or outside: inside
Other useful information
- glusterd2 config files from all nodes (default /etc/glusterd2/glusterd2.toml)
- glusterd2 log files from all nodes (default /var/log/glusterd2/glusterd2.log)
- ETCD configuration
- Contents of uuid.toml from all nodes (default /var/lib/glusterd2/uuid.toml)
- Output of statedump from any one of the nodes
Useful commands
Device add output and logs from glusterd2:
glustercli device add 622d2662-8c86-4d91-9168-4226510555c0 /dev/vdb --endpoints=http://gluster-kube3-0.glusterd2.gcs:24007 -v
ERRO[2018-10-17 06:01:37.322553] device add failed device=/dev/vdb error="etcdserver: requested lease not found" peerid=622d2662-8c86-4d91-9168-4226510555c0
Device add failed
Response headers:
X-Gluster-Peer-Id: 818b25ba-f8dd-4e53-8773-11aabed44284
X-Request-Id: 34498022-1853-4ae8-a55b-807e964715d2
X-Gluster-Cluster-Id: dba3eb85-2f7a-49f5-bb0a-cd2c767a8373
Response body:
etcdserver: requested lease not found
time="2018-10-17 05:55:29.289002" level=error msg="failed to obtain lock" error="etcdserver: requested lease not found" lockID=622d2662-8c86-4d91-9168-4226510555c0 reqid=7f87f8e4-637d-4b38-a706-a9112001be55 source="[lock.go:180:transaction.(*Txn).lock]" txnid=69c13e91-30a1-4e3e-8000-7d8250a43c1c
time="2018-10-17 05:55:29.294549" level=info msg="10.233.64.0 - - [17/Oct/2018:05:55:29 +0000] \"POST /v1/devices/622d2662-8c86-4d91-9168-4226510555c0 HTTP/1.1\" 500 74" reqid=7f87f8e4-637d-4b38-a706-a9112001be55
time="2018-10-17 05:55:29.845253" level=error msg="failed to obtain lock" error="etcdserver: requested lease not found" lockID=622d2662-8c86-4d91-9168-4226510555c0 reqid=3b378e02-db21-4638-b92b-6b27443f2bc3 source="[lock.go:180:transaction.(*Txn).lock]" txnid=a40ab6d7-bf13-4c72-a3ca-91b4606cdcd5
time="2018-10-17 05:55:29.850457" level=info msg="10.233.64.0 - - [17/Oct/2018:05:55:29 +0000] \"POST /v1/devices/622d2662-8c86-4d91-9168-4226510555c0 HTTP/1.1\" 500 74" reqid=3b378e02-db21-4638-b92b-6b27443f2bc3
time="2018-10-17 05:55:37.548739" level=info msg="10.233.65.1 - - [17/Oct/2018:05:55:37 +0000] \"GET /ping HTTP/1.1\" 200 0" reqid=0380d931-cd28-470e-b237-fc4fe3909ebd
glustercli peer list --endpoints=http://gluster-kube3-0.glusterd2.gcs:24007
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
| ID | NAME | CLIENT ADDRESSES | PEER ADDRESSES | ONLINE | PID |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
| 622d2662-8c86-4d91-9168-4226510555c0 | gluster-kube1-0 | gluster-kube1-0.glusterd2.gcs:24007 | gluster-kube1-0.glusterd2.gcs:24008 | no | |
| 818b25ba-f8dd-4e53-8773-11aabed44284 | gluster-kube3-0 | gluster-kube3-0.glusterd2.gcs:24007 | gluster-kube3-0.glusterd2.gcs:24008 | no | |
| cbced881-f25e-46c1-b175-9a87759a90c5 | gluster-kube2-0 | gluster-kube2-0.glusterd2.gcs:24007 | gluster-kube2-0.glusterd2.gcs:24008 | no | |
+--------------------------------------+-----------------+-------------------------------------+-------------------------------------+--------+-----+
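All three peers report ONLINE as "no", so it may also be worth confirming that the glusterd2 pods are still healthy and checking their logs for the lease error; a sketch assuming the pods run in the gcs namespace (inferred from the DNS names above):

kubectl get pods -n gcs -o wide
kubectl logs -n gcs gluster-kube1-0 | grep -i "lease not found"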
https://github.com/gluster/glusterd2/issues/1090
@Madhu-1 Any updates regarding this issue?
@vpandey-RH Nope, this is not easily reproducible.
Are we able to reproduce this with the latest deployment?