amulet
Sentry list IndexError when unit exists
We're seeing this periodically with amulet tests of subordinate charms. Sometimes the test passes; sometimes it raises the traceback below.
00:14:50.545 Traceback (most recent call last):
00:14:50.545 File "./tests/020-basic-wily-liberty", line 8, in <module>
00:14:50.545 deployment = LXDBasicDeployment(series='wily')
00:14:50.545 File "/var/lib/jenkins/checkout/lxd/tests/basic_deployment.py", line 47, in __init__
00:14:50.545 self._initialize_tests()
00:14:50.545 File "/var/lib/jenkins/checkout/lxd/tests/basic_deployment.py", line 128, in _initialize_tests
00:14:50.546 self.lxd1_sentry = self.d.sentry['lxd'][1]
00:14:50.546 IndexError: list index out of range
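The failing line assumes the sentry list for a subordinate service is indexed by unit number. A minimal sketch of a guard for _initialize_tests in basic_deployment.py that would surface the mismatch with a clear message instead of tracing (the length check and error message are ours, not in the current test):

    lxd_sentries = self.d.sentry['lxd']
    if len(lxd_sentries) < 2:
        # juju status shows lxd/0 and lxd/1, but amulet returned fewer
        # sentries, so a fixed index of [1] traces with IndexError.
        raise RuntimeError(
            'expected 2 lxd sentries, got %d' % len(lxd_sentries))
    self.lxd1_sentry = lxd_sentries[1]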
[Services]
NAME STATUS EXPOSED CHARM
glance active false local:wily/glance-150
keystone active false local:wily/keystone-0
lxd false local:wily/lxd-1
mysql unknown false local:wily/mysql-326
nova-cloud-controller active false local:wily/nova-cloud-controller-501
nova-compute active false local:wily/nova-compute-133
rabbitmq-server active false local:wily/rabbitmq-server-150
[Units]
ID WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
glance/0 active idle 1.25.3 1 9292/tcp 172.17.119.159 Unit is ready
keystone/0 active idle 1.25.3 2 172.17.119.160 Unit is ready
mysql/0 unknown idle 1.25.3 3 3306/tcp 172.17.119.161
nova-cloud-controller/0 active idle 1.25.3 4 8774/tcp 172.17.119.162 Unit is ready
nova-compute/0 active idle 1.25.3 5 172.17.119.163 Unit is ready
lxd/1 active idle 1.25.3 172.17.119.163 Unit is ready
nova-compute/1 active idle 1.25.3 6 172.17.119.164 Unit is ready
lxd/0 active idle 1.25.3 172.17.119.164 Unit is ready
rabbitmq-server/0 active idle 1.25.3 7 5672/tcp 172.17.119.165 Unit is ready
[Machines]
ID STATE VERSION DNS INS-ID SERIES HARDWARE
0 started 1.25.3 172.17.119.158 31eec3fc-2b9b-4857-8c63-755e0cee165b trusty arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
1 started 1.25.3 172.17.119.159 4c18571a-a3fc-4f7d-aa61-181247f0ebd2 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
2 started 1.25.3 172.17.119.160 26d334ab-8536-4f88-b455-79039aa37091 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
3 started 1.25.3 172.17.119.161 e5828c44-f1d0-402e-a118-90838cf4d569 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
4 started 1.25.3 172.17.119.162 a5fbc4f8-d4d5-4542-a25e-1b393274f5c0 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
5 started 1.25.3 172.17.119.163 8c6a4d2a-a8f5-4b16-b177-3fe921d89fb7 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
6 started 1.25.3 172.17.119.164 4410ade1-88fc-4c90-b10b-cc13fc7e6e00 wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
7 started 1.25.3 172.17.119.165 1cb5eb52-b9bc-49a3-a470-fee00d20460b wily arch=amd64 cpu-cores=1 mem=1536M root-disk=10240M availability-zone=nova
Full juju status output (YAML): juju-stat-yaml-collect.yaml.txt
Reference: https://github.com/openstack/charm-lxd/blob/master/tests/basic_deployment.py#L127
When the principal and subordinate unit numbers match, everything passes and we don't observe this issue. A passing run looks like this (see the workaround sketch after this listing):
nova-compute/0 active idle 1.25.3 5 172.17.114.114 Unit is ready
lxd/0 active idle 1.25.3 172.17.114.114 Unit is ready
nova-compute/1 active idle 1.25.3 6 172.17.114.115 Unit is ready
lxd/1 active idle 1.25.3 172.17.114.115 Unit is ready
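Until amulet handles subordinate ordering reliably, one possible workaround is to look a subordinate up by its colocated principal's public address rather than by list position. A minimal sketch, assuming amulet's UnitSentry exposes the juju status fields via unit.info (the helper name subordinate_sentry is ours):

    def subordinate_sentry(deployment, principal_sentry, sub_service):
        # Match the subordinate to its principal by public address, since
        # subordinate unit numbers need not line up with the principal's.
        addr = principal_sentry.info['public-address']
        for unit in deployment.sentry[sub_service]:
            if unit.info['public-address'] == addr:
                return unit
        raise LookupError('no %s unit found on %s' % (sub_service, addr))

    # e.g. in _initialize_tests, instead of self.d.sentry['lxd'][1]:
    self.compute1_sentry = self.d.sentry['nova-compute'][1]
    self.lxd1_sentry = subordinate_sentry(self.d, self.compute1_sentry, 'lxd')

This keeps the test deterministic regardless of which subordinate unit number lands on which machine.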