canonical-kubernetes spell seems to be hung for more than 2 hours
Running the canonical-kubernetes spell with conjure-up installed from the edge channel got stuck at "Waiting for deployment to settle" for more than 2 hours, and I eventually had to abort. At that point, juju status showed:
Model                         Controller          Cloud/Region   Version  SLA
conjure-canonical-kubern-674  conjure-up-aws-05e  aws/us-east-1  2.3.3    unsupported

App  Version  Status  Scale  Charm  Store  Rev  OS  Notes

Unit  Workload  Agent  Machine  Public address  Ports  Message

Machine  State    DNS             Inst id              Series  AZ          Message
0        started  34.203.247.75   i-0e0685169081a789e  xenial  us-east-1b  running
1        started  34.201.125.104  i-0fd9bb88875b710f0  xenial  us-east-1d  running
2        started  35.171.228.110  i-0667df8884bda0393  xenial  us-east-1a  running
3        started  34.204.53.188   i-07375c94e2bcc7715  xenial  us-east-1c  running
4        started  52.91.209.236   i-0f09df185ad8c0809  xenial  us-east-1e  running
5        started  52.91.151.68    i-082693342173730f4  xenial  us-east-1e  running
6        started  52.23.166.237   i-0189f3b322af0a57e  xenial  us-east-1b  running
7        started  34.207.201.7    i-037a8cdff6cae175c  xenial  us-east-1d  running
8        started  54.164.4.12     i-0c5ad20f2e0675dd6  xenial  us-east-1c  running
Versions:
$ which conjure-up
/snap/bin/conjure-up
$ conjure-up --version
conjure-up 2.5.5
$ juju --version
2.3.3-xenial-amd64
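For reference, the deployment was started roughly as follows (the exact install command and invocation below are my reconstruction from the description above, not a verbatim record):
$ sudo snap install conjure-up --classic --edge
$ conjure-up canonical-kubernetes
(and then AWS was selected as the cloud in the conjure-up UI)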
I don't suppose you still have the ~/.cache/conjure-up/conjure-up.log file from this deployment? It's very strange to see the machines up and running without any applications or units; it suggests there was an error in conjure-up that perhaps didn't get shown. That, or a failure in the Juju controller, so the /var/log/juju/machine-0.log file from machine 0 on the controller model would be very helpful as well.
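If you hit this again, something along these lines should capture both files (a rough sketch; it assumes the controller is still called conjure-up-aws-05e and that your user can still reach the controller machine with juju):
$ cp ~/.cache/conjure-up/conjure-up.log .
$ juju ssh -m conjure-up-aws-05e:controller 0 'sudo cat /var/log/juju/machine-0.log' > machine-0.log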
You are right that I do not have the logs now. I did look at conjure-up.log at the time and there was nothing unusual in it, but if I encounter the same situation again, I will include the contents here. Thanks.