Joshua Schmid


This is due to SLES/openSUSE creating the `salt:salt` user and group for the `salt-master`.
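
If the symptom is the master being unable to read files created as `root`, one way to handle it is a state that aligns ownership with the `salt` user. A minimal sketch, assuming the underlying problem is file ownership; the state ID and the path `/srv/pillar/ceph` are illustrative, not taken from this thread:

```yaml
# Illustrative only: ensure the pillar tree is owned by the salt:salt
# user/group that the SLES/openSUSE salt-master runs as.
/srv/pillar/ceph:
  file.directory:
    - user: salt
    - group: salt
    - recurse:
      - user
      - group
```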

A relevant failure (from the minion logs):

```
2018-07-27 14:30:10,067 [salt.loaded.ext.module.helper:64 ][DEBUG ][41364] stderr of PYTHONWARNINGS=ignore ceph-disk -v prepare --bluestore --data-dev --journal-dev --cluster ceph --cluster-uuid 1efc1033-0565-4586-9f79-36906a01bbb0 --block.db...
```

> I have not tried all 20 profiles in qa/osd-config/ovh, but so far I've run around 5-10 of them at random and they all exhibit this same failure. Now, we...

That's even better @smithfarm :)

@akumacxd You might try the `limit` key like you did in the last example.

```yaml
drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 1
  block_db_size: '2G'
```
...

DeepSea will be put into maintenance mode, but there will be a migration path from DeepSea to cephadm.

> Do you mean [this method](https://docs.ceph.com/docs/master/cephadm/adoption/) of converting cluster or there will be a new one?

That's right.

It seems that `ceph.updates.salt` triggers `service.restart` on `salt-minion`, which correctly restarts the salt-minion. Unlike in the previous version, this now causes issues in the transport.

```
2018-11-07...
```
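
One common workaround for restarting a minion from within a state run is to background the restart so the job can still deliver its return over the existing transport. A hedged sketch of that pattern, not the change that was made here; the state ID is illustrative:

```yaml
# Sketch of a common workaround: restart the minion in the background so the
# currently running job can still send its return before the old process exits.
restart-salt-minion:
  cmd.run:
    - name: 'salt-call --local service.restart salt-minion'
    - bg: True
    - order: last
```

With `bg: True` the state does not wait on the command, so the idea is that the highstate return is sent before the minion process is actually replaced.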