ceph-salt
Deploy Ceph clusters using cephadm
The `/cephadm_bootstrap/mon_ip` option is only used for the initial deployment and is no longer needed afterwards. However, I get an error when trying to execute `ceph-salt apply` without `/cephadm_bootstrap/mon_ip`: ```...
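A minimal sketch of the fix being asked for, assuming a hypothetical validation step (the function and config keys are illustrative, not the actual ceph-salt internals): require `mon_ip` only while the cluster has not been bootstrapped yet.

```python
# Hypothetical sketch: only require /cephadm_bootstrap/mon_ip when the
# cluster has not been bootstrapped yet. mon_ip is consumed only by
# `cephadm bootstrap`, so once a cluster exists its absence should not
# block `ceph-salt apply`.

def validate_config(config, cluster_bootstrapped):
    """Return a list of validation error strings (empty if config is OK)."""
    errors = []
    bootstrap = config.get("cephadm_bootstrap", {})
    if not cluster_bootstrapped and not bootstrap.get("mon_ip"):
        errors.append("/cephadm_bootstrap/mon_ip must be set before bootstrap")
    return errors
```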
- `ceph-salt update`
- `ceph-salt reboot`
- `ceph-salt stop`
- `ceph-salt purge`
- ...

Relates to: https://github.com/ceph/ceph-salt/issues/279
When running `ceph-salt update`, I do see that the update process is ongoing and eventually finishes: ![Screenshot from 2020-09-09 13-31-18](https://user-images.githubusercontent.com/263427/92593989-9d369080-f291-11ea-97e0-e815a8c10690.png) However, I do not see a summary of what packages...
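A sketch of what printing such a summary could look like, assuming a Salt `pkg.upgrade`-style return that maps package names to old/new versions per minion (the function name and data shape are illustrative assumptions):

```python
# Hypothetical sketch: format a per-minion summary of updated packages
# after `ceph-salt update`, assuming results shaped like
# {minion_id: {pkg_name: {"old": "...", "new": "..."}}}.

def format_update_summary(results):
    lines = []
    for minion, pkgs in sorted(results.items()):
        lines.append("{}: {} package(s) updated".format(minion, len(pkgs)))
        for pkg, vers in sorted(pkgs.items()):
            lines.append("  {}: {} -> {}".format(pkg, vers["old"], vers["new"]))
    return "\n".join(lines)
```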
After `ceph-salt apply` is executed, we should print the dashboard URL:

```
# ceph mgr services
```
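`ceph mgr services` emits JSON mapping service names to endpoints (e.g. `{"dashboard": "https://node1:8443/"}`), so a sketch of extracting the URL to print could look like this (the helper name is illustrative):

```python
import json

# Hypothetical sketch: pull the dashboard URL out of the JSON output of
# `ceph mgr services` so it can be printed at the end of `ceph-salt apply`.

def dashboard_url(mgr_services_json):
    """Return the dashboard URL, or None if the dashboard is not enabled."""
    services = json.loads(mgr_services_json)
    return services.get("dashboard")
```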
Do not enable registries_conf unless the customer has explicitly specified a configuration for it
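A sketch of the requested behaviour, assuming a hypothetical renderer that receives the user's registry entries: return nothing (and therefore manage no file) when no registries are configured. Names and the TOML layout are illustrative.

```python
# Hypothetical sketch: only emit a registries.conf payload when the user
# actually configured registries; otherwise leave the host untouched.

def render_registries_conf(registries):
    """registries: list of dicts like {"location": str, "insecure": bool}.

    Returns the file contents, or None if nothing was configured.
    """
    if not registries:
        return None  # nothing configured -> do not manage the file
    chunks = []
    for reg in registries:
        chunks.append("[[registry]]")
        chunks.append('location = "{}"'.format(reg["location"]))
        chunks.append("insecure = {}".format(str(reg.get("insecure", False)).lower()))
        chunks.append("")
    return "\n".join(chunks).rstrip() + "\n"
```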
ceph-salt should automatically remove all minion roles when a minion is removed from `/ceph_cluster/minions`.
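The requested cascade could be sketched as follows, assuming a simple in-memory model of the cluster config (the data shape is illustrative, not the actual ceph-salt representation):

```python
# Hypothetical sketch: when a minion is removed from /ceph_cluster/minions,
# also drop it from every role set so no stale role assignments remain.

def remove_minion(cluster, minion_id):
    """cluster: {"minions": set, "roles": {role_name: set of minion ids}}."""
    cluster["minions"].discard(minion_id)
    for role_members in cluster["roles"].values():
        role_members.discard(minion_id)
```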
Currently `ceph-salt status` checks that the configuration entered by the user is internally coherent, and `ceph-salt apply` checks the actual nodes for problematic states that are known to make `ceph-salt...
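One way to unify the two kinds of checks could be sketched like this, treating config-coherence checks and live node checks uniformly as callables (an illustrative structure, not ceph-salt's actual check framework):

```python
# Hypothetical sketch: run both config-coherence checks (what
# `ceph-salt status` does) and live node checks (what `ceph-salt apply`
# does) in one pass, collecting every failure.

def run_all_checks(config_checks, node_checks):
    """Each argument is a list of zero-argument callables that return an
    error string on failure or None on success."""
    failures = []
    for check in list(config_checks) + list(node_checks):
        result = check()
        if result is not None:
            failures.append(result)
    return failures
```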
`ceph-salt apply` is known to fail in odd ways when running in an environment with poor network connectivity. These failures can be especially vexing if the network connections are flaky...
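One mitigation would be retrying transient failures with exponential backoff before giving up; a minimal sketch, with illustrative names and a pluggable sleep for testability:

```python
import time

# Hypothetical sketch: retry a flaky remote operation with exponential
# backoff (1s, 2s, 4s, ...), an approach that could make `ceph-salt apply`
# more robust on poor network links.

def retry(operation, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call `operation` until it succeeds or `attempts` is exhausted;
    re-raise the last ConnectionError if every attempt fails."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```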
How to reproduce:

```
master:~ # salt -G 'ceph-salt:member' cmd.run 'systemctl stop ceph.target'
node3.pacific.test:
node2.pacific.test:
master.pacific.test:
node1.pacific.test:
master:~ # ceph-salt apply
Syncing minions with the master...
```

(This happens because...
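A pre-flight check could surface this state instead of letting `ceph-salt apply` hang; a sketch, assuming the unit states have already been collected from the minions (function name and data shape are illustrative):

```python
# Hypothetical sketch: before `ceph-salt apply` starts syncing, warn when
# ceph.target is not active on a minion, given states gathered via e.g.
# `systemctl is-active ceph.target` on each node.

def preflight_ceph_target(states):
    """states: {minion_id: "active" | "inactive" | ...}; return warnings."""
    return [
        "{}: ceph.target is {}, cluster services may be down".format(minion, state)
        for minion, state in sorted(states.items())
        if state != "active"
    ]
```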
Currently, all minions with the 'admin' role are also required to have the 'cephadm' role. This means you can't have a node which has the ceph.conf file and admin keyring,...
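Relaxing that constraint could be sketched as a role validation step where the admin-implies-cephadm rule becomes opt-in rather than hard-coded (an illustrative sketch, not the actual ceph-salt validation code):

```python
# Hypothetical sketch: allow a minion to carry the 'admin' role
# (ceph.conf + admin keyring) without also requiring the 'cephadm' role,
# by making the current coupling an optional rule.

def validate_roles(roles, require_cephadm_for_admin=False):
    """roles: {minion_id: set of role names}; return violation messages."""
    errors = []
    for minion, assigned in sorted(roles.items()):
        if (require_cephadm_for_admin
                and "admin" in assigned
                and "cephadm" not in assigned):
            errors.append("{}: 'admin' role requires 'cephadm' role".format(minion))
    return errors
```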