
A collection of Salt files for deploying, managing and automating Ceph.

101 DeepSea issues

### Description of Issue/Question During the discovery phase, proposal.populate fails for just one node. ``` proposal.generate: nodea: The minion function caused an exception: Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/salt/minion.py",...
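
To narrow this down, a minimal debugging sketch (assuming stock Salt commands plus DeepSea's `cephdisks` module and `proposal` runner): calling the failing function directly on the affected minion yields the full traceback instead of the truncated runner output.

```
# Re-sync custom modules to the failing minion, then call the DeepSea
# module directly to surface the complete traceback.
salt 'nodea' saltutil.sync_all
salt 'nodea' cephdisks.list
# Re-run the proposal runner once the minion-side error is fixed.
salt-run proposal.populate
```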

### Description of Issue/Question Creating an issue for discussion, from Martin: 1. a proper timesync check during deployment, regardless of the configuration at the customer site (PoC vs. production / real world...
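
A minimal sketch of what such a check could look like, using only stock Salt `cmd.run` calls (illustrative only, not DeepSea's eventual implementation):

```
# Compare wall clocks across all minions; epoch seconds should agree
# within a small skew regardless of PoC or production NTP setup.
salt '*' cmd.run 'date +%s'
# Show whether each minion considers itself NTP-synchronized.
salt '*' cmd.run 'timedatectl status'
```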

### Description of Issue/Question The error message "Mine on $(hostname) for cephdisks.list" is remarkably unhelpful. ### Setup deepsea-0.8.5 on SLES 12 SP3 with salt 2016.11.4 ### Steps to Reproduce Issue...

enhancement
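
For anyone debugging this, a sketch of how to inspect what the Salt mine actually holds for that function, using standard mine commands (nothing DeepSea-specific; an empty result here is what triggers the unhelpful message):

```
# Repopulate mine data on every minion, then dump what the mine
# returns for cephdisks.list.
salt '*' mine.update
salt '*' mine.get '*' cephdisks.list
```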

### Description of Issue/Question In the master branch CI, we are seeing frequent, but transient, Stage 3 hangs/timeouts. They do not seem to be associated with any particular test. ```...

priority
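
When a stage appears hung, standard Salt runners can show which job and minions are stuck; a sketch for triaging the hangs described above:

```
# List running jobs and the minions that have not yet returned.
salt-run jobs.active
# Watch the event bus live while the stage runs to see where progress stops.
salt-run state.event pretty=True
```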

4-node cluster like so:
```
roles:
- - client.salt_master
  - mon.a
  - mgr.x
  - osd.0
- - mon.b
  - mgr.y
  - osd.1
- - mon.c
  - mgr.z
  - osd.2
- ...
```

bug
CLI

DeepSea master branch, tip is 12965311c0c6f4ad69e5854f498907df7f9d1cea On a single-node SLE15/SES6 cluster with 4 external drives, I run Stages 0-3, get HEALTH_OK. Then I do `salt-run state.orch ceph.smoketests` and this fails...

enhancement
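
For reference, the reproduction sequence described in the report, spelled out (stage and orchestration names are the ones quoted above):

```
# Single-node SLE15/SES6 cluster with 4 external drives.
salt-run state.orch ceph.stage.0
salt-run state.orch ceph.stage.1
salt-run state.orch ceph.stage.2
salt-run state.orch ceph.stage.3
ceph -s                                # HEALTH_OK expected at this point
salt-run state.orch ceph.smoketests    # fails here
```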

A user reports: during the execution of stage 0, “/var/log/deepsea.log” shows several errors when calling “/usr/bin/zypper”: ``` 2017-10-05 07:49:38,185 [INFO] deepsea.monitor: Start stage: ceph.stage.0 jid=20171005074921707802 2017-10-05 07:49:38,187...

enhancement
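
A hedged way to check whether the zypper failures reproduce outside of stage 0 (plain zypper and Salt commands, nothing DeepSea-specific):

```
# Run the package refresh by hand on the minions that logged errors,
# and watch the DeepSea log while stage 0 executes.
salt '*' cmd.run 'zypper --non-interactive refresh'
tail -f /var/log/deepsea.log
```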

From jewel/SES4 to luminous/SES5 there was a change in the rgw_frontends syntax when specifying multiple rgw ports. In jewel it was `rgw_frontends = "civetweb port=80, civetweb port=443s ssl_certificate=/etc/ceph/rgw.pem"` whereas...

enhancement
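
Side by side, the two syntaxes (the jewel line is quoted from the issue; the luminous line reflects my understanding of the newer single-clause convention, since the original text is truncated, and should be verified against the luminous documentation):

```
# Jewel/SES4: multiple comma-separated civetweb clauses.
rgw_frontends = "civetweb port=80, civetweb port=443s ssl_certificate=/etc/ceph/rgw.pem"

# Luminous/SES5: a single civetweb clause with ports joined by '+'
# (assumed form, not quoted from the issue).
rgw_frontends = "civetweb port=80+443s ssl_certificate=/etc/ceph/rgw.pem"
```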

### Description of Issue/Question This is an odd issue, but 100% reproducible on master. When run via `salt-run state.orch`, the `ceph.functests.1node` orchestration works flawlessly. But when run via CLI, it...

CLI
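
The two invocations being contrasted, for clarity (the `salt-run` form is quoted in the issue; the DeepSea CLI spelling below is my assumption and may differ):

```
# Works flawlessly per the report.
salt-run state.orch ceph.functests.1node
# Fails when driven through the DeepSea CLI (subcommand spelling assumed).
deepsea stage run ceph.functests.1node
```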

Running `salt-run state.orch ceph.maintenance.upgrade` errors out _after_ `ceph.updates.salt` with a default error screen. From the logs I can see the following: ``` 2018-11-06 13:54:30,997 [salt.transport.zeromq:393 ][DEBUG ][58454] Setting zmq_reconnect_ivl_max to...

bug
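
To capture more context around the failure, a sketch using salt-run's standard log-level flag (the orchestration name is quoted from the issue):

```
# Re-run the upgrade orchestration with debug logging and keep the output.
salt-run -l debug state.orch ceph.maintenance.upgrade 2>&1 | tee upgrade-debug.log
```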