
After snap refresh, disk add is failing

peppepetra opened this issue on Oct 6, 2023 · 2 comments

After refreshing the snap from version 0+git.78e3dc7 (rev. 421) to 0+git.e5c33c3 (rev. 697), adding new disks fails with:

root@machine1:~# microceph disk add --encrypt /dev/disk/by-id/wwn-0x5000c500ec97e8bb

Error: Failed adding new disk: Failed to set host failure domain: Failed to run: ceph osd crush rule dump microceph_auto_host: exit status 2 (Error ENOENT: unknown crush rule 'microceph_auto_host')
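
Presumably the newer revision expects a CRUSH rule named microceph_auto_host (which more recent MicroCeph versions appear to create at bootstrap) to already exist before it sets the host failure domain, while a cluster bootstrapped on rev. 421 never created it. If that is the case, a possible (untested) workaround is to create the missing replicated rule manually, with a host failure domain under the default root:

root@machine1:~# ceph osd crush rule create-replicated microceph_auto_host default host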

Ceph versions:

root@machine3:~# microceph.ceph versions
{
    "mon": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "osd": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "rgw": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)": 24
    }
}

osd tree:

root@machine3:~# microceph.ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
-1         6.54950  root default                                       
-2         2.18317      host machine1                           
 0    hdd  0.54579          osd.0                 up   1.00000  1.00000
 3    hdd  0.54579          osd.3                 up   1.00000  1.00000
 4    hdd  0.54579          osd.4                 up   1.00000  1.00000
 8    hdd  0.54579          osd.8                 up   1.00000  1.00000
-4         2.18317      host machine2                           
 2    hdd  0.54579          osd.2                 up   1.00000  1.00000
 5    hdd  0.54579          osd.5                 up   1.00000  1.00000
 6    hdd  0.54579          osd.6                 up   1.00000  1.00000
 7    hdd  0.54579          osd.7                 up   1.00000  1.00000
-3         2.18317      host machine3                           
 1    hdd  0.54579          osd.1                 up   1.00000  1.00000
 9    hdd  0.54579          osd.9                 up   1.00000  1.00000
10    hdd  0.54579          osd.10                up   1.00000  1.00000
11    hdd  0.54579          osd.11                up   1.00000  1.00000

CRUSH rules available:

root@machine1:~# ceph osd crush rule ls
replicated_rule

root@machine1:~# ceph osd crush rule dump replicated_rule
{
    "rule_id": 0,
    "rule_name": "replicated_rule",
    "type": 1,
    "steps": [
        {
            "op": "take",
            "item": -1,
            "item_name": "default"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
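
Note that the existing replicated_rule already selects leaves at the host level (chooseleaf_firstn with type host), so a manually created microceph_auto_host rule, as sketched above, should presumably produce the same placement; the new snap seems to require only the rule name. Once the rule exists, the same command the snap runs internally can be used to confirm it is visible:

root@machine1:~# ceph osd crush rule dump microceph_auto_host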
