
The error was: 'str object' has no attribute 'vgname'. 'str object' has no attribute 'vgname'

Open • charnet1019 opened this issue on Mar 21, 2023 • 0 comments

env:

ovirt version: 4.5.4

I am deploying a hyperconverged (HCI) cluster. After installing oVirt Node on the three hosts, I configured the hyperconverged deployment through the web UI, and the error below was reported. Each node has one SSD and one HDD, and oVirt Node is installed on the SSD:

TASK [gluster.infra/roles/backend_setup : Group devices by volume group name, including existing devices] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml:3
fatal: [node210.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'vgname'. 'str object' has no attribute 'vgname'\n\nThe error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group devices by volume group name, including existing devices\n  ^ here\n"}
fatal: [node211.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'vgname'. 'str object' has no attribute 'vgname'\n\nThe error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group devices by volume group name, including existing devices\n  ^ here\n"}
fatal: [node212.com]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'str object' has no attribute 'vgname'. 'str object' has no attribute 'vgname'\n\nThe error appears to be in '/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/get_vg_groupings.yml': line 3, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Group devices by volume group name, including existing devices\n  ^ here\n"}
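
For reference, a minimal sketch of what this Jinja error means (an illustration only, not the actual get_vg_groupings.yml code): reading item.vgname on a list entry that is a plain string instead of a dict fails with exactly this message:

- hosts: localhost
  gather_facts: false
  tasks:
    - name: Read vgname from each entry
      debug:
        msg: "{{ item.vgname }}"
      loop:
        - vgname: gluster_vg_sda   # dict entry with a vgname key: renders fine
        - gluster_vg_sda           # plain string entry: "'str object' has no attribute 'vgname'"

So it looks like the role is receiving a string somewhere it expects a dict with a vgname key.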

playbook:

hc_nodes:
  hosts:
    node210.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sda
          pvname: /dev/sda
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sda
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sda
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sda
      blacklist_mpath_devices:
        - sda
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sda
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sda
          thinpoolname: gluster_thinpool_gluster_vg_sda
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_data
          lvsize: 150G
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_vmstore
          lvsize: 150G
    node211.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sda
          pvname: /dev/sda
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sda
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sda
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sda
      blacklist_mpath_devices:
        - sda
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sda
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sda
          thinpoolname: gluster_thinpool_gluster_vg_sda
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_data
          lvsize: 150G
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_vmstore
          lvsize: 150G
    node212.com:
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sda
          pvname: /dev/sda
      gluster_infra_mount_devices:
        - path: /gluster_bricks/engine
          lvname: gluster_lv_engine
          vgname: gluster_vg_sda
        - path: /gluster_bricks/data
          lvname: gluster_lv_data
          vgname: gluster_vg_sda
        - path: /gluster_bricks/vmstore
          lvname: gluster_lv_vmstore
          vgname: gluster_vg_sda
      blacklist_mpath_devices:
        - sda
      gluster_infra_thick_lvs:
        - vgname: gluster_vg_sda
          lvname: gluster_lv_engine
          size: 100G
      gluster_infra_thinpools:
        - vgname: gluster_vg_sda
          thinpoolname: gluster_thinpool_gluster_vg_sda
          poolmetadatasize: 1G
      gluster_infra_lv_logicalvols:
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_data
          lvsize: 150G
        - vgname: gluster_vg_sda
          thinpool: gluster_thinpool_gluster_vg_sda
          lvname: gluster_lv_vmstore
          lvsize: 150G
  vars:
    gluster_infra_disktype: JBOD
    gluster_set_selinux_labels: true
    gluster_infra_fw_ports:
      - 2049/tcp
      - 54321/tcp
      - 5900/tcp
      - 5900-6923/tcp
      - 5666/tcp
      - 16514/tcp
    gluster_infra_fw_permanent: true
    gluster_infra_fw_state: enabled
    gluster_infra_fw_zone: public
    gluster_infra_fw_services:
      - glusterfs
    gluster_features_force_varlogsizecheck: false
    cluster_nodes:
      - node210.com
      - node211.com
      - node212.com
    gluster_features_hci_cluster: '{{ cluster_nodes }}'
    gluster_features_hci_volumes:
      - volname: engine
        brick: /gluster_bricks/engine/engine
        arbiter: 0
      - volname: data
        brick: /gluster_bricks/data/data
        arbiter: 0
      - volname: vmstore
        brick: /gluster_bricks/vmstore/vmstore
        arbiter: 0
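
A quick check playbook (just a troubleshooting sketch, not part of the deployment) that can be run against the same hc_nodes inventory to confirm every gluster_infra_* entry really is a dict with a vgname key; type_debug is a built-in Ansible filter:

- hosts: hc_nodes
  gather_facts: false
  tasks:
    - name: Show the type and value of every gluster_infra_* list entry
      debug:
        msg: "{{ item | type_debug }}: {{ item }}"
      loop: "{{ (gluster_infra_volume_groups | default([]))
                + (gluster_infra_thick_lvs | default([]))
                + (gluster_infra_thinpools | default([]))
                + (gluster_infra_lv_logicalvols | default([])) }}"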

error.log
