add attach/detach functionality to the vmware_host_datastore module
SUMMARY
The vmware_host_datastore module can be used to mount and unmount datastores on ESXi hosts. However, I also need the ability to detach the underlying datastore device after unmounting the VMFS filesystem. The same applies to mounting: I need to be able to attach a datastore device that is available but currently detached from a host before mounting it.
ISSUE TYPE
- Feature Idea
COMPONENT NAME
community.vmware.vmware_host_datastore
ADDITIONAL INFORMATION
Basically, what I would like to do is what I have been doing with the PowerShell cmdlets so far. Here's a snippet of my PowerShell code:
# $ESXiHostNames, $dataStoreNames, $workElements (a hashtable) and $limit are defined earlier in the script
foreach ($ESXiHostName in ($ESXiHostNames | Sort-Object)) {
    $vmHost = Get-VMHost -Name $ESXiHostName
    $hostView = Get-View $vmHost
    $storageSys = Get-View $hostView.ConfigManager.StorageSystem
    $devices = $storageSys.StorageDeviceInfo.ScsiLun
    foreach ($datastoreName in ($dataStoreNames | Sort-Object)) {
        foreach ($device in ($devices | Sort-Object)) {
            if ($device.DisplayName -eq $datastoreName) {
                $LunUUID = $device.Uuid
                $state = $device.OperationalState[0]
                if ($state -eq "off") {
                    debug "scheduling datastore $datastoreName for attachment to host $ESXiHostName..."
                    # add the necessary information to the workElements hash
                    $key = "${LunUUID}:${ESXiHostName}"
                    $workElements.Add($key, $storageSys)
                } elseif ($state -eq "ok") {
                    debug "The datastore $datastoreName is already attached to the host $ESXiHostName, skipping it..."
                } else {
                    error "The datastore $datastoreName does not have the right operational state to be attached to host $ESXiHostName..."
                }
            } # end if device name eq datastore name
        } # end foreach device
    } # end foreach datastore
} # end foreach ESXiHost

if ($workElements.Count -gt 0) {
    # now execute the actual work in parallel using the workElements hash
    info "now attaching all datastore devices to all hosts with a maximum of parallel actions of ${limit}"
    $workElements.GetEnumerator() | ForEach-Object -Parallel { ($_.Value).AttachScsiLun($($_.Key.split(":"))[0]) } -ThrottleLimit $limit
}
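For reference, the attach call above corresponds to the HostStorageSystem.AttachScsiLun method of the vSphere API, which pyVmomi (the library the module is built on) exposes directly. Below is a minimal sketch of that call, not the module's actual implementation; the vCenter address, credentials, host name and device display name are placeholders:

# Illustrative pyVmomi sketch: attach a detached datastore device.
# All names and connection details are placeholders, not module code.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="user", pwd="secret",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")
    storage_system = host.configManager.storageSystem

    # find the SCSI LUN backing the datastore by its display name
    lun = next(d for d in storage_system.storageDeviceInfo.scsiLun
               if d.displayName == "my_datastore_device")

    # operationalState is a list of strings; "off" means the device is detached
    if lun.operationalState[0] == "off":
        storage_system.AttachScsiLun(lunUuid=lun.uuid)  # attach before mounting
finally:
    Disconnect(si)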
This is the playbook snippet I would expect to work: first attach the datastore device, then mount the VMFS filesystem. There are a lot of LDAP lookups, as a lot of information about our systems is stored in our LDAP instance!
---
- name: "mount datastores of a given diskpool on all ESXi hosts of the cluster"
  gather_facts: no
  hosts: localhost
  vars_files:
    - ../lib/vcenter-login.yml
  vars:
    diskpool: datastore_cluster
    login: &login
      username: "{{ vcenter_user }}"
      password: "{{ vcenter_pass }}"
  tasks:
    # some steps to gather the list of datastores and list of ESXi hosts
    - name: mount/unmount datastores
      community.vmware.vmware_host_datastore:
        datastore_name: "{{ item[1] }}"
        datastore_type: vmfs
        esxi_hostname: "{{ item[0] | join }}"
        vmfs_device_name: "{{ canonical_name }}"
        vmfs_version: 6
        state: present
        hostname: "{{ vcenterhost }}"
        <<: *login
      loop: "{{ esx_hostnames | datastores }}"
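For the detach direction requested above, the order would simply be reversed: unmount the VMFS volume first, then detach the backing device. A rough pyVmomi sketch under the same assumptions as the attach example (host and storage_system looked up the same way, datastore name is a placeholder):

# Illustrative pyVmomi sketch: unmount a VMFS datastore, then detach its
# backing device; assumes "host" and "storage_system" from the attach sketch.
datastore = next(ds for ds in host.datastore if ds.name == "my_datastore")
vmfs_uuid = datastore.info.vmfs.uuid
disk_name = datastore.info.vmfs.extent[0].diskName  # canonical name of the backing disk

storage_system.UnmountVmfsVolume(vmfsUuid=vmfs_uuid)  # unmount the VMFS filesystem first

lun = next(d for d in storage_system.storageDeviceInfo.scsiLun
           if d.canonicalName == disk_name)
storage_system.DetachScsiLun(lunUuid=lun.uuid)  # the detach step this issue asks for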
We need this same feature so we can remove RDM LUNs as well. Our current playbook removes the RDM from the VM, but we are having to rip the storage away from the host as part of our nightly environment refresh. This causes APD events; we would love to be able to properly detach the LUNs.
Any update on this issue? Is there any chance this will be selected for development anytime soon?