ocp4-vsphere-upi-automation
OVF deploy timedout
I am running ansible-playbook -vvv -i staging static_ips_ova.yml
and got the error below. It happens sometimes, but other times it works fine; I am not sure why it fails.
The full traceback is:
File "/tmp/ansible_vmware_deploy_ovf_payload_4zu7qy9c/ansible_vmware_deploy_ovf_payload.zip/ansible/modules/cloud/vmware/vmware_deploy_ovf.py", line 292, in run
File "/tmp/ansible_vmware_deploy_ovf_payload_4zu7qy9c/ansible_vmware_deploy_ovf_payload.zip/ansible/modules/cloud/vmware/vmware_deploy_ovf.py", line 286, in _open_url
File "/tmp/ansible_vmware_deploy_ovf_payload_4zu7qy9c/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/urls.py", line 1390, in open_url
unredirected_headers=unredirected_headers)
File "/tmp/ansible_vmware_deploy_ovf_payload_4zu7qy9c/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/urls.py", line 1294, in open
r = urllib_request.urlopen(*urlopen_args)
File "/usr/lib64/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib64/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/usr/lib64/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/tmp/ansible_vmware_deploy_ovf_payload_4zu7qy9c/ansible_vmware_deploy_ovf_payload.zip/ansible/module_utils/urls.py", line 467, in https_open
return self.do_open(self._build_https_connection, req)
File "/usr/lib64/python3.6/urllib/request.py", line 1351, in do_open
raise URLError(err)
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_duplicates": false,
"cluster": null,
"datacenter": "homelab",
"datastore": "datastore1",
"deployment_option": null,
"disk_provisioning": "thin",
"fail_on_spec_warnings": false,
"folder": "/homelab/vm/ocp4-sddll",
"hostname": "192.168.31.140",
"inject_ovf_env": false,
"name": "rhcos-vmware",
"networks": {
"VM Network": "Internal Network"
},
"ova": "/root/ocp4-vsphere-upi-automation/downloads/rhcos-vmware.ova",
"ovf": "/root/ocp4-vsphere-upi-automation/downloads/rhcos-vmware.ova",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"power_on": false,
"properties": null,
"proxy_host": null,
"proxy_port": null,
"resource_pool": "Resources",
"username": "[email protected]",
"validate_certs": false,
"wait": true,
"wait_for_ip_address": false
}
},
"msg": "<urlopen error The write operation timed out>"
}
Sorry, I found the error was caused by my network quality; please ignore it.
Once again I got the same error and could not deploy the OVF. This time I am sure my network is stable and fast enough, since I am not even using Wi-Fi for the connection.
Hello, where are you at with this? Honestly, the "The write operation timed out" error is going to be either your network or your storage taking too long. The module itself is not at fault here: the error means the call to vCenter to deploy the template was made but failed to complete before timing out.
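If the upload is merely slow rather than broken, one mitigation is to retry the deploy task a few times before giving up. This is only a sketch, not the repo's actual task; the variable names (vcenter_hostname and friends) are placeholders, while the remaining values mirror the module arguments shown in the error output above:

```yaml
# Hypothetical sketch: retry the OVF deploy on transient upload timeouts.
# Variable names here are placeholders, not the repo's real vars.
- name: Deploy the OVF template into the folder (with retries)
  vmware_deploy_ovf:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: no
    datacenter: homelab
    datastore: datastore1
    folder: /homelab/vm/ocp4-sddll
    name: rhcos-vmware
    ova: "{{ playbook_dir }}/downloads/rhcos-vmware.ova"
    networks:
      "VM Network": Internal Network
    power_on: no
  register: ovf_result
  retries: 3
  delay: 30
  until: ovf_result is succeeded
```

This does not fix a genuinely broken upload path, but it papers over intermittent timeouts of the kind described above.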
While running ansible-playbook -i staging dhcp_ova.yml
I hit the same "write operation timed out" error using vSphere 7.0. My problem does not seem to have anything to do with bandwidth or an unreliable network; it always fails.
However, I was able to manually upload the OVA file, create the rhcos-vmware template
in the expected folder, and then run the playbook again. This time, the step "Deploy the OVF template into the folder"
didn't fail and the deployment continued.
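For anyone wanting to script that manual workaround instead of using the vSphere UI, here is a sketch assuming the govc CLI is installed and GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD are set for your vCenter; the names and paths below simply mirror the module arguments from the error output:

```sh
# Assumes govc is installed and GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD are set.
export GOVC_INSECURE=1   # mirrors validate_certs: false

# Upload the OVA directly, bypassing vmware_deploy_ovf.
govc import.ova \
  -name rhcos-vmware \
  -ds datastore1 \
  -folder /homelab/vm/ocp4-sddll \
  /root/ocp4-vsphere-upi-automation/downloads/rhcos-vmware.ova

# Mark the uploaded VM as a template so the playbook can clone from it.
govc vm.markastemplate rhcos-vmware
```

With the template in place, re-running the playbook should skip past the failing deploy step as described above.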
I have run into the same issue with the same results: we can manually upload the OVA, but the vmware_deploy_ovf module fails with a timeout.
The odd thing is that this was working before. We deployed a cluster, tore it down, removed all the VM files and the template file, and now it fails on another run?!?
May be related to upstream module issue: https://github.com/ansible-collections/community.vmware/issues/169
This was due to an issue in the Ansible module. The issue is stale now, but it was resolved in newer versions of Ansible.
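If you are still hitting this, upgrading is the suggested fix. As a sketch (assuming Ansible 2.10+, where the VMware modules moved into the community.vmware collection):

```sh
# Reinstall/upgrade the collection that now ships vmware_deploy_ovf
# (Ansible 2.10+); --force reinstalls even if a version is already present.
ansible-galaxy collection install community.vmware --force

# Or upgrade Ansible itself, e.g. via pip.
pip install --upgrade ansible
```

After upgrading, note that the module is addressed as community.vmware.vmware_deploy_ovf in collection-based installs.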