vultr-cli
[BUG] - Detaching VPC2 does not work, API issue?
Describe the bug Detaching VPC2 does not work, even though the commands report success (might be an API issue):
I tried several ways:
- updating the instance with curl using the detach_vpc2 parameter
- the CLI: vultr-cli instance vpc2 detach ...
To Reproduce Steps to reproduce the behavior:
- list vpc2
$ vultr-cli vpc2 list
ID DATE CREATED REGION DESCRIPTION IP BLOCK PREFIX LENGTH
2b895409-fe58-4787-9499-2b1688aa0f3b 2024-03-10T06:03:37-04:00 ewr test-vpc 10.99.0.0 24
bab5bef9-aac3-45bd-b032-a6a69e51f184 2024-03-10T06:03:50-04:00 ewr test2-vpc 10.98.0.0 24
- create an instance with vpc2 attached:
$ curl "https://api.vultr.com/v2/instances" \
-X POST \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json" \
--data '{
"region" : "ewr",
"plan" : "vc2-1c-1gb",
"label" : "Example Instance",
"os_id" : 2136,
"attach_vcp2": [
"2b895409-fe58-4787-9499-2b1688aa0f3b",
"bab5bef9-aac3-45bd-b032-a6a69e51f184"
]
}'
- check vpc2s for the instance:
$ vultr-cli instance vpc2 list d2cdeef9-ed7f-4d15-9da3-9284c2aa7345
ID MAC ADDRESS IP ADDRESS
2b895409-fe58-4787-9499-2b1688aa0f3b 5a:01:04:ce:07:ab 10.99.0.3
bab5bef9-aac3-45bd-b032-a6a69e51f184 5a:02:04:ce:07:ab 10.98.0.3
- detach vpc2
$ vultr-cli instance vpc2 detach d2cdeef9-ed7f-4d15-9da3-9284c2aa7345 2b895409-fe58-4787-9499-2b1688aa0f3b
MESSAGE
VPC2 detached from instance
- check vpc2s for the instance, still 2 VPC2s attached...
$ vultr-cli instance vpc2 list d2cdeef9-ed7f-4d15-9da3-9284c2aa7345
ID MAC ADDRESS IP ADDRESS
2b895409-fe58-4787-9499-2b1688aa0f3b 5a:01:04:ce:07:ab 10.99.0.3
bab5bef9-aac3-45bd-b032-a6a69e51f184 5a:02:04:ce:07:ab 10.98.0.3
- Try to detach via instance update
$ curl "https://api.vultr.com/v2/instances/d2cdeef9-ed7f-4d15-9da3-9284c2aa7345" \
-X PATCH \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json" \
--data '{
"detach_vpc2": [
"2b895409-fe58-4787-9499-2b1688aa0f3b"
]
}'
{"instance":{"id":"d2cdeef9-ed7f-4d15-9da3-9284c2aa7345","os":"Debian 12 x64","ram":1024,"disk":0,"main_ip":"144.202.9.203","vcpu_count":1,"region":"ewr","plan":"vc2-1c-1gb","date_created":"2024-03-10T06:17:08-04:00","status":"pending","allowed_bandwidth":1,"netmask_v4":"255.255.254.0","gateway_v4":"144.202.8.1","power_status":"running","server_status":"none","v6_network":"","v6_main_ip":"","v6_network_size":0,"label":"Example Instance","internal_ip":"","kvm":"","hostname":"vultr.guest","tag":"","tags":[],"os_id":2136,"app_id":0,"image_id":"","firewall_group_id":"","features":[],"user_scheme":"root"}}%
- check vpc2s for the instance again, still 2 VPC2s attached...
$ vultr-cli instance vpc2 list d2cdeef9-ed7f-4d15-9da3-9284c2aa7345
ID MAC ADDRESS IP ADDRESS
2b895409-fe58-4787-9499-2b1688aa0f3b 5a:01:04:ce:07:ab 10.99.0.3
bab5bef9-aac3-45bd-b032-a6a69e51f184 5a:02:04:ce:07:ab 10.98.0.3
Expected behavior
The VPC2 is detached from the instance.
Additional context
Also seeing this issue in the Ansible Vultr collection: https://github.com/vultr/ansible-collection-vultr/pull/118
$ vultr-cli version
Vultr-CLI v3.0.1
I've been noticing some weird, unexpected behavior when testing this, but I think it's because the platform isn't very responsive. The VPC2 detach mechanism isn't instantaneous, which is why we have to use retries in the Terraform provider to check and attempt the delete/detach.
All that said, I was able to create via cURL, then detach using the examples that you provided. But, like I said, there was a good 10-15 second delay before it reflected that the node was detached.
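For reference, a rough polling sketch of that retry idea (bash; it reuses the instance and VPC2 IDs from the report above, and the 5-second interval / 12 attempts are arbitrary assumptions, not what the Terraform provider actually does):
$ vultr-cli instance vpc2 detach d2cdeef9-ed7f-4d15-9da3-9284c2aa7345 2b895409-fe58-4787-9499-2b1688aa0f3b
$ for i in $(seq 1 12); do
    # the platform may take 10-15 seconds to reflect the detach, so re-check periodically
    if ! vultr-cli instance vpc2 list d2cdeef9-ed7f-4d15-9da3-9284c2aa7345 | grep -q 2b895409-fe58-4787-9499-2b1688aa0f3b; then
      echo "detached"
      break
    fi
    sleep 5
  done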
@optik-aper I see, thanks for this information. A couple of questions, though: is this just a VPC2-related thing, or would it make sense to implement a generic "retry to see vpc changed" function for VPC1 as well? I'm assuming VPC1 will be deprecated in the future; do you have a timeline for this yet?
The retry is specific to VPC2 due to how the platform detects and applies changes. VPC1 doesn't have this same mechanism, so changes should take effect immediately. I don't know if/when VPC1 will be deprecated, but I do know that it's currently being used by VKE, so I don't think it's going anywhere anytime soon.
I did some more tests and might have found the issue I am experiencing:
Reproducer
- you have two or more VPC2s
$ vultr-cli vpc2 list
ID DATE CREATED REGION DESCRIPTION IP BLOCK PREFIX LENGTH
5df03f12-fc11-45ca-a0c2-ac6b08aa543e 2024-03-12T11:24:02-04:00 ams ansible-test-82586263-diode_instance_vpc2_1 192.168.22.0 24
664da4eb-8857-4b5a-bc1f-02eb04ddb716 2024-03-12T11:24:03-04:00 ams ansible-test-82586263-diode_instance_vpc2_2 192.168.99.0 24
- one of them is attached to the VM, e.g. vpc2_1
$ vultr-cli instance vpc2 list 8bfb02eb-75d8-436d-a30f-1a9af088728d
ID MAC ADDRESS IP ADDRESS
5df03f12-fc11-45ca-a0c2-ac6b08aa543e 5a:01:04:ce:df:3d 192.168.22.3
- Update the VM so that both VPC2s are attached
NOTE: the order of the IDs seems relevant!
Here the first list item is the ID of the VPC2 that is already attached!!
curl "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d" \
-X PATCH \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json" \
--data '{
"attach_vpc2": [
"5df03f12-fc11-45ca-a0c2-ac6b08aa543e"
"664da4eb-8857-4b5a-bc1f-02eb04ddb716",
]
}'
--> VPC2 with ID 664da4eb-8857-4b5a-bc1f-02eb04ddb716 never gets attached.
If we change the order:
curl "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d" \
-X PATCH \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json" \
--data '{
"attach_vpc2": [
"664da4eb-8857-4b5a-bc1f-02eb04ddb716",
"5df03f12-fc11-45ca-a0c2-ac6b08aa543e"
]
}'
--> the second VPC2 gets attached, but it ends up with node_status=failed:
$ curl "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d/vpc2" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json"
{"vpcs":[{"id":"5df03f12-fc11-45ca-a0c2-ac6b08aa543e","mac_address":"5a:01:04:ce:df:3d","ip_address":"192.168.22.3","node_status":"active"},{"id":"664da4eb-8857-4b5a-bc1f-02eb04ddb716","mac_address":"5a:02:04:ce:df:3d","ip_address":"192.168.99.3","node_status":"failed"}],"meta":{"total":2,"links":{"next":"","prev":""}}}%
- detach the failed VPC2, then re-attach it with a single-item list
$ curl "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d" \
-X PATCH \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json" \
--data '{
"attach_vpc2": [
"664da4eb-8857-4b5a-bc1f-02eb04ddb716"
]
}'
--> works
$ curl "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d/vpc2" \
-X GET \
-H "Authorization: Bearer ${VULTR_API_KEY}" \
-H "Content-Type: application/json"
{"vpcs":[{"id":"5df03f12-fc11-45ca-a0c2-ac6b08aa543e","mac_address":"5a:01:04:ce:df:3d","ip_address":"192.168.22.3","node_status":"active"},{"id":"664da4eb-8857-4b5a-bc1f-02eb04ddb716","mac_address":"5a:02:04:ce:df:3d","ip_address":"192.168.99.3","node_status":"active"}],"meta":{"total":2,"links":{"next":"","prev":""}}}%
Detaching seems to behave the same way.
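If that holds, a possible workaround sketch (based purely on the single-item behavior observed above, not confirmed anywhere) would be to issue one PATCH per VPC2 ID instead of passing a multi-item list, e.g. for detaching both:
$ for vpc2_id in 5df03f12-fc11-45ca-a0c2-ac6b08aa543e 664da4eb-8857-4b5a-bc1f-02eb04ddb716; do
    # one single-item detach_vpc2 request per VPC2, mirroring the single-item attach that worked
    curl -s "https://api.vultr.com/v2/instances/8bfb02eb-75d8-436d-a30f-1a9af088728d" \
      -X PATCH \
      -H "Authorization: Bearer ${VULTR_API_KEY}" \
      -H "Content-Type: application/json" \
      --data "{\"detach_vpc2\": [\"${vpc2_id}\"]}"
  done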
Oh, interesting. Let me test this out and see what it's doing. Thanks for the additional details!
@optik-aper Hi Micheal, were you able to reproduce?