kubespray
streamline ansible_default_ipv4 gathering loop
What type of PR is this? /kind cleanup
What this PR does / why we need it:
If a node's ansible_default_ipv4 is not defined, this loop gathers it from facts on that node.
However, for large clusters this is extremely slow, because the loop executes serially, one node at a time: the whole task is delegated to a single host, which then runs the loop.
This PR starts to streamline that by running setup directly instead of including a separate task file.
It also adds the unique filter to avoid gathering the IPv4 address of the same node multiple times.
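The resulting pattern looks roughly like the following sketch (the group name and when condition here are illustrative assumptions, not the exact Kubespray source):

```yaml
# Hypothetical sketch of the streamlined task: run the setup module
# directly (no include_tasks), delegating to each host in turn.
# The unique filter prevents querying the same host twice when it
# appears in the loop list more than once.
- name: Gather ansible_default_ipv4 from all hosts
  setup:
    filter: ansible_default_ipv4
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['k8s_cluster'] | unique }}"   # group name assumed for illustration
  when: hostvars[item].ansible_default_ipv4 is not defined
```

With delegate_facts: true, the gathered ansible_default_ipv4 is stored on the delegated host's hostvars rather than on the host running the loop.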
Special notes for your reviewer: The only reason fallback_ips_gather.yml existed as an included task file was to work around a mitogen issue. However, mitogen is deprecated in kubespray.
NOTE: My end goal was to make this work with async, poll: 0, and possibly async_status. However, those tools for parallelizing execution are not available when include_tasks is used:
ERROR! 'async' is not a valid attribute for a TaskInclude
I have not gotten async working yet (possibly because of a complication with delegation), but with the include removed, this opens the way to using async in the future to make this much faster.
https://devops.stackexchange.com/questions/3860/is-there-a-way-to-run-with-items-loops-in-parallel-in-ansible
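For reference, a hedged sketch of what the async approach could eventually look like once the include is gone. This is untested (the PR notes that async with delegation did not work yet), and the group name and timeouts are illustrative assumptions:

```yaml
# Illustrative only: fire off setup on every host without waiting,
# then poll the async jobs. This is the fire-and-forget pattern that
# async: N with poll: 0 enables, which include_tasks does not support.
- name: Gather ansible_default_ipv4 from all hosts (fire and forget)
  setup:
    filter: ansible_default_ipv4
  delegate_to: "{{ item }}"
  delegate_facts: true
  loop: "{{ groups['k8s_cluster'] | unique }}"   # group name assumed
  async: 60          # allow up to 60s per background job
  poll: 0            # do not wait; move on to the next item immediately
  register: gather_jobs

- name: Wait for fact gathering to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  delegate_to: "{{ item.item }}"   # item.item is the original loop host
  loop: "{{ gather_jobs.results }}"
  register: job_result
  until: job_result.finished
  retries: 30
  delay: 2
```

The first task dispatches all the setup runs nearly at once; the second collects results, so total wall time approaches that of the slowest single node rather than the sum of all nodes.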
Does this PR introduce a user-facing change?: No
NONE
Hi @rptaylor. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: rptaylor
Once this PR has been reviewed and has the lgtm label, please assign floryut for approval by writing /assign @floryut in a comment. For more information see:The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Before:
TASK [kubespray-defaults : Gather ansible_default_ipv4 from cluster-prod-k8s-node-f54] ***************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-f54]
Thursday 15 September 2022 23:45:24 +0000 (0:00:02.927) 0:08:35.094 ****
TASK [kubespray-defaults : Gather ansible_default_ipv4 from cluster-prod-k8s-node-f55] ***************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-f55]
Thursday 15 September 2022 23:45:26 +0000 (0:00:02.117) 0:08:37.211 ****
TASK [kubespray-defaults : Gather ansible_default_ipv4 from cluster-prod-k8s-node-f56] ***************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-f56]
Thursday 15 September 2022 23:45:33 +0000 (0:00:06.891) 0:08:44.102 ****
TASK [kubespray-defaults : Gather ansible_default_ipv4 from cluster-prod-k8s-node-f57] ***************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-f57]
Thursday 15 September 2022 23:45:36 +0000 (0:00:02.487) 0:08:46.590 ****
TASK [kubespray-defaults : Gather ansible_default_ipv4 from cluster-prod-k8s-node-f58] ***************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-f58]
Thursday 15 September 2022 23:45:38 +0000 (0:00:02.544) 0:08:49.135 ****
After:
TASK [kubespray-defaults : Gather ansible_default_ipv4 from all hosts] ******************************************************************************************************
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-master-1] => (item=cluster-prod-k8s-master-1)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-master-2] => (item=cluster-prod-k8s-master-2)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-master-3] => (item=cluster-prod-k8s-master-3)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c01] => (item=cluster-prod-k8s-node-c01)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c02] => (item=cluster-prod-k8s-node-c02)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c03] => (item=cluster-prod-k8s-node-c03)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c04] => (item=cluster-prod-k8s-node-c04)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c05] => (item=cluster-prod-k8s-node-c05)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c06] => (item=cluster-prod-k8s-node-c06)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c07] => (item=cluster-prod-k8s-node-c07)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c08] => (item=cluster-prod-k8s-node-c08)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c09] => (item=cluster-prod-k8s-node-c09)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c10] => (item=cluster-prod-k8s-node-c10)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c11] => (item=cluster-prod-k8s-node-c11)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c12] => (item=cluster-prod-k8s-node-c12)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c13] => (item=cluster-prod-k8s-node-c13)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c14] => (item=cluster-prod-k8s-node-c14)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c15] => (item=cluster-prod-k8s-node-c15)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c16] => (item=cluster-prod-k8s-node-c16)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c17] => (item=cluster-prod-k8s-node-c17)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c18] => (item=cluster-prod-k8s-node-c18)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c19] => (item=cluster-prod-k8s-node-c19)
ok: [cluster-prod-k8s-master-1 -> cluster-prod-k8s-node-c20] => (item=cluster-prod-k8s-node-c20)
And I confirmed that the facts written to the cache look good:
{
"ansible_default_ipv4": {
"address": "10.5.7.71",
"alias": "eth0",
"broadcast": "10.5.7.95",
"gateway": "10.5.7.65",
"interface": "eth0",
"macaddress": "fa:16:3e:41:3b:cb",
"mtu": 1500,
"netmask": "255.255.255.224",
"network": "10.5.7.64",
"type": "ether"
},
"discovered_interpreter_python": "/usr/libexec/platform-python"
}
@rptaylor could you estimate what kind of improvement we can get with this change? Should we target it for the 2.20 release, or would that introduce risk, making it better to delay to 2.21?
@cristicalin for now the main improvement is cleanup/simplification, plus the unique filter to avoid running setup multiple times on the same nodes. That may only save a couple of seconds, but it opens the door to more substantial improvements later by allowing async with poll: 0 to be used (which is not possible with an include).
I don't see much risk; it accomplishes the same thing in my testing. The only change that might have a functional difference is removing connection: "{{ (delegate_host_to_gather_facts == 'localhost') | ternary('local', omit) }}", which I assume was related to mitogen. I am not sure how it would apply in normal situations.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This looks good to me.
/ok-to-test
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: floryut, rptaylor
The pull request process is described here
- ~~OWNERS~~ [floryut]