Cached git cloner
To improve kustomize build time, I added a new git cloner function: CachedGitCloner. This function uses a directory under the user's home to cache "visited" repositories. Within the same build cycle we can reuse a repository that has already been cloned at the same ref, and the same directory can also be reused between builds. Users can clean their cache manually.
The CachedGitCloner checks whether the git ref is a tag or a branch, and the command fails if no tag is used. We can discuss and improve this check. The check could be made optional via a build flag, so we can decide whether to be strict or flexible about git refs (this is not implemented yet; only the strict mode is implemented).
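A minimal sketch of the caching idea (the function names and cache layout below are illustrative, not the exact code in this PR): derive a deterministic directory from the repo URL and ref, and clone only when that entry does not exist yet.

```go
package git

import (
	"crypto/sha256"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// cacheDirFor maps a repo URL and ref to a stable directory name,
// so the same (repo, ref) pair always resolves to the same cache entry.
func cacheDirFor(root, repoURL, ref string) string {
	sum := sha256.Sum256([]byte(repoURL + "@" + ref))
	return filepath.Join(root, fmt.Sprintf("%x", sum[:8]))
}

// cachedClone clones repoURL at ref into the cache root, unless an
// entry for that (repo, ref) pair already exists, in which case the
// cached checkout is reused as-is.
func cachedClone(root, repoURL, ref string) (string, error) {
	dir := cacheDirFor(root, repoURL, ref)
	if _, err := os.Stat(dir); err == nil {
		return dir, nil // cache hit: reuse the existing clone
	}
	cmd := exec.Command("git", "clone", "--depth", "1", "--branch", ref, repoURL, dir)
	if out, err := cmd.CombinedOutput(); err != nil {
		return "", fmt.Errorf("cloning %s@%s: %v: %s", repoURL, ref, err, out)
	}
	return dir, nil
}
```

A cache hit skips the network entirely, which is where the build-time savings come from when the same remote base is referenced many times.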
This PR can fix https://github.com/kubernetes-sigs/kustomize/issues/2460.
Welcome @maknihamdi!
It looks like this is your first PR to kubernetes-sigs/kustomize 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/kustomize has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @maknihamdi. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
thank you @monopole for your review!
Introducing a --enable_alpha_git_cache flag to let people experiment is a good idea. That's what we did with my team: we started with a very basic implementation and are improving the process to make it usable for many use cases.
I think we could introduce an extra flag to accept or reject mutable dependencies (branch refs). For example, in our CI/CD we prohibit referencing branches in production environments, but we need it in development environments.
We could also introduce expiration management, with automatic or manually assisted cleanup; it could be a new kustomize command (or flag) to clean the cache directory.
I was inspired by the kubectl cache directory when choosing $HOME/.kustomize (which I should document, by the way). We could change it to use XDG_CONFIG_HOME, but this is not a plugin and it's not configuration, so I don't know whether XDG_CONFIG_HOME is the right place. What do you think?
"Does it make sense to do this?": it really saves a lot of time in development and CI/CD jobs; my team has been using this functionality. I'll explain how and why we do it in https://github.com/kubernetes-sigs/kustomize/issues/2460.
What about using XDG_CACHE_HOME to store the cache? It is probably a better fit than XDG_CONFIG_HOME.
We should schedule time to discuss this at a sig-cli meeting. Would you be willing to add it to an agenda and come talk about it?
Agree XDG_CACHE_HOME should be honored. It should default to $HOME/.cache
Could put that definition near https://github.com/kubernetes-sigs/kustomize/blob/master/api/konfig/general.go#L29
Then kustomize's git cache would be in, say, $XDG_CACHE_HOME/kustomize/git.
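As a rough sketch of that resolution (the helper name and its placement are only suggestions, not the PR's code):

```go
package konfig

import (
	"os"
	"path/filepath"
)

// gitCacheRoot returns the directory used for cached git clones:
// $XDG_CACHE_HOME/kustomize/git when XDG_CACHE_HOME is set,
// otherwise $HOME/.cache/kustomize/git.
func gitCacheRoot() (string, error) {
	cache := os.Getenv("XDG_CACHE_HOME")
	if cache == "" {
		home, err := os.UserHomeDir()
		if err != nil {
			return "", err
		}
		cache = filepath.Join(home, ".cache")
	}
	return filepath.Join(cache, "kustomize", "git"), nil
}
```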
We would really like to see this get in as well. We are struggling with the same problem: we version components against a remote base repo and suffer very long build times.
Please rebase and squash so we can see where this is.
The idea of adding an allow-branch-refs flag is good and consistent with flags having no impact on kustomize output.
@monopole in the last two commits I use XDG_CACHE_HOME to compute the cache dir and add a new flag to allow git branch references. Maybe it's better to use denyGitBranchesRef instead of enableGitBranchesRef as I did (or allowGitBranchesRef). It depends on the default behavior we want: whether by default we deny branches and the flag enables them, or by default we accept everything and the flag denies them. What do you think?
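For illustration, a hedged sketch of what that gate could look like (the flag and helper names are only one option, not necessarily what the commits do): ask the remote whether the ref resolves to a branch head, and reject it unless branch refs are explicitly allowed.

```go
package git

import (
	"fmt"
	"os/exec"
	"strings"
)

// isBranchRef reports whether ref is a branch on the remote, using
// `git ls-remote --heads`, which lists only refs under refs/heads/.
func isBranchRef(repoURL, ref string) (bool, error) {
	out, err := exec.Command("git", "ls-remote", "--heads", repoURL, ref).Output()
	if err != nil {
		return false, fmt.Errorf("ls-remote %s: %w", repoURL, err)
	}
	return strings.TrimSpace(string(out)) != "", nil
}

// checkRef rejects mutable branch refs unless allowGitBranchesRef is
// set, so that by default only tags (and SHAs) are cached.
func checkRef(repoURL, ref string, allowGitBranchesRef bool) error {
	isBranch, err := isBranchRef(repoURL, ref)
	if err != nil {
		return err
	}
	if isBranch && !allowGitBranchesRef {
		return fmt.Errorf("ref %q is a branch; mutable refs are not allowed", ref)
	}
	return nil
}
```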
@maknihamdi: This PR has multiple commits, and the default merge method is: merge. You can request commits to be squashed using the label: tide/merge-method-squash
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Needs rebasing.
@monopole is this likely to be merged in this form? It's required for some work I'm doing, and I'm trying to figure out whether I have alternative options.
If the directory specified by XDG_CACHE_HOME doesn't exist, kustomize build --enable-alpha-git-cache errors out rather than creating the .cache directory. Should it create the directory instead?
Error: accumulating resources: accumulation err='accumulating resources from '[email protected]:<redacted_github_org>/<redacted_repo_path>': evalsymlink failure on '<redacted local repo paths>': mkdir /Users/danielnelson/.cache/git: no such file or directory
(private repo info redacted)
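If creating the directory is the desired behavior, a simple fix (sketch only, assuming the cache root is known before cloning) would be to ensure it exists first:

```go
package git

import "os"

// ensureCacheRoot creates the cache root (and any missing parents,
// e.g. ~/.cache) before the first clone, instead of failing with
// "mkdir ...: no such file or directory".
func ensureCacheRoot(root string) error {
	return os.MkdirAll(root, 0o755)
}
```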
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@maknihamdi could you rebase?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
@maknihamdi would you mind doing a rebase, please?
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: maknihamdi
To complete the pull request process, please assign knverey after the PR has been reviewed.
You can assign the PR to them by writing /assign @knverey in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@maknihamdi: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Sadly this needs another rebase.
@monopole how can we progress this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
/lifecycle stale
/remove-lifecycle stale
Please
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@maknihamdi are you still planning to work on this? If so, I think it could be worth a KEP, or at least a detailed issue, so that a suitable strategy can be agreed on.
@george-angel - Are you reopening because you would like to see this work merged?
We would also like to see this work merged, it would go a long way to speeding up our kustomize builds.