descheduler
Hack scripts assume "sigs.k8s.io/descheduler" project root unless PRJ_PREFIX is set
When running hack scripts (or `make gen`, which calls them) to generate conversions, etc., outside a project root that matches what is set in `hack/lib/init.sh`, the scripts will silently fail, generating no output, with no indication why. This can be easily fixed by setting `PRJ_PREFIX`, but this is not documented anywhere. It would be even nicer if the scripts could identify when the project is located outside that default directory and output a message alerting the user that this may cause them to silently fail.
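A minimal sketch of the kind of guard being suggested, assuming the default prefix in `hack/lib/init.sh` is `sigs.k8s.io/descheduler` and that the script sits two directories below the repo root (both are assumptions here, not the actual script contents):

```bash
# Hypothetical addition to hack/lib/init.sh: warn instead of failing silently
# when the checkout does not live under the expected project root.
PRJ_PREFIX="${PRJ_PREFIX:-sigs.k8s.io/descheduler}"              # assumed default
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"  # assumed layout

if [[ "${REPO_ROOT}" != *"${PRJ_PREFIX}" ]]; then
  echo "WARNING: repo root ${REPO_ROOT} does not end with ${PRJ_PREFIX}." >&2
  echo "Generation may silently produce no output; set PRJ_PREFIX to override." >&2
fi
```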
/kind documentation
cc @pmundt thanks for helping find this!
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@damemi @seanmalloy as mentioned here, I've started working on this issue and will raise a new PR for it. Below is what I've understood w.r.t. this issue. Let's say I have two clones of kubernetes-sigs/descheduler: one is the upstream repo under `$GOPATH/src/sigs.k8s.io/descheduler` and the other is the working repo, which could be under `$GOPATH/src/github.com/pravarag/descheduler`. Now when I run either `make gen` or `./hack/update-generated-*.sh` from my current working directory, i.e. `$GOPATH/src/github.com/pravarag/descheduler`, it updates the packages under `$GOPATH/src/sigs.k8s.io/descheduler` (as `PRJ_PREFIX` is set to that path). But what we want to achieve is: if `make gen` or any of the `./hack/update-*.sh` scripts are run from outside the `$GOPATH/src/sigs.k8s.io/descheduler` path, they should update the packages within the current working directory, not the packages under the clone of upstream. Please correct me if I'm wrong here.
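To illustrate the behaviour described above, here is a rough sketch (not the actual hack script) of how a script could derive its target from the working copy itself instead of a hard-coded `PRJ_PREFIX`; it assumes a git checkout with a `go.mod`, and the flag mentioned in the comment is only an example:

```bash
# Illustrative only: resolve the project root and module path from the
# current checkout rather than assuming a fixed GOPATH location.
REPO_ROOT="$(git rev-parse --show-toplevel)"
PRJ_PREFIX="${PRJ_PREFIX:-$(cd "${REPO_ROOT}" && go list -m)}"

echo "Generating into ${REPO_ROOT} for module ${PRJ_PREFIX}"
# The generators would then be pointed at REPO_ROOT (for example via an
# --output-base style flag) instead of $GOPATH/src/${PRJ_PREFIX}.
```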
@pravarag yes that is correct. I also believe all of the `hack` scripts and `Makefile` targets should work if the repo lives outside of `$GOPATH`.
@damemi do you think this is really a bug instead of a documentation issue? Since we are using Go Modules, using `$GOPATH` should not be required for anything. Do you agree?
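As a quick illustration of that point (the clone path below is arbitrary, not something from this thread), a module-aware build behaves the same regardless of where the checkout lives:

```bash
# Go modules resolve the module path from go.mod, not from the directory
# layout under $GOPATH, so the checkout can live anywhere.
cd ~/descheduler      # example location outside $GOPATH
go list -m            # prints the module path declared in go.mod
go build ./...        # builds without relying on a GOPATH-based import path
```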
@seanmalloy if it is not too much work for us to fix, I think it would be better treated as a bug. We should ideally not be dependent on `GOPATH` at this point.
/kind bug /unkind documentation
/remove-kind documentation
@damemi @seanmalloy as mentioned above, does that mean this issue will be handled as part of some different enhancement, or can I still continue to work towards it? If it's the latter, I will need a little guidance on how this can be handled without making changes w.r.t. `GOPATH`.
@pravarag you can work on this if you want to; I'm not aware of anyone else that is working on it. My suggestion for starting on this issue is to download the descheduler repo to `~/descheduler` and then run through all the `make` targets to see what works and what doesn't. It also might be useful to take a look at `hack/lib/init.sh` to understand how it uses the `PRJ_PREFIX` variable.
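Concretely, that exploration might look something like this (the clone URL and the `git status` check are assumptions on my part; `make gen` is the target discussed in this thread):

```bash
# Clone outside GOPATH and exercise the generation targets from there.
git clone https://github.com/kubernetes-sigs/descheduler.git ~/descheduler
cd ~/descheduler
make gen        # run the code-generation target from the new location
git status      # check whether generated files changed in *this* checkout
```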
Let me know if you have specific questions and I can try to help answer them.
Thanks for your help!
@seanmalloy @damemi apologies for the delay in completing this issue. I'm currently recovering from Covid-19 in isolation and will start working on it as things settle 🙂
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
Can we confirm if this is still an issue?
@damemi I was trying to make some changes w.r.t. this PR submitted earlier but then lost track of it due to inconsistency. If you want, I can look at it again, but will have to check whether this issue still persists.
@pravarag yeah double checking this could be good, sorry we lost track of your PR
/lifecycle stale
/remove-lifecycle stale
This has been pending for a long time; I'm willing to give it a last try and will update this.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle frozen
This is addressed, feel free to reopen if that's not the case.
/close
@a7i: Closing this issue.
In response to this:
this is addressed, feel free to reopen if that's not the case /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.