
Hack scripts assume "sigs.k8s.io/descheduler" project root unless PRJ_PREFIX is set

Open damemi opened this issue 4 years ago • 27 comments

When running the hack scripts (or make gen, which calls them) to generate conversions, etc., from outside a project root matching the one set in hack/lib/init.sh, the scripts silently fail: they generate no output and give no indication why. This is easily fixed by setting PRJ_PREFIX, but that is not documented anywhere. It would be even nicer if the scripts could detect that the project is located outside the default directory and warn the user that this may cause a silent failure.

/kind documentation

damemi avatar Jun 03 '20 15:06 damemi
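The missing guard could look something like the following sketch (the default prefix is the one named in this issue; the repo-root derivation in the real hack/lib/init.sh may differ, so treat this as an illustration only):

```shell
#!/usr/bin/env bash
# Hypothetical guard for hack/lib/init.sh: warn when the checkout does not
# live under the expected project prefix, instead of failing silently.
# The PRJ_PREFIX default and override are as described in the issue; the
# repo-root derivation here is simplified for illustration.
PRJ_PREFIX="${PRJ_PREFIX:-sigs.k8s.io/descheduler}"
REPO_ROOT="${REPO_ROOT:-$(pwd)}"   # the real script derives this from $0

if [[ "${REPO_ROOT}" != *"${PRJ_PREFIX}"* ]]; then
  echo "WARNING: ${REPO_ROOT} is not under ${PRJ_PREFIX};" >&2
  echo "         generation may silently produce no output. Set PRJ_PREFIX to override." >&2
fi
```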

cc @pmundt thanks for helping find this!

damemi avatar Jun 03 '20 15:06 damemi

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Sep 01 '20 16:09 fejta-bot

/remove-lifecycle stale

seanmalloy avatar Sep 01 '20 16:09 seanmalloy

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Nov 30 '20 16:11 fejta-bot

/remove-lifecycle stale

seanmalloy avatar Nov 30 '20 16:11 seanmalloy

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot avatar Feb 28 '21 17:02 fejta-bot

/remove-lifecycle stale

seanmalloy avatar Mar 01 '21 15:03 seanmalloy

@damemi @seanmalloy as mentioned here, I've started working on this issue and will raise a new PR for it. Here is my understanding of the issue: say I have two clones of kubernetes-sigs/descheduler, the upstream repo under $GOPATH/src/sigs.k8s.io/descheduler and a working repo under $GOPATH/src/github.com/pravarag/descheduler. When I run either make gen or ./hack/update-generated-*.sh from the working directory ($GOPATH/src/github.com/pravarag/descheduler), it updates the packages under $GOPATH/src/sigs.k8s.io/descheduler (since PRJ_PREFIX is set to that path). What we want instead is: if make gen or any of the ./hack/update-*.sh scripts are run from outside the $GOPATH/src/sigs.k8s.io/descheduler path, they should update the packages in the current working directory, not those in the upstream clone. Please correct me if I'm wrong here.

pravarag avatar Apr 09 '21 13:04 pravarag
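The existing workaround mentioned in the issue can be expressed as follows (the github.com/pravarag path is just the example layout from this comment, not a required location):

```shell
# Point PRJ_PREFIX at the checkout you actually want the generators to
# touch. The path below is the working-clone example from this thread.
export PRJ_PREFIX="github.com/pravarag/descheduler"

# Then invoke the generators from that checkout, e.g.:
#   make gen
#   ./hack/update-generated-conversions.sh
echo "generators will resolve packages under: ${PRJ_PREFIX}"
```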


@pravarag yes that is correct. I also believe all of the hack scripts and Makefile targets should work if the repo lives outside of $GOPATH.

@damemi do you think this is really a bug instead of a documentation issue? Since we are using Go modules, $GOPATH should not be required for anything. Do you agree?

seanmalloy avatar Apr 09 '21 17:04 seanmalloy

@seanmalloy if it is not too much work for us to fix, I think it would be better treated as a bug. We should ideally not be dependent on GOPATH at this point.

damemi avatar Apr 12 '21 20:04 damemi
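One GOPATH-free approach (a sketch of the idea, not necessarily the fix that eventually landed) is to read the module path from go.mod, which is what go list -m reports with a full toolchain. Demonstrated here against a throwaway go.mod so the snippet is self-contained:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: derive the project prefix from go.mod rather than
# hard-coding a GOPATH location, so the hack scripts work from any
# checkout directory. A throwaway go.mod stands in for the real one.
tmp="$(mktemp -d)"
printf 'module sigs.k8s.io/descheduler\n\ngo 1.17\n' > "${tmp}/go.mod"

# awk pulls the second field of the "module" line; `go list -m` run in the
# repo root would report the same value.
PRJ_PREFIX="$(awk '/^module /{print $2}' "${tmp}/go.mod")"
echo "${PRJ_PREFIX}"
```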

/kind bug
/unkind documentation

seanmalloy avatar Apr 13 '21 03:04 seanmalloy

/remove-kind documentation

seanmalloy avatar Apr 13 '21 04:04 seanmalloy


@damemi @seanmalloy as mentioned above, does that mean this issue will be handled as part of some different enhancement maybe? Or I can still continue to work towards it? If that's the case, I will need a little guidance towards not making changes w.r.t GOPATH but how it can be handled differently.

pravarag avatar Apr 14 '21 10:04 pravarag


@pravarag you can work on this if you want to; I'm not aware of anyone else working on it. My suggestion for starting on this issue is to download the descheduler repo to ~/descheduler and then run through all the make targets to see what works and what doesn't. It might also be useful to take a look at hack/lib/init.sh to understand how it uses the PRJ_PREFIX variable.

Let me know if you have specific questions and I can try to help answer them.

Thanks for your help!

seanmalloy avatar Apr 15 '21 02:04 seanmalloy
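The suggested check could be scripted roughly like this dry-run sketch (the target names are illustrative; consult the descheduler Makefile for the actual set, and uncomment the clone and make lines to run for real):

```shell
#!/usr/bin/env bash
# Dry-run sketch of exercising the Makefile targets from a checkout that
# lives outside $GOPATH, per the suggestion above. Target names are
# examples, not the repo's definitive target list.
clone_dir="${HOME}/descheduler"   # deliberately not under $GOPATH/src
# git clone https://github.com/kubernetes-sigs/descheduler "${clone_dir}"

for target in build test-unit gen verify-gen; do
  echo "would run: make -C ${clone_dir} ${target}"
  # make -C "${clone_dir}" "${target}" || echo "FAILED: ${target}"
done
```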

@seanmalloy @damemi apologies for the delay on this issue. I'm currently recovering from Covid-19 in isolation and will start working on it as things settle 🙂

pravarag avatar May 14 '21 13:05 pravarag

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 12 '21 14:08 k8s-triage-robot

/remove-lifecycle stale

pravarag avatar Aug 12 '21 14:08 pravarag

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 10 '21 15:11 k8s-triage-robot

/remove-lifecycle stale

Can we confirm this is still an issue?

damemi avatar Nov 10 '21 16:11 damemi

@damemi I was trying to make some changes in the PR submitted earlier, but lost track of it. If you want, I can look at it again, though I'll have to check whether this issue still persists.

pravarag avatar Nov 11 '21 08:11 pravarag

@pravarag yeah double checking this could be good, sorry we lost track of your PR

damemi avatar Nov 11 '21 13:11 damemi

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 09 '22 14:02 k8s-triage-robot

/remove-lifecycle stale

pravarag avatar Feb 09 '22 15:02 pravarag

This has been pending for a long time; I'm willing to give it one last try and will update this issue.

pravarag avatar Feb 09 '22 15:02 pravarag

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 10 '22 15:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 09 '22 16:06 k8s-triage-robot

/remove-lifecycle rotten
/lifecycle frozen

damemi avatar Jun 09 '22 16:06 damemi

This is addressed; feel free to reopen if that's not the case.

/close

a7i avatar Oct 25 '23 03:10 a7i

@a7i: Closing this issue.

In response to this:

This is addressed; feel free to reopen if that's not the case.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 25 '23 03:10 k8s-ci-robot