cluster-api-provider-aws
make generate leaves a github.com directory behind, which breaks make test
/kind bug
What steps did you take and what happened: Forked the repo and ran make generate, then make test. The latter failed because make generate left behind a github.com directory, which the 'go test ./...' invocation run by make test then tried to process.
What did you expect to happen: make generate should clean up the github.com directory it creates.
Environment:
- Cluster-api-provider-aws version: v7.0
- Kubernetes version (use kubectl version): v1.19.2
- OS (e.g. from /etc/os-release): Ubuntu 20.04
This is a known issue with the code generation tooling. The easiest way to get around it is to clone the source code into your GOPATH as per the developer guide:
https://cluster-api-aws.sigs.k8s.io/development/development.html
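Until the tooling is fixed, the effect and a manual mitigation can be sketched in a few lines of shell. The layout below is a simulation, not the repository's actual Makefile behavior: it assumes the stray github.com directory lands in the repo root, which is what the report describes.

```shell
# Simulate the reported symptom: a leftover github.com/ tree in the repo root
# that a recursive 'go test ./...' would descend into.
repo=$(mktemp -d)
mkdir -p "$repo/github.com/some/module"   # stand-in for the generated leftovers

# Manual mitigation before running tests: remove the stray directory.
# (Only safe when the checkout is NOT under GOPATH/src, where github.com/
# can be part of the legitimate path.)
rm -rf "$repo/github.com"

ls "$repo"        # the stray directory is gone
rm -rf "$repo"    # tidy up the simulation
```

Running make generate followed by this cleanup, and only then make test, reproduces the order of operations the reporter expected the Makefile itself to handle.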
Thanks, my mistake, I should have read the development guide. I had missed it because I expected to find all relevant info in CONTRIBUTING.md. It is referenced in README.md under the heading 'Tilt-based development environment', but since I am not currently using Tilt, I didn't read that section!
We've tried for years to make this work in every use case. We get a PR that fixes it when the repo is checked out in one location, which then breaks it when the repo is checked out in another.
Leaving this open in case someone finds an approach that works in every case.
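The tension described above, that a cleanup safe in one checkout location deletes legitimate content in another, suggests guarding the cleanup on where the repo lives. The sketch below is a hypothetical illustration, not this repo's Makefile; the function name, paths, and the GOPATH heuristic are all assumptions.

```shell
# Hypothetical guard: only treat github.com/ as a stray, deletable leftover
# when the checkout sits OUTSIDE GOPATH, where that name cannot be part of
# the legitimate import-path layout.
should_clean() {
  repo_root=$1   # absolute path of the checkout (illustrative)
  gopath=$2      # value of 'go env GOPATH' (illustrative)
  case "$repo_root" in
    "$gopath"/*) return 1 ;;  # inside GOPATH: github.com/ may be structural
    *)           return 0 ;;  # elsewhere: safe to remove the leftover
  esac
}

if should_clean "/home/dev/src/capa" "/home/dev/go"; then
  echo "would run: rm -rf ./github.com"
fi
```

Whether such a heuristic covers symlinked GOPATHs and other corner cases is exactly the kind of edge that has broken previous attempts, which is why this stays a sketch.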
/priority backlog
/lifecycle frozen
/area release
/triage accepted
/milestone backlog
/remove-lifecycle frozen
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.