code-generator
The default value of GoHeaderFilePath is wrong
The problem was introduced in this commit: https://github.com/kubernetes/code-generator/commit/5bb9f2e1767e5cb98bb373649a1da739aa7cb4ff
before: the default GoHeaderFilePath was filepath.Join(args.DefaultSourceTree(), path.Join(reflect.TypeOf(empty{}).PkgPath(), "/../../hack/boilerplate.go.txt"))
after: the default GoHeaderFilePath is just filepath.Join(reflect.TypeOf(empty{}).PkgPath(), "/../../hack/boilerplate.go.txt"); the args.DefaultSourceTree() prefix is missing.
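For illustration, here is a minimal, runnable Go sketch (not the actual code-generator source; the empty struct and the $GOPATH/src prefix standing in for args.DefaultSourceTree() are assumptions) that contrasts the two defaults. The point is that reflect.TypeOf(empty{}).PkgPath() yields a Go import path, which only resolves to a file on disk when it is anchored at the source tree:

```go
// Sketch only: approximates the before/after defaults from the linked commit.
package main

import (
	"fmt"
	"os"
	"path"
	"path/filepath"
	"reflect"
)

// empty mirrors the helper type used to look up the generator's own package path.
type empty struct{}

func main() {
	// An import path such as "k8s.io/code-generator/..." in the real generator;
	// for this standalone sketch it is just "main".
	pkgPath := reflect.TypeOf(empty{}).PkgPath()

	// Before the commit: anchored at the source tree (approximated here by
	// $GOPATH/src), so it points at a real file on disk.
	before := filepath.Join(os.Getenv("GOPATH"), "src",
		path.Join(pkgPath, "/../../hack/boilerplate.go.txt"))

	// After the commit: the source-tree prefix is gone, leaving a path that is
	// only valid relative to whatever directory the generator happens to run in.
	after := filepath.Join(pkgPath, "/../../hack/boilerplate.go.txt")

	fmt.Println("before:", before)
	fmt.Println("after: ", after)
}
```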
With the latest code, generate-groups.sh can only be run from ${GOPATH}/k8s.io/code-generator/; otherwise it will panic like this:
F0812 02:47:30.269045 68831 deepcopy.go:131] Failed loading boilerplate: open : no such file or directory
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000010001, 0xc000aff680, 0x6c, 0xbc)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1026 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x1619ba0, 0xc000000003, 0x0, 0x0, 0xc0000129a0, 0x0, 0x15f080c, 0xb, 0x83, 0x0)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:975 +0x1f1
k8s.io/klog/v2.(*loggingT).printf(0x1619ba0, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x13d8852, 0x1e, 0xc0026122c0, 0x1, ...)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:753 +0x19a
k8s.io/klog/v2.Fatalf(...)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1514
k8s.io/gengo/examples/deepcopy-gen/generators.Packages(0xc0002e1810, 0xc00011d040, 0x13b4e22, 0x6, 0xc0002e1810)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/[email protected]/examples/deepcopy-gen/generators/deepcopy.go:131 +0x135
k8s.io/gengo/args.(*GeneratorArgs).Execute(0xc00011d040, 0xc0023bfe38, 0x13b4e22, 0x6, 0x13e4650, 0x0, 0x0)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/[email protected]/args/args.go:206 +0x1a9
main.main()
/Users/zhaohui/workspace/project/go/src/k8s.io/code-generator/cmd/deepcopy-gen/main.go:75 +0x42b
goroutine 6 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x1619ba0)
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:1169 +0x8b
created by k8s.io/klog/v2.init.0
/Users/zhaohui/workspace/project/go/pkg/mod/k8s.io/klog/[email protected]/klog.go:420 +0xdf
This happens because the method util.BoilerplatePath() just returns an empty string.
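Here is a minimal sketch of the failure mode, assuming the generator ends up reading the header from an empty path (loadBoilerplate below is a hypothetical stand-in for the generator's boilerplate loading, not the real gengo function):

```go
// Sketch only: reproduces the "open : no such file or directory" message seen
// when the boilerplate path resolves to an empty string.
package main

import (
	"fmt"
	"os"
)

// loadBoilerplate is a hypothetical stand-in for the generator reading the
// header file contents before emitting code.
func loadBoilerplate(headerPath string) ([]byte, error) {
	return os.ReadFile(headerPath)
}

func main() {
	if _, err := loadBoilerplate(""); err != nil {
		// On Linux/macOS this prints: open : no such file or directory
		fmt.Println("Failed loading boilerplate:", err)
	}
}
```

Until the default is fixed, one workaround that should work is passing the header file explicitly, e.g. appending --go-header-file "${GOPATH}/src/k8s.io/code-generator/hack/boilerplate.go.txt" to the generator invocation (generate-groups.sh forwards extra flags to the generators), assuming that flag is available in the version you are using.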
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/kind bug
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Just a heads up that I believe I ran into this issue today.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I faced the same issue today. Any suggestions to fix it?
/assign @ndombroski
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Hi everyone, I believe this bug is no longer present after https://github.com/kubernetes/code-generator/commit/83929d026f32ba75896b72a9bbaae4ed3da6a4ae was merged and removed the boilerplate auto-setting behavior. Therefore I've closed my PR https://github.com/kubernetes/kubernetes/pull/111179
@2581543189 We can likely close this issue then.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.