[peribolos] Adding sub team to a parent team fails during dry-run
@dprotaso it is trying to list the team members of Serving WG Leads, which is newly added. Since you are running this without --confirm, it doesn't actually create the team and therefore can't list its members. This isn't related to my changes to slugs, and I'm not convinced this scenario ever worked properly. AFAIK the only reason it lists team members at all is that the team has a parent, so this may be a less-used path that hasn't been accounted for.
Note: this isn't really a blocker for you; all you need to do is actually create the team, and then everything will be fine.
Originally posted by @smg247 in https://github.com/kubernetes/test-infra/issues/26234#issuecomment-1127556164
/sig k8s-infra
Logs
{"component":"peribolos","file":"k8s.io/test-infra/prow/cmd/peribolos/main.go:1297","func":"main.configureTeamMembers.func1","level":"info","msg":"Set dprotaso as a member of api-core-wg-leads(API Core WG Leads)","severity":"info","time":"2022-05-13T20:35:00Z"}
[415](https://prow.knative.dev/view/gs/knative-prow/pr-logs/pull/knative_community/1067/pull-knative-peribolos/1525211406578749440#1:build-log.txt%3A415)
{"client":"github","component":"peribolos","file":"k8s.io/test-infra/prow/github/client.go:910","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"EditTeam({0 Serving WG Leads serving-wg-leads closed \u003cnil\u003e 0xc000181880 })","severity":"info","time":"2022-05-13T20:35:00Z"}
[416](https://prow.knative.dev/view/gs/knative-prow/pr-logs/pull/knative_community/1067/pull-knative-peribolos/1525211406578749440#1:build-log.txt%3A416)
{"client":"github","component":"peribolos","file":"k8s.io/test-infra/prow/github/client.go:910","func":"k8s.io/test-infra/prow/github.(*client).log","level":"info","msg":"ListTeamMembersBySlug(knative, serving-wg-leads, member)","severity":"info","time":"2022-05-13T20:35:00Z"}
[417](https://prow.knative.dev/view/gs/knative-prow/pr-logs/pull/knative_community/1067/pull-knative-peribolos/1525211406578749440#1:build-log.txt%3A417)
{"component":"peribolos","file":"k8s.io/test-infra/prow/cmd/peribolos/main.go:209","func":"main.main","level":"fatal","msg":"Configuration failed: failed to configure knative teams: failed to update Serving Writers child teams: failed to update Serving WG Leads members: failed to list serving-wg-leads(Serving WG Leads) members: return code not 2XX: 404 Not Found","severity":"fatal","time":"2022-05-13T20:35:10Z"}
cc @cjwagner @fejta
Are you offering to fix this edge case?
I'd probably make things worse ;)
I'm mostly inclined to just resolve this without fixing anything.
You have declared a non-existent team for weeks. Why is this an important scenario? Just create the group (which peribolos will do for you if you use the --confirm flag, right?).
My mental model for using this tool is:
1. Run a dry run to ensure the config is valid.
2. Apply the changes when (1) succeeds.

We do (1) in PR presubmits and (2) in postsubmits.
My impression is that this tool is meant to declaratively manage GitHub orgs; using the GitHub UI as a workaround feels at odds with that.
Likewise, using the --confirm flag prior to the dry run seems like a workaround.
A dry run when you're declaring and using a team that doesn't exist isn't going to work though.
You could break it up into multiple PRs:
1. A PR that declares the new team but doesn't use it anywhere.
2. After (1) merges, a PR that starts using the team as expected.

This should work properly with dry runs.
Alternatively you can disable the functionality that is not of interest: https://github.com/kubernetes/test-infra/blob/fb4c8c9bd8603ec9306bf3a59119274feb961f7f/prow/cmd/peribolos/main.go#L83-L102
I guess multiple PRs is a decent enough workaround. I'll leave this issue open in case someone else is looking to pick up the work.
Splitting things up into separate PRs didn't work. It seems like --confirm=false is broken? I created a new issue: https://github.com/kubernetes/test-infra/issues/26542
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen