Umbrella Issue: refactor commands to split flags from options
Currently most, if not all, of the commands have an *Options struct holding all of the data needed to run the command, and three main methods: Complete (populating the *Options struct), Validate (validating the *Options struct), and Run (the actual code behind the command). The problem is that these *Options structs are tightly coupled with flags, and we'd like to split the two apart.
A perfect example is the wait command.
This issue should serve as a synchronization point.
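To illustrate the shape of the split, here is a minimal sketch: a *Flags struct that holds raw flag values and converts itself into an *Options struct, which then exposes Validate and Run. All names here are illustrative only, not the actual kubectl code; the real pattern is in the wait refactor linked above.

```go
// Sketch of splitting flags from options: the Flags struct mirrors the
// command line, ToOptions performs the "Complete" step, and the Options
// struct carries only the processed data that Validate and Run need.
package main

import (
	"errors"
	"fmt"
)

// WaitFlags holds raw flag values as bound to the command line.
// (Hypothetical name; in kubectl these would be bound via cobra/pflag.)
type WaitFlags struct {
	Timeout string // e.g. "30s"
}

// WaitOptions holds the processed data needed to run the command.
type WaitOptions struct {
	TimeoutSeconds int
}

// ToOptions converts raw flags into runtime options (the Complete step).
func (f *WaitFlags) ToOptions() (*WaitOptions, error) {
	var secs int
	if _, err := fmt.Sscanf(f.Timeout, "%ds", &secs); err != nil {
		return nil, fmt.Errorf("invalid timeout %q: %w", f.Timeout, err)
	}
	return &WaitOptions{TimeoutSeconds: secs}, nil
}

// Validate checks the completed options, never the raw flags.
func (o *WaitOptions) Validate() error {
	if o.TimeoutSeconds < 0 {
		return errors.New("timeout must be non-negative")
	}
	return nil
}

// Run executes the command using only the options struct.
func (o *WaitOptions) Run() error {
	fmt.Printf("waiting up to %d seconds\n", o.TimeoutSeconds)
	return nil
}

func main() {
	flags := &WaitFlags{Timeout: "30s"}
	opts, err := flags.ToOptions()
	if err != nil {
		panic(err)
	}
	if err := opts.Validate(); err != nil {
		panic(err)
	}
	if err := opts.Run(); err != nil {
		panic(err)
	}
}
```

The point of the split is that the Options struct becomes constructible and testable without any flag machinery attached.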
/cc @eddiezane @seans3
/triage accepted
/remove-help
/remove-good-first-issue
Holding off on this until we scope better and trial a few to confirm it's what we want.
@soltysh looking at logs and get
@rikatz will take basic create
@KnVerey to look at drain or describe
@eddiezane will look at a simple config command
/assign so I won't forget :)
Will do create with @rikatz synchronously
/assign I'll try to do for apply command
/assign
I can do attach.
A simple summary:
- [x] - wait ✅ a perfect example
- [ ] - [WIP] create @lauchokyip & @rikatz https://github.com/kubernetes/kubernetes/pull/101736
- [ ] - logs and get @soltysh
- [ ] - apply @SaiHarshaK
- [ ] - config @eddiezane
- [ ] - drain/describe @KnVerey
- [ ] - attach @ihcsim
- [ ] - scale @BigaDev
@BigaDev will take a look at scale
@pacoxu thanks for the summary!
I want to make sure folks understand that this will be a bit of an iteration process while we dial in exactly what we want these to look like. Expect some refactors and potential rewrites.
/assign
scale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/remove-kind design
/kind feature
kind/design is migrated to kind/feature, see https://github.com/kubernetes/community/issues/6144 for more details
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
Considering this is a long-term tracking issue :)
/assign
I'll refactor label :)
@KnVerey Have you started refactoring the describe command?
@soltysh Based on the slack conversation, I'm picking up the describe command, thanks!
cc @KnVerey
/assign
Refactoring describe command
I'll refactor explain :)
Will do diff next!
Hey! I was wondering if you guys needed more help on this issue. I'm happy to pick up a command and start working on it.
/assign
Talked to owner of issue. Grabbing drain to refactor.
I'll refactor create
/assign
@harshitasao, I think I finished create a while ago (https://github.com/kubernetes/kubernetes/pull/101736) and am just waiting for review. Would you be able to pick another command to refactor instead?
@brunopecampos: GitHub didn't allow me to assign the following users: brunopecampos.
Note that only kubernetes members with read permissions, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign
I'll do diff
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.