Set up CI for kcp-dev/kubernetes
- [ ] Create a GitHub action
- [ ] Run unit tests
- [ ] Make sure all e2e tests compile
- [ ] (Maybe?) run e2e tests
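A minimal workflow covering the first three items might look roughly like this (file path, job names, Go version, and make targets are illustrative assumptions, not the fork's actual configuration):

```yaml
# .github/workflows/ci.yaml — illustrative sketch only
name: ci
on: [push, pull_request]
jobs:
  unit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: make test              # unit tests
  e2e-compile:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go vet ./test/e2e/...  # type-checks (and thus compiles) the e2e packages
```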
We should also have e2e tests with kcp -> kind (or multiple kind clusters).
@aojea what sort of tests do you have in mind for that scenario?
@aojea I suspect you're thinking of testing involving synchronization of resources between kcp and kind clusters. While this is definitely in the cards for the kcp-dev/kcp repo, the context of this issue is for validating the apiserver foundations of kcp (kcp-dev/kubernetes) rather than the controllers (like scheduling and sync) that will run on top of it.
Regarding testing with kind, we'll get there. I'm working on enabling testing of synchronization between kcp and itself (logical clusters) with in-process syncers in https://github.com/kcp-dev/kcp/pull/636. Using logical clusters for now has the benefit of being cheaper to work with (and working without OS container support), and gives us experience with and confidence in this kcp feature. Ideally the same tests developed against logical clusters will be targeted at kind clusters to provide comprehensive validation of API negotiation and kube controller interaction.
> @aojea what sort of tests do you have in mind for that scenario?
I saw a demos folder that uses kind for some scenarios. I was initially thinking of having some automation to test that the demos always work, both to avoid regressions there and to guarantee that new users can run the demos without any problems. It is a very bad experience to come to a project, try to run a demo, and have it fail.
> I suspect you're thinking of testing involving synchronization of resources between kcp and kind clusters
Yeah, the famous testing pyramid. I think e2e is just another dimension at a higher level: once you start to have features that depend on the network, you really need to exercise the network, so it is complementary to that. This is a good example of what I have in mind: https://github.com/kcp-dev/kcp/tree/main/contrib/demo/ingress-script
For sure, validating cross-cluster networking will require multiple kind clusters. I imagine e2e could accept one or more --workload-cluster arguments, and a cross-cluster ingress test could skip if it didn't get the clusters it needed.
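That skip-if-missing pattern could be sketched roughly as follows (the repeatable flag and the `enoughClusters` helper are assumptions based on this comment, not an existing kcp API):

```go
package main

import (
	"flag"
	"fmt"
	"strings"
)

// clusterList collects a repeatable --workload-cluster flag.
type clusterList []string

func (c *clusterList) String() string     { return strings.Join(*c, ",") }
func (c *clusterList) Set(v string) error { *c = append(*c, v); return nil }

// enoughClusters reports whether a test needing n workload clusters
// can run, mirroring the "skip if not provided" idea above.
func enoughClusters(clusters []string, n int) bool {
	return len(clusters) >= n
}

func main() {
	var clusters clusterList
	fs := flag.NewFlagSet("e2e", flag.ContinueOnError)
	fs.Var(&clusters, "workload-cluster", "kubeconfig for a workload cluster (repeatable)")
	// Simulated invocation with two kind clusters.
	_ = fs.Parse([]string{"--workload-cluster=kind-a", "--workload-cluster=kind-b"})

	if !enoughClusters(clusters, 2) {
		fmt.Println("SKIP: cross-cluster ingress test needs 2 workload clusters")
		return
	}
	fmt.Println("running cross-cluster ingress test against", clusters.String())
}
```

In a real harness the skip branch would call `t.Skip` from the test body instead of returning from `main`.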
FWIW I think the provisioning of kcp and kind servers should be outside the scope of e2e setup. Having an easy way to deploy kcp and kind clusters in standard configurations will be useful for dev/test/demo, so implementing it separately from e2e would seem to make sense. I've filed #674 in case you or anyone else is interested in making that happen.
> I saw a demos folder that uses kind for some scenarios, I was thinking initially on having some automation to test that the demos always work to avoid regressions there
Makes sense, but out of scope for this issue, which is just about getting CI set up for the kcp fork of kubernetes.
> FWIW I think the provisioning of kcp and kind servers should be outside the scope of e2e setup. Having an easy way to deploy kcp and kind clusters in standard configurations will be useful for dev/test/demo so having it be implemented separate from e2e would seem to make sense
My bad, I think the term "e2e" here is different from what I was thinking of ... I agree with your conclusion.
Right now I think we can:
- run unit tests [see note]
- run integration tests [see note]
- compile everything in the repo
- build a kind node image
- start a kind single-node cluster
- run e2e tests [see note]
Note: it's OK to focus on the subset of tests that I have validated for crdb, for now...
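Concretely, those steps might map to commands like the following (the make targets follow upstream Kubernetes conventions and the focus filter is hypothetical; the fork's actual Makefile may differ):

```shell
# Illustrative only — exact targets depend on the fork's Makefile.
make test                        # unit tests (subset validated for crdb)
make test-integration            # integration tests
make all                         # compile everything in the repo
kind build node-image            # build a kind node image from this tree
kind create cluster              # start a single-node kind cluster
_output/bin/e2e.test --ginkgo.focus='<validated subset>'   # e2e tests
```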
Clearing milestone to re-triage
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closing this issue.
In response to this:
> Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.