🌱 e2e/apiexportendpointslice: use SharedKcpServer
Summary
It is possible to use a shared server because the scheduler skips shards annotated with experimental.core.kcp.io/unschedulable.
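For illustration, a minimal sketch of what the switch might look like. It assumes the e2e framework's SharedKcpServer helper and the core/v1alpha1 Shard type; import paths, package names, and helper names are approximations and may differ between kcp versions. Only the annotation key is taken from the summary above; this is not the PR's actual test code.

```go
package apiexportendpointslice_test

import (
	"testing"

	"github.com/stretchr/testify/require"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

	corev1alpha1 "github.com/kcp-dev/kcp/sdk/apis/core/v1alpha1"
	"github.com/kcp-dev/kcp/test/e2e/framework"
)

// TestUsesSharedServer is a hypothetical illustration: it reuses the shared
// kcp instance and marks any shard it creates as unschedulable so the
// scheduler skips it.
func TestUsesSharedServer(t *testing.T) {
	t.Parallel()

	// Reuse the shared kcp server instead of starting a private one per test.
	server := framework.SharedKcpServer(t)
	cfg := server.BaseConfig(t)
	require.NotNil(t, cfg)

	// Shards created by the test carry the annotation the scheduler honours,
	// so workloads from other tests running against the shared server are
	// never placed on them.
	shard := &corev1alpha1.Shard{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "test-shard-",
			Annotations: map[string]string{
				"experimental.core.kcp.io/unschedulable": "true",
			},
		},
	}
	_ = shard // the real test would create this via a kcp cluster client
}
```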
Related issue(s)
Fixes #
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign jmprusi for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.
This looks good to me. I just have a couple of points:
Can you please amend the comment here? https://github.com/kcp-dev/kcp/blob/73cc8efef729522a1dc30f1e7318559982817460/test/e2e/reconciler/apiexportendpointslice/apiexportendpointslice_test.go#L268-L270
This could be a separate test function: https://github.com/kcp-dev/kcp/blob/73cc8efef729522a1dc30f1e7318559982817460/test/e2e/reconciler/apiexportendpointslice/apiexportendpointslice_test.go#L342-L382. It was not a separate function previously because of the cost of setting up the private environment.
Updated the comment. IMO there is no need to extract the logic into a helper function. I like it when tests are self-contained and self-descriptive.
I did not mean a helper function. I meant a completely separate test function, since it is not directly related to the other tests in the same function. It is not critical, just thinking it may be nicer.
Ah, this could be the default (happy-path) scenario I always wanted. Yeah, I can do that :)
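A rough sketch of what that suggested split could look like, using the same hypothetical imports as the sketch above; the function name and assertions are placeholders, not the PR's code. The point is that with the shared server, the happy-path scenario can afford its own test function.

```go
// TestAPIExportEndpointSliceDefault is a placeholder for the happy-path
// scenario extracted into its own test function, now that server setup is
// shared and therefore cheap.
func TestAPIExportEndpointSliceDefault(t *testing.T) {
	t.Parallel()

	server := framework.SharedKcpServer(t)
	cfg := server.BaseConfig(t)
	require.NotNil(t, cfg)

	// The real test would create an APIExport plus an APIExportEndpointSlice
	// here and assert that endpoints for schedulable shards appear in status.
}
```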
@kcp-dev/kcp-contributors let's pick this up?
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kcp-ci-bot: Closed this PR.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.