csi-test
Strict check of pagination in ListSnapshots() test code when pagination is optional in the spec.
// A token to specify where to start paginating. Set this field to
// `next_token` returned by a previous `ListSnapshots` call to get the
// next page of entries. This field is OPTIONAL.
// An empty string is equal to an unspecified field value.
string starting_token = 2;
// Identity information for the source volume. This field is OPTIONAL.
// It can be used to list snapshots by volume.
string source_volume_id = 3;
Pagination is an optional parameter in the CSI spec; however, the CSI tests fail if a driver does not implement pagination support.
For example:
[Fail] ListSnapshots [Controller Server] [It] should return next token when a limited number of entries are requested
/../src/github.com/gluster/gluster-csi-driver/vendor/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:1352
Expected result: The test should not fail if the CSI driver does not have the capability to support pagination.
/assign @lpabon
I will help fix it
@wackxu Thanks, please share if you have a PR.
@wackxu are you still working on this?
/help
@msau42: This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The field is optional and so is the support of listing snapshots, but the token support is not optional for drivers that report the LIST_SNAPSHOTS capability. So any driver that supports listing must also support pagination.
The field is optional because you may not care about pagination and want everything returned in one call. But even if you want pagination, it cannot be present on the first request, since you don't have a value yet.
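The token contract described above can be sketched as a small pagination helper. This is a minimal illustration, not the csi-test or driver code; the `Snapshot` type and the use of a numeric index as the token are assumptions for the example (a real driver may use any opaque string, and the CSI spec requires an ABORTED gRPC status for an invalid `starting_token`).

```go
package main

import (
	"fmt"
	"strconv"
)

// Snapshot is a hypothetical stand-in for the CSI Snapshot message,
// simplified for illustration.
type Snapshot struct {
	ID string
}

// paginate sketches the token handling a driver needs for ListSnapshots:
// an empty startingToken means "start from the beginning", maxEntries of 0
// means "no limit", and a non-empty nextToken is returned only when more
// entries remain after this page.
func paginate(all []Snapshot, startingToken string, maxEntries int) (page []Snapshot, nextToken string, err error) {
	start := 0
	if startingToken != "" {
		start, err = strconv.Atoi(startingToken)
		if err != nil || start < 0 || start > len(all) {
			// Per the spec this should surface as gRPC ABORTED; a plain
			// error stands in for that here.
			return nil, "", fmt.Errorf("invalid starting_token %q", startingToken)
		}
	}
	end := len(all)
	if maxEntries > 0 && start+maxEntries < end {
		end = start + maxEntries
		nextToken = strconv.Itoa(end)
	}
	return all[start:end], nextToken, nil
}

func main() {
	all := []Snapshot{{"snap-1"}, {"snap-2"}, {"snap-3"}}

	// First request: no token yet, page size 2.
	page, next, _ := paginate(all, "", 2)
	fmt.Println(len(page), next) // 2 2

	// Follow-up request with the returned token fetches the remainder;
	// the returned token is empty because no entries are left.
	page, next, _ = paginate(all, next, 2)
	fmt.Println(len(page), next)
}
```

This is exactly the behavior the sanity test exercises: it requests a limited page, expects a non-empty `next_token`, and then expects that token to resume the listing.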
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/lifecycle rotten
/close
@fejta-bot: Closing this issue.
/reopen /lifecycle frozen
@msau42: Reopened this issue.