Add default/max limits for List* and Batch requests
Is your feature request related to a problem? Please describe.
Currently there are no default or enforced limits for requests that return multiple objects in their response, such as ListFlags, ListSegments or BatchEvaluate calls.
This creates a risk of crashing the server if a user has more resources than will fit in memory and calls one of these endpoints.
It also prevents us from being able to cache these objects (see #935).
Describe the solution you'd like
- Add default and max `limit` values for list/batch requests on the server (see the sketch below)
- Change the UI pagination to use the `limit` + `offset` params instead of the client-side pagination it's using today
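For illustration, a minimal sketch of how the server could clamp requested limits; the constant names and values here are assumptions, not flipt's actual configuration:

```go
package server

const (
	defaultListLimit = 25  // assumed default page size when no limit is sent
	maxListLimit     = 100 // assumed hard cap per request
)

// normalizeLimit returns the limit the storage layer should actually use:
// the default when the caller sent none, the max when the request asks for too much.
func normalizeLimit(requested int32) int32 {
	switch {
	case requested <= 0:
		return defaultListLimit
	case requested > maxListLimit:
		return maxListLimit
	default:
		return requested
	}
}
```

Each List*/Batch* handler would pass the incoming limit through a guard like this before querying storage.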
Attach an export
N/A
Additional context
Add any other context or screenshots about the feature request here.
Food for thought: gRPC docs advise the use of `page_token` and `next_page_token`: https://cloud.google.com/apis/design/design_patterns#list_pagination
Sage advice, since this can allow different storage backends to potentially alter the pagination mechanism.
While a relative offset might be useful for one backend, an ID offset might be more appropriate for another.
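Purely as an illustration (assumed helper names, not flipt code), a page token can stay opaque to clients while each backend encodes whatever it needs to resume a listing:

```go
package pagination

import (
	"encoding/base64"
	"encoding/json"
)

// pageToken carries whatever a backend needs to resume a listing; clients
// treat the encoded string as opaque.
type pageToken struct {
	Offset uint64 `json:"offset,omitempty"` // relative-offset style backend
	LastID string `json:"lastId,omitempty"` // ID/cursor style backend
}

func encodeToken(t pageToken) (string, error) {
	b, err := json.Marshal(t)
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(b), nil
}

func decodeToken(s string) (pageToken, error) {
	var t pageToken
	b, err := base64.StdEncoding.DecodeString(s)
	if err != nil {
		return pageToken{}, err
	}
	if err := json.Unmarshal(b, &t); err != nil {
		return pageToken{}, err
	}
	return t, nil
}
```

A SQL-backed store might populate Offset while another store populates LastID, without the public API changing.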
Love it. Great idea/advice. Much better to follow Google's best practices than trying to come up with/use my own scheme. Will change the branch I'm working on to use tokens instead of limit/offset.
I'm changing this title to simply be 'update to use token-based pagination instead of limit/offset', as I realized we can't enforce a default limit (and likely not even a max) without it being a breaking API change, requiring a new v2 of the public API.
Since we haven't fully planned out all of the breaking changes in the v2 API, I'm going to defer adding a max/default for pagination.
I still think it's a good idea to do so; however, it also has the unintended consequence of breaking pagination in the UI, as currently all of the UI pagination is done client-side.
tl;dr: I'm going to re-create the 'add max/default limits for pagination' issue and tag it as v2.
Will proceed with token-based pagination but will not remove the offset pagination, simply deprecate it.
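A rough sketch of how both parameters could coexist during the deprecation window; the names are illustrative, and the token is assumed to be a plain stringified offset here rather than the opaque, backend-defined value it would really be:

```go
package pagination

import "strconv"

// resolveStart prefers the new page_token when one is supplied and only falls
// back to the deprecated offset parameter otherwise.
func resolveStart(pageToken string, offset uint64) (uint64, error) {
	if pageToken != "" {
		return strconv.ParseUint(pageToken, 10, 64)
	}
	return offset, nil // deprecated path, kept for backwards compatibility
}
```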
For #935, I think we can go with an opt-in approach, meaning that if you want caching of list responses you HAVE to include a limit as part of your query to protect against blowing up the cache size. This limit could be as large as you like, but it's still a conscious choice that the user would be making when enabling list caching.
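A minimal sketch of that opt-in check, using made-up Store/Cache interfaces and a Flag type rather than flipt's real ones:

```go
package server

import (
	"context"
	"fmt"
)

// Flag, Store, and Cache are illustrative stand-ins, not flipt's real types.
type Flag struct{ Key string }

type Store interface {
	ListFlags(ctx context.Context, limit int32, pageToken string) ([]*Flag, error)
}

type Cache interface {
	Get(key string) ([]*Flag, bool)
	Set(key string, flags []*Flag)
}

type Server struct {
	store Store
	cache Cache
}

// ListFlags only consults and populates the cache when the caller opted in by
// supplying an explicit limit, so unbounded result sets never enter the cache.
func (s *Server) ListFlags(ctx context.Context, limit int32, pageToken string) ([]*Flag, error) {
	key := fmt.Sprintf("flags:limit=%d:token=%s", limit, pageToken)

	if limit > 0 {
		if flags, ok := s.cache.Get(key); ok {
			return flags, nil
		}
	}

	flags, err := s.store.ListFlags(ctx, limit, pageToken)
	if err != nil {
		return nil, err
	}

	if limit > 0 {
		s.cache.Set(key, flags)
	}
	return flags, nil
}
```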
What do you think, @GeorgeMac?
That sounds great. Equally, since the client in the UI is controlled within this project, it is something that can be taken advantage of by the UI immediately.