covid-api
Change default rows limit to 1000
I'm still not sure how we should handle the `limit` parameter. By default it is now 1000; before it was 100. The problem is:
- if we leave the limit on by default, users may not know about it and risk not getting all the data
- if we leave it off by default, they will get thousands of records
Probably the best option would be to implement pagination, but I've no idea how to do it with FastAPI. Maybe I should have a look.
Update: this doesn't look good 😓

The first problem could be mitigated by using some kind of hypermedia standard, like HAL.
It would mean wrapping the response in an object and including something like this:
```json
"_links": {
  "self": { "href": "/v1/jh/daily-reports/?offset=1000&limit=1000" },
  "next": { "href": "/v1/jh/daily-reports/?offset=2000&limit=1000" },
  "previous": { "href": "/v1/jh/daily-reports/?offset=0&limit=1000" }
}
```
Or we could include more documentation around how to paginate?
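As a rough illustration, such links could be generated from the `offset` and `limit` values plus a total row count. This is only a sketch: the `make_links` helper, the `total` argument, and the wrapped response shape are assumptions, not anything the project currently has.

```python
def make_links(path, offset, limit, total):
    """Build HAL-style _links for an offset/limit paginated endpoint.

    `path`, the parameter names, and the wrapping shape are
    illustrative assumptions, not the project's actual API.
    """
    def href(off):
        return {"href": f"{path}?offset={off}&limit={limit}"}

    links = {"self": href(offset)}
    if offset + limit < total:          # there is at least one more page
        links["next"] = href(offset + limit)
    if offset > 0:                      # not on the first page
        links["previous"] = href(max(offset - limit, 0))
    return {"_links": links}
```

With `offset=1000`, `limit=1000`, and a hypothetical `total=3000`, this produces exactly the `self`/`next`/`previous` hrefs shown above.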
> The first problem could be mitigated by using some kind of hypermedia standard, like HAL.
> It would mean wrapping the response in an object and including something like this:
> `"_links": { "self": { "href": "/v1/jh/daily-reports/?offset=1000&limit=1000" }, "next": { "href": "/v1/jh/daily-reports/?offset=2000&limit=1000" }, "previous": { "href": "/v1/jh/daily-reports/?offset=0&limit=1000" } }`
> Or we could include more documentation around how to paginate?
This could be a good option. I was wondering if there is any external library to do this in a semi-automated way (like Django REST Framework does) or if we have to implement it from scratch 🤔
P.S.: as a temporary workaround, I would just increase the default limit to 1000 and make it clear in the README that if some results are missing, users should override that value with a bigger one. For now we don't have too many rows, so we can afford it if they override the limit.
What do you think?
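In practice the workaround amounts to something like the following. The `paginate` helper is purely illustrative (the real endpoint presumably slices in SQL, not in Python); only the proposed default of 1000 comes from the discussion.

```python
DEFAULT_LIMIT = 1000  # proposed new default, raised from 100

def paginate(rows, offset=0, limit=DEFAULT_LIMIT):
    """Return one page of rows.

    Illustrative only: callers who suspect their results were
    truncated would pass a larger `limit`, as the README would advise.
    """
    return rows[offset:offset + limit]
```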
It seems like we would have to implement it ourselves, but we are already implementing pagination ourselves with `limit` and `offset`, so I guess the work involved would be generating URLs from that and extending the pydantic models.

Since we already support `offset`, what about adding an example to our readme that pages through everything with a page size of 1000?
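Such a readme example could look roughly like this. Here `fetch_page` is a stand-in for an HTTP GET against the endpoint (e.g. `/v1/jh/daily-reports/?offset=...&limit=...`); the function name and loop are illustrative, not existing project code.

```python
def fetch_all(fetch_page, page_size=1000):
    """Page through every row using offset/limit.

    `fetch_page(offset, limit)` is a hypothetical stand-in for an HTTP
    call to the API and must return a list of rows for that page.
    """
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        rows.extend(page)
        if len(page) < page_size:  # short page means no more data
            return rows
        offset += page_size
```

The stopping condition is the weak point the next comment raises: the client only learns it has reached the end by receiving a short (or empty) page.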
> It seems like we would have to implement it ourselves, but we are already implementing pagination ourselves with `limit` and `offset`, so I guess the work involved would be generating URLs from that and extending the pydantic models. Since we already support `offset`, what about adding an example to our readme that pages through everything with a page size of 1000?
Would an example be enough without those previous/next links? Also, the user cannot know upfront how many rows will be returned by the query. If there were a `next` link available, one could follow it and ask for 1000 results every time until `next` is `None`, but without links I'm not sure how the user would know when to stop.
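With HAL-style links the client loop would not need to guess: it just follows `next` until it is absent. A sketch, where `get_json` stands in for an HTTP GET and the `{"data": ..., "_links": ...}` response shape is an assumption:

```python
def fetch_all_via_links(get_json, start="/v1/jh/daily-reports/?offset=0&limit=1000"):
    """Follow HAL-style `next` links until none remains.

    `get_json(url)` is a hypothetical stand-in for an HTTP GET that
    returns the wrapped response: {"data": [...], "_links": {...}}.
    The wrapping shape is an assumption, not the current API.
    """
    rows, url = [], start
    while url is not None:
        body = get_json(url)
        rows.extend(body["data"])
        next_link = body["_links"].get("next")
        url = next_link["href"] if next_link else None
    return rows
```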
P.S.: I found this: https://fastapi-contrib.readthedocs.io/en/latest/_modules/fastapi_contrib/pagination.html but I'm not sure how it should be used. It's probably worth investigating.