go-fastly
feat: allow parallelization of batch put item
In the spirit of #429, I suppose there is nothing that prevents us to parallelize BatchModifyKVStoreKey
calls. This will help us upload huge batches of data.
Note that from what I've tested, huge parallelized batches seem to fail with 429 errors (but small ones are OK).
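The proposed parallelization can be sketched as a bounded worker pool. This is a minimal, stdlib-only sketch under stated assumptions: `upload` is an injected stand-in for a call such as `BatchModifyKVStoreKey`, and the chunk size and concurrency numbers are illustrative, not taken from the PR.

```go
package main

import (
	"fmt"
	"sync"
)

// chunk splits items into batches of at most n entries each.
func chunk(items []string, n int) [][]string {
	var batches [][]string
	for len(items) > 0 {
		end := n
		if len(items) < n {
			end = len(items)
		}
		batches = append(batches, items[:end])
		items = items[end:]
	}
	return batches
}

// uploadAll runs upload for each batch, with at most maxParallel
// uploads in flight at once, and returns the first error observed.
func uploadAll(batches [][]string, maxParallel int, upload func([]string) error) error {
	sem := make(chan struct{}, maxParallel) // bounded-concurrency semaphore
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error

	for _, b := range batches {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before starting the goroutine
		go func(batch []string) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			if err := upload(batch); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = err
				}
				mu.Unlock()
			}
		}(b)
	}
	wg.Wait()
	return firstErr
}

func main() {
	items := make([]string, 25)
	for i := range items {
		items[i] = fmt.Sprintf("key-%d", i)
	}
	// 25 items in batches of 10 -> 3 upload calls (10, 10, 5).
	calls := 0
	err := uploadAll(chunk(items, 10), 1, func(batch []string) error {
		calls++ // safe here: maxParallel is 1, so uploads are serialized
		return nil
	})
	fmt.Println(calls, err) // 3 <nil>
}
```

In a real script, `upload` would wrap the go-fastly client call, and `maxParallel` should stay small given the 429s reported above.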
👋🏻 Thanks @shavounet for opening this PR.
You mentioned...
from what I've tested, huge parallelized batches seem to fail with 429 errors (but small ones are OK)
Have you tested with this change, or were you referring to the old/current behaviour?
To try it, I made the change by copy/pasting this method into my own project. It fails fast with 429 errors with too much parallelization (like ~10 goroutines, each sending a few hundred items in a loop), but it also fails a bit later with less concurrency (2 goroutines with loops of 10k items). For my test, the keys were new and unique.
(side note: I'm not seeing rate limit mentions in documentation, but maybe I'm just missing something)
It fails fast with 429 errors with too much parallelization
OK, no problem. That seems fine to me. It's the caller's responsibility (not the API client's) to respect the API's rate limit and to react to the information that is provided (i.e. RateLimitRemaining and RateLimitReset): https://github.com/fastly/go-fastly/blob/792278f3613e80d2768fca1f3b8086cc1c968bc0/fastly/errors.go#L335-L342
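A caller reacting to 429s might look like the stdlib-only sketch below. The `httpError` type here is a hypothetical stand-in so the example is self-contained; with go-fastly you would instead use `errors.As` to obtain the error type from the linked `errors.go` and consult its rate-limit fields. The backoff base and retry count are illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// httpError is a stand-in for an API error carrying an HTTP status.
// (With go-fastly, inspect the HTTPError type from errors.go instead.)
type httpError struct{ status int }

func (e *httpError) Error() string { return fmt.Sprintf("HTTP %d", e.status) }

// withRetry retries call on HTTP 429, backing off exponentially,
// up to maxTries attempts. It returns the attempt count and last error.
func withRetry(maxTries int, base time.Duration, call func() error) (int, error) {
	var err error
	for attempt := 1; attempt <= maxTries; attempt++ {
		err = call()
		var he *httpError
		if err == nil || !errors.As(err, &he) || he.status != 429 {
			return attempt, err // success, or a non-rate-limit error
		}
		time.Sleep(base << (attempt - 1)) // 1x, 2x, 4x, ...
	}
	return maxTries, err
}

func main() {
	// Simulate an endpoint that rate-limits the first two calls.
	fails := 2
	attempts, err := withRetry(5, time.Millisecond, func() error {
		if fails > 0 {
			fails--
			return &httpError{status: 429}
		}
		return nil
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```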
Sorry, just saw your edit note. I think it's a general API rate limit and not product specific...
https://www.fastly.com/documentation/reference/api/#rate-limiting
I'll look to get this merged later today and hopefully have a new release published soon after.
I missed that info, sorry :sweat_smile:
That being said, while debugging, it seems that the SDK does not populate this information when batching, and neither does any of the wrapped errors.
Do you know how the rate limit is triggered when batching? (One per API request, even if it contains thousands of lines, or one per row?)
Thanks for your support :)
Do you know how the rate limit is triggered when batching? (One per API request, even if it contains thousands of lines, or one per row?)
Great question! I don't know the answer I'm afraid, so I'll enquire internally and find out for you 🙂
Thanks @shavounet for your patience.
We realise now there is a gap in our KV Store documentation, which we will fix (thanks for making us aware of this).
In short, there is a limit of 1000 write requests per second (and also a 1 RPS limit on non-unique keys).
These limits are unique to the KV Store system (hence the lack of values in the more generalised Fastly rate-limit response headers).
The KV Store team will be looking at how best to communicate this behaviour in future.
I'm just testing this PR locally by integrating it into the Fastly CLI, if all is well I'll merge this PR.
Nothing was broken in the integration with the Fastly CLI.
But just to be clear, this change wouldn't affect the Fastly CLI in any way, because the CLI only calls the BatchModifyKVStoreKey method once and streams the provided data through it.
This change would be more effective for users writing their own Go scripts that use the go-fastly API clients directly, as they can call the BatchModifyKVStoreKey method multiple times with different data streams.
This change is now released in: https://github.com/fastly/go-fastly/releases/tag/v9.1.0
Thanks a lot for your quick response! I'll check the release this afternoon.
Indeed, we didn't find the documentation about this KV Store limit. Adjusting our script to throttle the writes a bit allowed me to successfully push ~3M items into one KV Store. I'm just wondering what happens if we send a batch of 30k items at once? Will it automatically be throttled? Will it hit the limit for the next 30s? And what happens if we send 10 parallel batches? (It seemed to work in my test on the first loop, so 300k items, but then it randomly failed...)
Thanks @shavounet for the update!
For now I think it might be best if you open a support ticket with your follow-up questions re: limits, as the support team will be able to answer your questions more accurately.