
rate limiting: how to?

Open • MichaelMure opened this issue 5 years ago • 5 comments

As far as I can tell, this package doesn't have any specific code path to handle GitHub rate limiting.

It'd be nice to at least return a specific error on HTTP 429 so that retry/backoff could be implemented outside of this package. Ideally this would be implemented within githubv4.

MichaelMure · Feb 23 '20 18:02

> As far as I can tell, this package doesn't have any specific code path to handle GitHub rate limiting.

That is true. However, note that you can always query the current rate limit by adding a rateLimit field to your query. See https://developer.github.com/v4/query/#ratelimit.
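For example, here's a minimal sketch of a githubv4 query with the rateLimit field added (the viewer part of the query is only illustrative):

	var query struct {
		Viewer struct {
			Login githubv4.String
		}
		RateLimit struct {
			Limit     githubv4.Int
			Cost      githubv4.Int
			Remaining githubv4.Int
			ResetAt   githubv4.DateTime
		}
	}
	if err := client.Query(ctx, &query, nil); err != nil {
		log.Fatal(err)
	}
	// query.RateLimit.Remaining and query.RateLimit.ResetAt can then drive
	// client-side throttling before the limit is actually exhausted.
	fmt.Println(query.RateLimit.Remaining, query.RateLimit.ResetAt)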

> It'd be nice to at least return a specific error on HTTP 429 so that retry/backoff could be implemented outside of this package.

Yes, this should be done. It's being tracked by issue #38.

> Ideally this would be implemented within githubv4.

Maybe, in the long term. The first step is to implement some solution, then see if it can be generalized well enough. If so, we can consider factoring it into githubv4.

The GitHub API v3 package automatically detects when a returned GitHub error was due to the rate limit being depleted, and computes how long until the rate limit will reset. It's also smart enough not to talk to GitHub as long as the rate limit reset time hasn't passed yet. Hopefully it'll be possible to do similar things here with v4.
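For illustration, a rough sketch of what handling that typed error looks like with the v3 library (assuming google/go-github; RateLimitError and its Rate.Reset field come from that package, while the repository names and module version here are placeholders):

	import (
		"context"
		"errors"
		"fmt"
		"time"

		"github.com/google/go-github/v62/github"
	)

	func getRepoWithBackoff(ctx context.Context, client *github.Client) error {
		_, _, err := client.Repositories.Get(ctx, "octocat", "hello-world")

		var rateErr *github.RateLimitError
		if errors.As(err, &rateErr) {
			// go-github reports when the limit resets, so we can wait it out before retrying.
			wait := time.Until(rateErr.Rate.Reset.Time)
			fmt.Printf("rate limited, waiting %v\n", wait)
			time.Sleep(wait)
			// retry the call here...
		}
		return err
	}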

dmitshur · Feb 23 '20 23:02

@dmitshur I was interested as well, so I opened a post on the GitHub community forum: https://github.community/t5/GitHub-API-Development-and/Documentation-about-rate-limit-error-in-V4/m-p/52770#M4508

It is not clear from the documentation what the API will return when the error occurs.

MatteoGioioso · Apr 06 '20 11:04

FWIW, this is how it's done in git-bug: https://github.com/MichaelMure/git-bug/blob/a52c474f1184b59f109934be00f10c8088b890cd/bridge/github/client.go#L133-L134

MichaelMure · Nov 15 '22 18:11

Error handling is addressed in this fork: https://github.com/jbrekelmans/go-githubv4

See the README of that fork.

brekelj1 · Mar 10 '24 00:03

I see docs for this in https://docs.github.com/en/rest/using-the-rest-api/rate-limits-for-the-rest-api?apiVersion=2022-11-28#exceeding-the-rate-limit

I'm currently hitting a secondary rate limit with a Retry-After header on my query (and GitHub is also sending me 502s). I've got a basic RoundTripper to deal with it, but I'd appreciate more support from the library (in particular, I'd like easier OTel tracing integration).

Would you be open to a PR that returns a more strongly typed error from client.do?

Right now the code is:

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		return fmt.Errorf("non-200 OK status code: %v body: %q", resp.Status, body)
	}

I'd like to change that to:

	if resp.StatusCode != http.StatusOK {
		body, _ := io.ReadAll(resp.Body)
		// this struct would implement Error() like "non-200 OK status code: %v body: %q", e.Resp.Status, body
		// but clients would also be able to switch on the error type and inspect the response themselves
		return NonHTTPStatusOKError{Resp: resp}
	}

This would let users implement rate limiting without resorting to an http.RoundTripper.
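For example, a caller could then do something like this (just a sketch; NonHTTPStatusOKError is the hypothetical type proposed above, not something the library exposes today):

	err := client.Query(ctx, &query, nil)

	var httpErr githubv4.NonHTTPStatusOKError // hypothetical type from the proposal above
	if errors.As(err, &httpErr) {
		if httpErr.Resp.StatusCode == http.StatusTooManyRequests {
			// read Retry-After from httpErr.Resp.Header, sleep, then retry the query
		}
	}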

For reference, here's my RoundTripper code (NOT production quality, it doesn't implement good chunks of what GitHub asks for):

type retryTransport struct{}

func (s *retryTransport) RoundTrip(r *http.Request) (*http.Response, error) {

	const MAX_TRIES = 3

	var resp *http.Response
	var err error

	for range MAX_TRIES {
		// NOTE: GraphQL requests are POSTs, so a production version would need
		// r.GetBody (or a buffered copy) to replay the request body on each retry.
		resp, err = http.DefaultTransport.RoundTrip(r)
		if err != nil {
			// transport-level failure: resp may be nil, so don't inspect it
			return nil, err
		}

		if resp.StatusCode == http.StatusOK {
			return resp, nil
		}

		respBytes, _ := httputil.DumpResponse(resp, true)
		fmt.Printf("%s\n", respBytes)

		// check for Bad Gateway
		if resp.StatusCode == http.StatusBadGateway {
			fmt.Printf("ERROR: Bad Gateway\n")
			toSleep := 65 * time.Second
			fmt.Printf("Sleeping for %v\n", toSleep)
			time.Sleep(toSleep)
			continue
		}

		// check for Retry-After header (sent with secondary rate limits)
		retryAfter := resp.Header.Get("Retry-After")
		if retryAfter != "" {
			toWait, convErr := strconv.Atoi(retryAfter)
			if convErr != nil {
				panic("could not convert Retry-After header to int " + retryAfter + convErr.Error())
			}

			toSleep := time.Duration(toWait+5) * time.Second
			fmt.Printf("Sleeping for %v\n", toSleep)
			time.Sleep(toSleep)
			continue
		}

		panic("unknown HTTP error")
	}

	fmt.Printf("Max retries failed\n")

	return resp, err
}

And use it like this:

	src := oauth2.StaticTokenSource(
		&oauth2.Token{AccessToken: args.GitHubToken},
	)

	oauth2Client := http.Client{
		Transport: &retryTransport{},
	}
	// passing the custom client via the context makes oauth2.NewClient layer
	// its token-injecting transport on top of retryTransport
	oauth2Context := context.WithValue(context.Background(), oauth2.HTTPClient, &oauth2Client)

	httpClient := oauth2.NewClient(oauth2Context, src)

	client := githubv4.NewClient(httpClient)

bkane-msft · Jun 28 '24 21:06