
Return response headers too

Open sofer-eg opened this issue 1 year ago • 5 comments

Hi! The response headers contain information about the rate limits, which can help avoid 429 errors. Maybe return two values: the response body and the response headers?

sofer-eg avatar Jul 21 '23 13:07 sofer-eg

I think you can use error handling to catch the 429 status code from any CreateChatCompletion call in go-openai and handle it. Here is an example:

```go
stream, err := client.CreateChatCompletionStream(ctxgpt, req)
if err != nil {
	err = OpenaiError(err) // map the API error to a user-facing error
	return err
}
```

```go
// OpenaiError maps an openai.APIError to a user-facing error message,
// switching on the HTTP status code of the failed request.
func OpenaiError(openaierr error) error {
	var apiErr = &openai.APIError{}
	if errors.As(openaierr, &apiErr) {
		println("openai error: " + apiErr.Type)
		if code, ok := apiErr.Code.(string); ok {
			println("openai error: " + code)
		}

		switch apiErr.HTTPStatusCode {
		case 400:
			if code, ok := apiErr.Code.(string); ok && code == "content_filter" {
				return errors.New("the content violates policy and has been blocked")
			}
			return errors.New("invalid request data: " + apiErr.Message)
		case 401:
			return errors.New("invalid authentication, please contact the administrator")
		case 404:
			return errors.New("model does not exist or is not supported: " + apiErr.Message)
		case 429:
			return errors.New("rate limit exceeded or service overloaded, please try again later: " + apiErr.Message)
		case 500:
			return errors.New("internal server error, please try again later or contact the administrator\n" + apiErr.Message)
		default:
			return errors.New("unknown error: " + apiErr.Message)
		}
	}
	return openaierr
}
```


ZeroDeng01 avatar Jul 21 '23 15:07 ZeroDeng01

Yes, I can do that, but I would rather prevent the 429 error and call the API again after the limit resets.
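That retry-after-reset idea can be sketched without library support. Below is a minimal, self-contained example; the `retryAfter429` helper and `rateLimitError` type are made up for illustration and stand in for go-openai's `openai.APIError` with `HTTPStatusCode == 429`:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// rateLimitError stands in for openai.APIError with HTTPStatusCode == 429.
type rateLimitError struct{ msg string }

func (e *rateLimitError) Error() string { return e.msg }

// retryAfter429 calls fn and, on a rate-limit error, sleeps for delay and
// tries again, up to maxRetries additional attempts. Any other error is
// returned immediately.
func retryAfter429(fn func() error, maxRetries int, delay time.Duration) error {
	var err error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		var rlErr *rateLimitError
		if !errors.As(err, &rlErr) {
			return err // not a rate-limit error; give up immediately
		}
		time.Sleep(delay)
	}
	return err
}

// demo simulates an API that returns a 429-style error twice, then succeeds.
func demo() (int, error) {
	calls := 0
	err := retryAfter429(func() error {
		calls++
		if calls < 3 {
			return &rateLimitError{"rate limited"}
		}
		return nil
	}, 5, time.Millisecond)
	return calls, err
}

func main() {
	calls, err := demo()
	fmt.Println(calls, err) // 3 <nil>
}
```

In practice the fixed `delay` would be replaced by the reset duration reported in the rate-limit headers, which is exactly why exposing those headers is being requested here.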

sofer-eg avatar Jul 24 '23 06:07 sofer-eg

@sofer-eg Thanks for the suggestion. What specific information are you looking for? If possible, please provide detailed information following the issue template.

vvatanabe avatar Jul 24 '23 09:07 vvatanabe


name: Feature request
about: Add headers in the response for rate limiting
title: 'Rate limiting headers in the response'
labels: enhancement
assignees: ''


Is your feature request related to a problem? Please describe. No

Describe the solution you'd like I would like the response to include the current rate-limit status as described here: https://platform.openai.com/docs/guides/rate-limits/rate-limits-in-headers I need this data when the call succeeds, but also when the limit has been reached

Thank you!

Arvi89 avatar Aug 30 '23 08:08 Arvi89

(screenshot of a local patch) I did something like this locally: I attach the rate-limit headers to every response in sendRequest. It works great for me, though I'm not sure it's the best implementation. But that's basically what I need :)

Arvi89 avatar Aug 31 '23 08:08 Arvi89