
[🐛 Bug]: AI Semi Automatic Correlation should return warning if the input is large

Open tuantran0910 opened this issue 9 months ago • 1 comment

When I use the AI Semi Automatic Correlation to create incidents for 50 alerts (each alert may have a long description), I got this error in the logs:

{"worker_type": "uvicorn", "asctime": "2025-04-04 04:21:21,543", "message": "AI incident creation failed: Error code: 429 - {'error': {'message': 'Request too large for gpt-4o in organization org-P78GgLFuls46zPayJbKQygLo on tokens per min (TPM): Limit 30000, Requested 31347. The input or output tokens must be reduced in order to run successfully. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}", "levelname": "ERROR", "name": "keep.api.bl.ai_suggestion_bl", "filename": "ai_suggestion_bl.py", "otelTraceID": "1a7ee2b70dc598dd165fa82101e714b0", "otelSpanID": "91c955b9f1a39aea", "otelTraceSampled": false, "otelServiceName": "keep-api", "threadName": "MainThread", "process": 16, "module": "ai_suggestion_bl"}

However, the frontend displayed it like this:

[screenshot of the frontend error display]

I think it would be better to validate the input token length based on the model name (if possible), or at least display a more useful error.

tuantran0910 · Apr 04 '25

The issue you're encountering is the model's token limit being exceeded when the AI Semi Automatic Correlation feature builds its prompt. The error message indicates that the request exceeded the tokens per minute (TPM) limit for the model being used.

Currently, there isn't a mechanism in the codebase to pre-validate the token length before making a request to the OpenAI API [1]. Such a validation could prevent these errors by checking the input size against the model's token limits before sending the request. Additionally, improving the error handling so the frontend displays a more informative message would enhance the user experience.
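As a minimal sketch of what such a pre-validation could look like, the snippet below counts tokens with `tiktoken` before calling the API. The `TPM_BUDGET` constant and the `validate_prompt` helper are assumptions for illustration only: the 30,000 figure is taken from the rate limit in the log above, and a real budget should come from configuration and leave headroom for output tokens, since TPM counts both input and output.

```python
import tiktoken

# Assumed budget, taken from the "Limit 30000" figure in the error log above.
# In practice this should be configurable per model/organization.
TPM_BUDGET = 30_000

def count_tokens(text: str, model: str = "gpt-4o") -> int:
    """Count tokens the way the OpenAI tokenizer would for this model."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Unknown model name: fall back to the encoding gpt-4o uses.
        encoding = tiktoken.get_encoding("o200k_base")
    return len(encoding.encode(text))

def validate_prompt(prompt: str, model: str = "gpt-4o") -> None:
    """Raise a clear error before the API call if the prompt is too large."""
    tokens = count_tokens(prompt, model)
    if tokens > TPM_BUDGET:
        raise ValueError(
            f"Prompt is {tokens} tokens, above the {TPM_BUDGET}-token budget "
            f"for {model}; reduce the number of alerts or shorten their "
            "descriptions."
        )
```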

To address this, you could consider:

  1. Adding a pre-validation step that checks the token length against the model's limits before making the API request (as sketched above).
  2. Updating the frontend to display more specific error messages that reflect the actual issue, such as indicating that the input size is too large (see the error-mapping sketch after this list).

These changes would help users understand the cause of the error and take corrective action, such as reducing the input size.
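On the error-reporting side, here is a hedged sketch of how the backend could map the provider's 429 into an actionable message that the frontend can display verbatim. `suggest_incidents_safely` is a hypothetical stand-in for Keep's actual AI-suggestion call, not the project's real API:

```python
import openai

def suggest_incidents_safely(client: openai.OpenAI, prompt: str) -> dict:
    """Call the model and translate rate-limit failures into a clear message."""
    try:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return {"ok": True, "result": response.choices[0].message.content}
    except openai.RateLimitError as exc:
        # The 429 in the log surfaces as RateLimitError in the openai SDK;
        # return something the UI can show instead of a generic failure.
        return {
            "ok": False,
            "error": (
                "The selected alerts are too large for the AI model's rate "
                "limit. Try correlating fewer alerts at once, or shorten "
                "their descriptions."
            ),
            "detail": str(exc),
        }
```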

To continue talking to Dosu, mention @dosu.



dosubot[bot] · Apr 04 '25