
Write tests and enhancements for OpenAI API integration

Open xjacka opened this issue 4 months ago • 9 comments

Description

This issue aims to improve the reliability and functionality of our existing OpenAI API integration. We need to implement a robust set of tests to prevent regressions and identify areas for enhancement to leverage the API's features more effectively.

Acceptance Criteria:

  1. Unit/Integration Tests: Implement unit tests for all helper functions and utilities related to API request preparation, response parsing, and error handling.

Implement integration tests that simulate calls to the actual OpenAI endpoints (e.g., /v1/chat/completions, /v1/responses). These tests should cover successful responses and various failure scenarios.
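As a starting point, a unit test for response parsing and error handling might look like the sketch below. The helper names (`parse_chat_completion`, `OpenAIAPIError`) are illustrative stand-ins, not the framework's actual API:

```python
# Hypothetical sketch: parse_chat_completion and OpenAIAPIError are
# stand-ins for the framework's real response-parsing helpers.

class OpenAIAPIError(Exception):
    """Raised when the API returns an error payload instead of a completion."""

def parse_chat_completion(payload: dict) -> str:
    """Extract the assistant text from a /v1/chat/completions payload."""
    if "error" in payload:
        raise OpenAIAPIError(payload["error"].get("message", "unknown error"))
    return payload["choices"][0]["message"]["content"]

# Success path: a well-formed completion payload yields the message text.
ok = {"choices": [{"message": {"role": "assistant", "content": "hi"}}]}
assert parse_chat_completion(ok) == "hi"

# Failure path: an error payload raises instead of returning garbage.
try:
    parse_chat_completion({"error": {"message": "invalid request"}})
    raise AssertionError("expected OpenAIAPIError")
except OpenAIAPIError:
    pass
```

The same pattern extends to request-preparation helpers: feed in a known input, assert on the exact shape of the output, and cover the error branches explicitly.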

Enhancements

  • Implement robust support for optional request/response properties.

  • Improve streaming support for individual agents: Refactor the streaming logic to provide a smoother and more granular experience when an agent's response is streamed. This should include better differentiation and handling of streamed chunks belonging to different parts of the agent's reasoning or output.
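One possible shape for the granular streaming enhancement is to classify each streamed delta by which part of the agent's response it belongs to. This is a minimal sketch; the field names (`reasoning_content`, `content`) are assumptions, not the real chunk schema:

```python
# Hypothetical sketch: tag streamed deltas by channel so consumers can
# distinguish reasoning chunks from output chunks. Field names are assumed.

def classify_chunks(deltas):
    for delta in deltas:
        if delta.get("reasoning_content"):
            yield ("reasoning", delta["reasoning_content"])
        elif delta.get("content"):
            yield ("output", delta["content"])
        # tool-call deltas, finish reasons, etc. would get their own branches

stream = [{"reasoning_content": "think"}, {"content": "Hello"}, {"content": "!"}]
assert list(classify_chunks(stream)) == [
    ("reasoning", "think"), ("output", "Hello"), ("output", "!"),
]
```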

xjacka avatar Oct 22 '25 11:10 xjacka

Note that this is currently blocked until we finish #1182

Tomas2D avatar Oct 23 '25 08:10 Tomas2D

Hey! If the unit tests haven’t been added yet, I’d like to implement them. Let me know if it’s free...

Vasuk12 avatar Nov 10 '25 09:11 Vasuk12

@Vasuk12 Absolutely, it's free. Go ahead and implement some tests.

xjacka avatar Nov 11 '25 10:11 xjacka

> @Vasuk12 Absolutely, it's free. Go ahead and implement some tests.

Hi @xjacka , I’ve opened a PR that adds the first unit test module for openai_input_to_beeai_message. This is mainly to confirm that the test structure, placement, and style match what the team expects before I continue adding coverage for the other helper functions. If you’re happy with the approach, I’ll expand the tests across the rest of the OpenAI adapter utilities :)

Vasuk12 avatar Nov 11 '25 17:11 Vasuk12
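For context, a unit test for a message-conversion helper typically pins down the role and content mapping. The converter below is a stand-in mimicking what `openai_input_to_beeai_message` might do; the real function lives in the framework's OpenAI adapter:

```python
# Hypothetical sketch: Message and openai_input_to_message are stand-ins
# for the framework's message type and its OpenAI adapter converter.
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str

def openai_input_to_message(item: dict) -> Message:
    """Convert one OpenAI-style chat message dict into the internal type."""
    return Message(role=item["role"], text=item["content"])

def test_converts_user_message():
    msg = openai_input_to_message({"role": "user", "content": "Hi"})
    assert msg.role == "user" and msg.text == "Hi"

test_converts_user_message()
```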

Thank you @Vasuk12 for your contribution. We would like to also have some E2E tests. Are you willing to take a look at it?

Tomas2D avatar Nov 12 '25 13:11 Tomas2D

> Thank you @Vasuk12 for your contribution. We would like to also have some E2E tests. Are you willing to take a look at it?

Most welcome! Just to clarify: are the E2E tests meant to cover similar helper functions and utilities related to API request preparation, response parsing, and error handling?

Vasuk12 avatar Nov 12 '25 13:11 Vasuk12

I meant testing the communication between the server and the client.

Tomas2D avatar Nov 13 '25 16:11 Tomas2D
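An E2E test of server/client communication can be sketched with only the standard library: start a stub `/v1/chat/completions` server in a background thread, then exercise the full request/response round trip. A real test would point the client at the framework's actual serve app instead of this stub:

```python
# Hypothetical E2E sketch using only the stdlib: a stub chat-completions
# server plus a client round trip. The stub stands in for the real server.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        last = body["messages"][-1]["content"]
        reply = {"choices": [{"message": {"role": "assistant",
                                          "content": f"echo: {last}"}}]}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/v1/chat/completions",
    data=json.dumps({"messages": [{"role": "user", "content": "ping"}]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    payload = json.loads(resp.read())
server.shutdown()

assert payload["choices"][0]["message"]["content"] == "echo: ping"
```

Failure scenarios (bad auth, malformed body, unknown model) follow the same pattern with the stub returning the corresponding status code.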

> I meant testing the communication between the server and the client.

Sure.

Vasuk12 avatar Nov 19 '25 02:11 Vasuk12

> I meant testing the communication between the server and the client.

Hey @Tomas2D , I just submitted a PR for the e2e success and auth failure tests. While working on them, I spotted an issue: if you request a model that hasn't been registered (e.g., "non-existent-model"), the server crashes with a 500 Internal Server Error instead of returning a proper 404 Not Found. The issue is in beeai_framework/adapters/openai/serve/chat_completion/api.py, where the handler doesn't catch the RuntimeError raised by self._model_factory. It would be helpful if you could confirm this error on your side as well. Let me know if you want me to raise an issue about this and then a PR adding a try/except block there!

Vasuk12 avatar Nov 26 '25 20:11 Vasuk12
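The proposed fix can be sketched as catching the factory's RuntimeError and mapping it to a 404. `handle_chat_completion`, `model_factory`, and the registry dict below are illustrative stand-ins for the handler in `beeai_framework/adapters/openai/serve/chat_completion/api.py`, not its actual code:

```python
# Hypothetical sketch of the fix: translate "model not registered" into a
# 404 response instead of letting the RuntimeError escape as a 500.

def model_factory(model: str, registry: dict):
    """Stand-in for self._model_factory: raises if the model is unknown."""
    try:
        return registry[model]
    except KeyError:
        raise RuntimeError(f"Model '{model}' is not registered")

def handle_chat_completion(body: dict, registry: dict) -> tuple[int, dict]:
    """Stand-in handler returning an (http_status, response_body) pair."""
    try:
        model = model_factory(body["model"], registry)
    except RuntimeError as exc:
        return 404, {"error": {"message": str(exc), "type": "not_found"}}
    return 200, {"model": body["model"], "object": "chat.completion"}

registry = {"gpt-4o": object()}
assert handle_chat_completion({"model": "gpt-4o"}, registry)[0] == 200
assert handle_chat_completion({"model": "non-existent-model"}, registry)[0] == 404
```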

@Vasuk12 Thanks for your insight, feel free to fix this bug directly in the existing PR with tests.

xjacka avatar Dec 02 '25 12:12 xjacka

Hi, are there any other tests you would like to see implemented?

Vasuk12 avatar Dec 10 '25 08:12 Vasuk12