Write tests and enhancements for OpenAI API integration
Description
This issue aims to improve the reliability and functionality of our existing OpenAI API integration. We need to implement a robust set of tests to prevent regressions and identify areas for enhancement to leverage the API's features more effectively.
Acceptance Criteria:
- Unit/Integration Tests: Implement unit tests for all helper functions and utilities related to API request preparation, response parsing, and error handling.
- Implement integration tests that simulate calls to the actual OpenAI endpoints (e.g., /v1/chat/completions, /v1/responses). These tests should cover successful responses and various failure scenarios.
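To make the "successful responses and various failure scenarios" criterion concrete, here is a minimal, self-contained sketch of what such a test could assert. The `parse_chat_completion` helper and `OpenAIAPIError` are hypothetical stand-ins, not the project's actual utilities; the payload shapes follow the public /v1/chat/completions schema.

```python
# Hypothetical sketch: names are illustrative, not the project's real helpers.

class OpenAIAPIError(Exception):
    """Raised when the API payload carries an "error" object."""

def parse_chat_completion(payload: dict) -> str:
    # OpenAI error responses contain an "error" object instead of "choices".
    if "error" in payload:
        raise OpenAIAPIError(payload["error"].get("message", "unknown error"))
    return payload["choices"][0]["message"]["content"]

# Success scenario: a well-formed completion response.
ok = {"choices": [{"message": {"role": "assistant", "content": "hi"}}]}
assert parse_chat_completion(ok) == "hi"

# Failure scenario: the endpoint returned an error object.
err = {"error": {"message": "model not found", "type": "invalid_request_error"}}
try:
    parse_chat_completion(err)
    raise AssertionError("expected OpenAIAPIError")
except OpenAIAPIError as exc:
    assert "model not found" in str(exc)
```

The same two-branch pattern (assert on the happy path, assert the raised error's message on the failure path) extends naturally to timeouts, malformed JSON, and rate-limit responses.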
Enhancements
- Implement robust support for optional request/response properties.
- Improve streaming support for individual agents: Refactor the streaming logic to provide a smoother and more granular experience when an agent's response is streamed. This should include better differentiation and handling of streamed chunks belonging to different parts of the agent's reasoning or output.
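As an illustration of the "differentiation of streamed chunks" idea, the sketch below classifies chunks by what their delta carries (content vs. tool-call vs. end-of-stream), so downstream consumers can route them separately. This is an assumption-laden sketch based on the public chat-completions streaming chunk shape, not the framework's actual streaming code.

```python
# Hypothetical per-chunk routing; the real chunk schema and names may differ.
def classify_chunk(chunk: dict) -> str:
    choice = chunk.get("choices", [{}])[0]
    delta = choice.get("delta", {})
    if delta.get("tool_calls"):
        return "tool_call"   # incremental tool-call arguments
    if delta.get("content"):
        return "content"     # incremental assistant text
    if choice.get("finish_reason"):
        return "done"        # stream terminator for this choice
    return "other"           # role announcements, empty deltas, etc.

chunks = [
    {"choices": [{"delta": {"content": "Hel"}}]},
    {"choices": [{"delta": {"tool_calls": [{"index": 0}]}}]},
    {"choices": [{"delta": {}, "finish_reason": "stop"}]},
]
assert [classify_chunk(c) for c in chunks] == ["content", "tool_call", "done"]
```

Routing by chunk kind keeps the agent-facing stream granular: text deltas can be flushed immediately while tool-call deltas are buffered until complete.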
Note that this is currently blocked until we finish #1182
Hey! If the unit tests haven’t been added yet, I’d like to implement them. Let me know if it’s free...
@Vasuk12 Absolutely, it's free. Go ahead and implement some tests.
Hi @xjacka , I’ve opened a PR that adds the first unit test module for openai_input_to_beeai_message. This is mainly to confirm that the test structure, placement, and style match what the team expects before I continue adding coverage for the other helper functions. If you’re happy with the approach, I’ll expand the tests across the rest of the OpenAI adapter utilities :)
Thank you @Vasuk12 for your contribution. We would like to also have some E2E tests. Are you willing to take a look at it?
Most welcome! Just to clarify: are the E2E tests meant to cover similar helper functions and utilities related to API request preparation, response parsing, and error handling?
I meant testing the communication between the server and the client.
Sure.
Hey @Tomas2D , I just submitted a PR for the e2e success and auth failure tests. While working on them, I spotted an issue: if you request a model that hasn't been registered (e.g., "non-existent-model"), the server crashes with a 500 Internal Server Error instead of returning a proper 404 Not Found. The issue is in beeai_framework/adapters/openai/serve/chat_completion/api.py, where the handler doesn't catch the RuntimeError raised by self._model_factory. It would be helpful if you could confirm this error from your side as well. Let me know if you want me to raise an issue about this and then open a PR to add a try/except block there!
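For reference, the proposed fix amounts to mapping the factory's RuntimeError to a 404 instead of letting it propagate as a 500. The sketch below is framework-agnostic and purely illustrative; `resolve_model`, the registry, and the status-tuple return shape are assumptions, not the actual handler in api.py.

```python
# Hypothetical sketch of the try/except fix: a missing model becomes a
# 404 payload rather than an unhandled RuntimeError (i.e., a 500).
def resolve_model(model_factory, model_name: str):
    try:
        return 200, model_factory(model_name)
    except RuntimeError as exc:
        return 404, {"error": {"message": str(exc), "type": "not_found"}}

# Stand-in for the server's model registry / factory.
registry = {"gpt-4o-mini": object()}

def model_factory(name: str):
    if name not in registry:
        raise RuntimeError(f"model '{name}' is not registered")
    return registry[name]

status, _ = resolve_model(model_factory, "gpt-4o-mini")
assert status == 200

status, body = resolve_model(model_factory, "non-existent-model")
assert status == 404
assert "not registered" in body["error"]["message"]
```

In the real handler the 404 branch would presumably raise the web framework's HTTP error type rather than return a tuple, but the error-mapping logic is the same.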
@Vasuk12 Thanks for your insight, feel free to fix this bug directly in the existing PR with tests.
Hi, are there any other tests you would like to see implemented?