AI Agent for API Testing & Tool Generation

Open ashitaprasad opened this issue 9 months ago • 14 comments

Tell us about the task you want to perform and are unable to do so because the feature is not available

Develop an AI Agent which leverages the power of Large Language Models (LLMs) to automate and enhance the process of testing APIs. Also, simplify the process of converting APIs into structured tool definitions to enable seamless integration with popular AI agent frameworks like crewAI, smolagents, pydantic-ai, langgraph, etc.
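
To make the "API → tool definition" part concrete, here is a minimal sketch of converting a single endpoint into the JSON-Schema-style function/tool format that most agent frameworks (crewAI, pydantic-ai, langgraph, etc.) can ingest. The `build_tool_definition` helper, the endpoint, and its parameters are all hypothetical and not part of API Dash today.

```python
from typing import Any


def build_tool_definition(name: str, description: str, base_url: str,
                          path: str, method: str,
                          params: dict[str, dict[str, Any]]) -> dict:
    """Turn one API endpoint into a JSON-Schema-style tool definition."""
    properties: dict[str, dict[str, Any]] = {}
    required: list[str] = []
    for pname, spec in params.items():
        spec = dict(spec)                    # copy so the caller's dict is untouched
        if spec.pop("required", False):      # lift the per-parameter flag into the schema
            required.append(pname)
        properties[pname] = spec
    return {
        "name": name,
        "description": description,
        # Where the agent should actually send the request at call time.
        "metadata": {"url": base_url + path, "method": method.upper()},
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


# Hypothetical endpoint, used purely for illustration.
weather_tool = build_tool_definition(
    name="get_weather",
    description="Fetch the current weather for a city.",
    base_url="https://api.example.com",
    path="/v1/weather",
    method="get",
    params={
        "city": {"type": "string", "description": "City name", "required": True},
        "units": {"type": "string", "enum": ["metric", "imperial"]},
    },
)
print(weather_tool["name"], "->", weather_tool["metadata"]["url"])
```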

Traditional API testing involves manually crafting requests, validating responses, and writing test cases. However, AI Agents can significantly streamline this process by generating test cases, validating API responses against expected outputs, and even suggesting improvements based on API documentation. Developers can describe test scenarios in natural language, and the agent can automatically generate API requests, parameter variations, and edge cases. It can also interpret API responses, checking them for correctness, consistency, and performance benchmarks. This reduces manual effort while increasing coverage, making API testing smarter and more efficient.
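
As a rough illustration of that loop, the sketch below assumes a hypothetical `generate_test_cases` function standing in for the LLM call, and uses the `requests` library to execute and validate the generated cases; the endpoint and expected statuses are made up.

```python
import requests  # third-party HTTP client, assumed available


def generate_test_cases(scenario: str) -> list[dict]:
    """Placeholder for the LLM call.

    The real agent would prompt a backend LLM with the scenario plus the API
    documentation and parse its reply into cases; hard-coded here so the
    sketch stays self-contained.
    """
    return [
        {"name": "valid city", "method": "GET",
         "url": "https://api.example.com/v1/weather",
         "params": {"city": "Paris"}, "expect_status": 200},
        {"name": "missing city", "method": "GET",
         "url": "https://api.example.com/v1/weather",
         "params": {}, "expect_status": 400},
    ]


def run_test_case(case: dict) -> bool:
    """Execute one generated case and compare the status code to the expectation."""
    try:
        resp = requests.request(case["method"], case["url"],
                                params=case.get("params"), timeout=10)
        ok = resp.status_code == case["expect_status"]
        detail = f"got {resp.status_code}, expected {case['expect_status']}"
    except requests.RequestException as exc:  # network failure, timeout, etc.
        ok, detail = False, f"request failed: {exc}"
    print(f"{'PASS' if ok else 'FAIL'}: {case['name']} ({detail})")
    return ok


if __name__ == "__main__":
    scenario = "Check that the weather endpoint handles a valid and a missing city."
    results = [run_test_case(c) for c in generate_test_cases(scenario)]
    print(f"{sum(results)}/{len(results)} checks passed")
```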

You are also required to prepare a benchmark dataset & evaluations so that the right backend LLM can be selected for the end user.
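
One possible shape for that benchmarking requirement, sketched under the assumption that each benchmark entry pairs a natural-language scenario with the test intents a good model should propose: `call_model` is a placeholder for the provider-specific LLM call, and the entries, model names, and scores are illustrative only.

```python
import json

# Hypothetical benchmark entries: each pairs a scenario with the set of
# test intents a good model is expected to cover.
BENCHMARK = [
    {"scenario": "Validate the login endpoint",
     "expected_intents": {"valid credentials", "wrong password", "empty body"}},
    {"scenario": "Validate pagination on /users",
     "expected_intents": {"first page", "out-of-range page", "negative page"}},
]


def call_model(model_name: str, scenario: str) -> set[str]:
    """Placeholder for the provider-specific LLM call.

    A real harness would prompt `model_name` and parse the test intents it
    proposes; a canned answer is returned here so the sketch runs.
    """
    return {"valid credentials", "wrong password"} if "login" in scenario else {"first page"}


def coverage_score(predicted: set[str], expected: set[str]) -> float:
    """Fraction of expected test intents the model actually proposed."""
    return len(predicted & expected) / len(expected) if expected else 0.0


def evaluate(models: list[str]) -> dict[str, float]:
    """Average intent coverage per candidate model across the benchmark."""
    scores = {}
    for model in models:
        per_item = [coverage_score(call_model(model, item["scenario"]),
                                   item["expected_intents"])
                    for item in BENCHMARK]
        scores[model] = sum(per_item) / len(per_item)
    return scores


if __name__ == "__main__":
    print(json.dumps(evaluate(["model-a", "model-b"]), indent=2))
```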

ashitaprasad avatar Feb 23 '25 16:02 ashitaprasad

@ashitaprasad This is an amazing idea! Automating API testing with LLMs can be a game-changer, making the process smarter and more efficient. I’ve been working on AI-powered automation and API workflows, and this aligns perfectly with what I’ve explored.

With experience in open-source contributions (around 10-15 PRs merged across 2-3 organisations) and projects involving AI agents, API automation, and scalable architectures, I'd love to dive deeper into this! Also, integrating structured tool definitions with crewAI, smolagents, pydantic-ai, and langgraph is something I'm really excited about.

Let me know if I can start researching and create a flow for this.

akshayw1 avatar Feb 27 '25 21:02 akshayw1

@akshayw1 Sure go ahead 👍

ashitaprasad avatar Feb 27 '25 21:02 ashitaprasad

I have prepared a detailed doc for the same, with the architecture and PoC ready. Is it possible to discuss, @ashitaprasad?

akshayw1 avatar Feb 28 '25 05:02 akshayw1

?? @ashitaprasad

akshayw1 avatar Mar 01 '25 07:03 akshayw1

@ashitaprasad Hi! I'm super excited about this AI Agent project for GSoC 2025. As a Python developer with competitive programming experience, I'd love to contribute, especially on test case generation or benchmarking. @akshayw1, great to see your progress. Could I take a look at your doc to align my efforts? Any specific areas you suggest I focus on? Looking forward to collaborating!

MehrazRumman avatar Mar 01 '25 13:03 MehrazRumman

@akshayw1 We have updated the application guide here, where you can learn how to share your idea details, along with the architecture and implementation plan, and get feedback.

ashitaprasad avatar Mar 01 '25 15:03 ashitaprasad

@MehrazRumman GSoC requires you to apply individually and not collaborate with other candidates. You can go through the updated application guide here to learn how you can share your ideas and get feedback.

ashitaprasad avatar Mar 01 '25 15:03 ashitaprasad

@ashitaprasad Thank you so much for the response !

MehrazRumman avatar Mar 01 '25 16:03 MehrazRumman

Hi @ashitaprasad,

I’ve created a PR with the initial idea for the AI Agent for API Testing & Tool Generation.

Could you kindly review the PR and share any feedback on the architecture or improvements? Your insights would be really helpful in refining the approach.

Thanks!

akshayw1 avatar Mar 02 '25 14:03 akshayw1

@akshayw1 Reviewed and added feedback. You can start by solving this issue - https://github.com/foss42/apidash/issues/121

ashitaprasad avatar Mar 03 '25 01:03 ashitaprasad

I would love to contribute to this. Can you tell me how I can do it as well?

Harsh741334 avatar Mar 23 '25 08:03 Harsh741334

I am good with AI and DSA.

Harsh741334 avatar Mar 23 '25 08:03 Harsh741334

@Harsh741334 You can submit your proposal on the GSoC website as applications have already opened.

animator avatar Mar 30 '25 12:03 animator

Hi @ashitaprasad, I also want to contribute to this project. I have already made a prototype; could you please review it?

bbl-sh avatar Apr 08 '25 17:04 bbl-sh