
FastAPI usage example

Open ieea opened this issue 1 year ago • 2 comments

Hi Kenny, I was going through your discussion under the Discussions section, "How best to handle sessions behind web API #26". Do you have any example, sample code, or link that could guide me on how to use FastAPI with Atomic Agents? It would help me complete my MVP. Regards, Gaurav

ieea avatar Nov 19 '24 06:11 ieea

Not right now, sorry, but I'll try to add an example of this in a while... It really just boils down to dumping and loading the memory as discussed in https://github.com/BrainBlend-AI/atomic-agents/discussions/26
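
For reference, the dump-and-load pattern from that discussion can be sketched without the framework: keep a per-session chat history, serialize it between requests, and feed it back to the agent on the next call. The helpers below are a plain-stdlib stand-in with hypothetical names (`dump_history`, `load_history`); the actual memory component in atomic-agents and its serialization API may differ.

```python
import json
from pathlib import Path


def dump_history(session_id: str, history: list[dict], store: Path) -> None:
    """Serialize one session's chat history to disk between requests."""
    store.mkdir(parents=True, exist_ok=True)
    (store / f"{session_id}.json").write_text(json.dumps(history))


def load_history(session_id: str, store: Path) -> list[dict]:
    """Restore a session's history, or start fresh if none exists."""
    path = store / f"{session_id}.json"
    return json.loads(path.read_text()) if path.exists() else []
```

A request handler would load the history for the caller's session ID, run the agent, append the new turns, and dump the history again before returning.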

KennyVaneetvelde avatar Nov 25 '24 14:11 KennyVaneetvelde

I am also looking for this example

aboutte avatar Jan 02 '25 22:01 aboutte

Not sure if this helps you, since it's the simplest example possible and you probably asked for something more "production-ready", but I think it could be useful for rookies like me who want to learn how to use Atomic Agents with FastAPI.

The example:

import google.generativeai as genai
import instructor
from atomic_agents.agents.base_agent import (
    BaseAgent,
    BaseAgentConfig,
    BaseAgentInputSchema,
    BaseIOSchema,
)
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator
from fastapi import FastAPI, Body, status
from typing_extensions import Annotated

app = FastAPI()


def call(data: dict[str, str]) -> BaseIOSchema:
    # Wrap the Gemini client with instructor so responses are parsed into schemas
    client = instructor.from_gemini(client=genai.GenerativeModel(model_name="gemini-2.5-pro"), use_async=False)

    prompt_generator = SystemPromptGenerator(
        background=[
            "This assistant is a knowledgeable AI designed to be helpful, friendly, and informative.",
            "It has a wide range of knowledge on various topics and can engage in diverse conversations."
        ],
        steps=[
            "Analyze the user's input to understand the context and intent.",
            "Formulate a relevant and informative response based on the assistant's knowledge.",
            "Generate 3 suggested follow-up questions for the user to explore the topic further."
        ],
        output_instructions=[
            "Provide clear, concise, and accurate information in response to user queries.",
            "Maintain a friendly and professional tone throughout the conversation.",
            "Conclude each response with 3 relevant suggested questions for the user."
        ]
    )

    # NOTE: a new agent (with fresh memory) is created on every request,
    # so no chat history is carried between calls.
    agent = BaseAgent(
        BaseAgentConfig(
            client=client,
            system_prompt_generator=prompt_generator
        )
    )

    input_schema = BaseAgentInputSchema(chat_message=data["message"])

    return agent.run(input_schema)


@app.post("/messages", status_code=status.HTTP_201_CREATED)
def post_message(data: Annotated[dict[str, str], Body(title="Body of the request")]):
    result = call(data)

    return {"result": result}

  • Put it into your main.py.
  • Replace the Gemini-related code with a provider of your choice.
  • Run in the terminal: fastapi dev main.py
  • Send a POST request to localhost:8000/messages with the body: { "message": "<Your message>" }
  • Review the results.

Your next most probable steps would be:

  • Add memory to the agent
  • Load/dump the chat history from/to some persistent storage, e.g. a database, file, or cache
  • Extend the workflow with additional agents or with request/result processing
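
As a starting point for the first two steps, here is a minimal sketch of per-session memory that an endpoint like the one above could use. It uses a plain in-memory dict as the store; the names (`SessionStore`, `append_turn`) are hypothetical, and the real atomic-agents memory component and its serialization hooks may look different.

```python
from dataclasses import dataclass, field


@dataclass
class SessionStore:
    """In-memory chat history, keyed by session ID.

    A production setup would swap the dict for a database or cache,
    serializing each history the same way on read and write.
    """
    sessions: dict[str, list[dict]] = field(default_factory=dict)

    def history(self, session_id: str) -> list[dict]:
        # Return the session's history, creating an empty one on first use.
        return self.sessions.setdefault(session_id, [])

    def append_turn(self, session_id: str, user_msg: str, agent_msg: str) -> None:
        # Record one user/agent exchange for the session.
        self.history(session_id).append({"user": user_msg, "agent": agent_msg})
```

The handler would look up the caller's history, hand it to the agent's memory before `agent.run()`, then record the new turn, so the agent no longer starts from scratch on every request.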

ar2em1s avatar Jul 04 '25 22:07 ar2em1s