FastAPI usage example
Hi Kenny, I was going through your discussion "How best to handle sessions behind web API #26" in the Discussions section. Do you have any example, sample code, or link to a guide on how to use FastAPI with Atomic Agents? It would help me complete my MVP. Regards, Gaurav
Not right now, sorry, but I'll try to add an example of this in a while... It really just boils down to dumping and loading the memory as discussed in https://github.com/BrainBlend-AI/atomic-agents/discussions/26
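Until an official example lands, the dump/load idea from that discussion can be sketched with plain dicts and JSON files. This is a minimal, hypothetical sketch of per-session history storage; the `SessionStore` class and its layout are my own invention, not the Atomic Agents API:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Hypothetical per-session store: each session's chat history is a plain list
# of {"role": ..., "content": ...} dicts, persisted as one JSON file per session.
class SessionStore:
    def __init__(self, root: Path):
        self.root = root

    def _path(self, session_id: str) -> Path:
        return self.root / f"{session_id}.json"

    def load(self, session_id: str) -> list[dict]:
        # A missing file means a brand-new session with empty history.
        path = self._path(session_id)
        if not path.exists():
            return []
        return json.loads(path.read_text())

    def save(self, session_id: str, history: list[dict]) -> None:
        self._path(session_id).write_text(json.dumps(history))

# Usage: load the history before the agent call, append the new turn, save after.
with TemporaryDirectory() as tmp:
    store = SessionStore(Path(tmp))
    history = store.load("user-42")  # [] on the first request
    history.append({"role": "user", "content": "Hello"})
    store.save("user-42", history)
    assert store.load("user-42")[0]["content"] == "Hello"
```

In a real endpoint you would load the history for the caller's session id, feed it to the agent's memory, and save the updated history before returning the response.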
I am also looking for this example
Not sure if this helps, since it's the simplest example possible and you probably asked for something more production-ready, but it might be useful for rookies like me who want to learn how to use Atomic Agents with FastAPI.
The example:
```python
import google.generativeai as genai
import instructor
from atomic_agents.agents.base_agent import (
    BaseAgent,
    BaseAgentConfig,
    BaseAgentInputSchema,
    BaseIOSchema,
)
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator
from fastapi import FastAPI, Body, status
from typing_extensions import Annotated

app = FastAPI()


def call(data: dict[str, str]) -> BaseIOSchema:
    # Wrap the Gemini client with instructor so the agent gets structured output.
    client = instructor.from_gemini(
        client=genai.GenerativeModel(model_name="gemini-2.5-pro"),
        use_async=False,
    )
    prompt_generator = SystemPromptGenerator(
        background=[
            "This assistant is a knowledgeable AI designed to be helpful, friendly, and informative.",
            "It has a wide range of knowledge on various topics and can engage in diverse conversations.",
        ],
        steps=[
            "Analyze the user's input to understand the context and intent.",
            "Formulate a relevant and informative response based on the assistant's knowledge.",
            "Generate 3 suggested follow-up questions for the user to explore the topic further.",
        ],
        output_instructions=[
            "Provide clear, concise, and accurate information in response to user queries.",
            "Maintain a friendly and professional tone throughout the conversation.",
            "Conclude each response with 3 relevant suggested questions for the user.",
        ],
    )
    # Note: the agent is rebuilt on every request here; fine for a demo,
    # but you would construct it once (and add memory) in a real app.
    agent = BaseAgent(
        BaseAgentConfig(
            client=client,
            system_prompt_generator=prompt_generator,
        )
    )
    input_schema = BaseAgentInputSchema(chat_message=data["message"])
    return agent.run(input_schema)


@app.post("/messages", status_code=status.HTTP_201_CREATED)
def post_message(data: Annotated[dict[str, str], Body(title="Body of the request")]):
    result = call(data)
    return {"result": result}
```
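A possible refinement: instead of a raw `dict[str, str]` body, you could use a typed request model (FastAPI already depends on pydantic, which validates the body and documents it in the generated OpenAPI schema). The `MessageBody` name and the commented-out route are illustrative assumptions, not part of the example above:

```python
from pydantic import BaseModel

# Hypothetical typed request body for the /messages endpoint.
class MessageBody(BaseModel):
    message: str

# The route would then become something like:
#
# @app.post("/messages", status_code=status.HTTP_201_CREATED)
# def post_message(data: MessageBody):
#     return {"result": call({"message": data.message})}

body = MessageBody(message="Hello")
assert body.message == "Hello"
```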
- Put it into your `main.py`.
- Replace the Gemini-related code with a provider of your choice.
- Run in the terminal: `fastapi dev main.py`
- Send a POST request to `localhost:8000/messages` with the body: `{ "message": "<Your message>" }`
- Review the results.
Your next most probable steps would be:
- Add memory to the agent.
- Load/dump the chat history from/to some persistent storage, e.g. a database, file, or cache.
- Extend the workflow with another agent that processes the request or the first agent's results.
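The last step amounts to chaining calls, where one agent's output becomes the next agent's input. Here is a runnable sketch of that pipeline shape using stub agents (plain callables standing in for real Atomic Agents calls; the names and behavior are illustrative only):

```python
from typing import Callable

# Stub "agents": each step takes text in and returns text out. In a real app
# these would be agent.run() calls on different Atomic Agents.
def answer_agent(message: str) -> str:
    return f"Answer to: {message}"

def summarizer_agent(message: str) -> str:
    # Pretend post-processing of the first agent's output.
    return message.upper()

def run_pipeline(message: str, steps: list[Callable[[str], str]]) -> str:
    # Feed each step's output into the next, like chaining agent calls.
    for step in steps:
        message = step(message)
    return message

result = run_pipeline("hello", [answer_agent, summarizer_agent])
# result == "ANSWER TO: HELLO"
```

The FastAPI endpoint would then call `run_pipeline` instead of a single `call`, keeping each agent's schema as the contract between steps.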