
Provide structured AzureOpenAI LLM and enable it in Graph Store

Open · jof-leaders21 opened this issue 6 months ago · 1 comment

Description

Add a structured provider version for AzureOpenAI.

The OSS version of GraphMemory in the TypeScript SDK has an indirect dependency on OpenAIStructuredLLM. This change adds a configurable AzureOpenAIStructuredLLM so that GraphMemory can be used with AzureOpenAI.

AzureOpenAI is one of the few ways to use OpenAI models in a GDPR-compliant manner within the EU. When implementing memory functionality for our agents, we need GraphMemory to function entirely without a dependency on OpenAI's own hosted models.
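To illustrate what the structured provider needs to produce, here is a minimal sketch of the kind of structured-output request an AzureOpenAIStructuredLLM would issue. The helper name and the entity schema are assumptions for illustration, not the SDK's actual internals; the client construction in the comment uses the AzureOpenAI class from the openai npm package.

```typescript
// Hypothetical helper: builds the request params a structured LLM
// provider would send for entity extraction. The "entities" schema
// below is illustrative, not the schema GraphMemory actually uses.
function buildStructuredParams(prompt: string) {
  return {
    model: "gpt-4o", // the Azure deployment name
    messages: [{ role: "user" as const, content: prompt }],
    response_format: {
      type: "json_schema" as const,
      json_schema: {
        name: "entities",
        strict: true,
        schema: {
          type: "object",
          properties: {
            entities: { type: "array", items: { type: "string" } },
          },
          required: ["entities"],
          additionalProperties: false,
        },
      },
    },
  };
}

// Client construction against Azure (values are examples):
// import { AzureOpenAI } from "openai";
// const client = new AzureOpenAI({
//   endpoint: process.env.LLM_AZURE_OPENAI_ENDPOINT,
//   apiKey: process.env.LLM_AZURE_OPENAI_API_KEY,
//   apiVersion: "2025-03-01-preview",
//   deployment: "gpt-4o",
// });
// const res = await client.chat.completions.create(buildStructuredParams("..."));
```

The point is only that the Azure provider must support the same structured-output (JSON schema) request shape that the existing OpenAI-backed structured LLM relies on.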

Apply a limit on search.

Previously, when running memory.search(input, { limit: 2 }), the limit was ignored and all matching elements were returned. The limit option is now honored.
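A minimal sketch of the fix, assuming the search path collects ranked candidates into an array before returning them (the helper name and option shape are illustrative):

```typescript
interface SearchOptions {
  userId?: string;
  limit?: number;
}

// Illustrative helper: truncate ranked results to the requested limit.
// When no limit is given (or it is negative), return everything,
// which matches the previous behavior.
function applyLimit<T>(results: T[], options: SearchOptions): T[] {
  const { limit } = options;
  if (limit === undefined || limit < 0) return results;
  return results.slice(0, limit);
}
```

With a step like this in the search path, memory.search(input, { limit: 2 }) returns at most two results instead of the full candidate set.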

Type of change

Please delete options that are not relevant.

  • [x] New feature (non-breaking change which adds functionality)

How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration.

Please delete options that are not relevant.

  • [x] Test Script (please provide)
import dotenv from "dotenv";
import { Memory } from "mem0ai/oss";

dotenv.config();

const config = {
  enableGraph: true,
  llm: {
    provider: "azure_openai",
    config: {
      apiKey: process.env.LLM_AZURE_OPENAI_API_KEY || "",
      modelProperties: {
        endpoint: process.env.LLM_AZURE_OPENAI_ENDPOINT,
        deployment: "gpt-4o",
        modelName: "gpt-4o",
        apiVersion: "2025-03-01-preview",
      },
    },
  },
  embedder: {
    provider: "azure_openai",
    config: {
      apiKey: process.env.LLM_AZURE_OPENAI_API_KEY || "",
      model: "text-embedding-3-small",
      modelProperties: {
        endpoint: process.env.LLM_AZURE_OPENAI_ENDPOINT,
        deployment: "text-embedding-3-small",
        apiVersion: "2025-03-01-preview",
      },
    },
  },
  graphStore: {
    provider: "neo4j",
    config: {
      url: "bolt://localhost:7687",
      username: "neo4j",
      password: "password",
    },
    llm: {
      provider: "azure_openai_structured",
      config: {
        apiKey: process.env.LLM_AZURE_OPENAI_API_KEY || "",
        modelProperties: {
          endpoint: process.env.LLM_AZURE_OPENAI_ENDPOINT,
          deployment: "gpt-4o",
          modelName: "gpt-4o",
          apiVersion: "2025-03-01-preview",
        },
      },
    },
  },
};

const memory = new Memory(config);
const messages = [
  {
    role: "user",
    content: "Luna and Mochi are my cats.",
  },
  {
    role: "user",
    content:
      "Also Luna likes to play with the neighbor's dog Biscuit, but Mochi hates Biscuit.",
  },
  {
    role: "user",
    content:
      "My mother bought Biscuit a present for their birthday.",
  },
];
let result = await memory.add(messages, {
  userId: "alice",
});

console.log(result);

console.log(await memory.search("Who is Biscuit?", { userId: "alice", limit: 2 }));

Checklist:

  • [x] My code follows the style guidelines of this project
  • [x] I have performed a self-review of my own code
  • [x] I have commented my code, particularly in hard-to-understand areas
  • [x] I have made corresponding changes to the documentation
  • [x] My changes generate no new warnings
  • [ ] I have added tests that prove my fix is effective or that my feature works
  • [x] New and existing unit tests pass locally with my changes
  • [x] Any dependent changes have been merged and published in downstream modules
  • [x] I have checked my code and corrected any misspellings

Maintainer Checklist

  • [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
  • [ ] Made sure Checks passed

jof-leaders21 · Jun 05 '25 13:06

CLA assistant check
All committers have signed the CLA.

CLAassistant · Jun 05 '25 13:06