BireleyX
@frederikhendrix hi. here's the demo code:

```
import os
import asyncio
import glob
import time
from lightrag import LightRAG, QueryParam, utils
from lightrag.utils import EmbeddingFunc
from lightrag.api import config
import ...
```
the code works fine when using default storage settings:

```
LIGHTRAG_KV_STORAGE=JsonKVStorage
LIGHTRAG_VECTOR_STORAGE=NanoVectorDBStorage
LIGHTRAG_GRAPH_STORAGE=NetworkXStorage
LIGHTRAG_DOC_STATUS_STORAGE=JsonDocStatusStorage
```
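For reference, those four values are the backends LightRAG falls back to when the env vars are unset. A minimal sketch of that env-var/default pairing (the `resolve_storage` helper is mine for illustration; the actual lookup inside lightrag may differ):

```python
import os

# Default storage backends, matching the .env keys above
STORAGE_DEFAULTS = {
    "LIGHTRAG_KV_STORAGE": "JsonKVStorage",
    "LIGHTRAG_VECTOR_STORAGE": "NanoVectorDBStorage",
    "LIGHTRAG_GRAPH_STORAGE": "NetworkXStorage",
    "LIGHTRAG_DOC_STATUS_STORAGE": "JsonDocStatusStorage",
}

def resolve_storage(key: str) -> str:
    """Return the configured backend, falling back to the default."""
    return os.environ.get(key, STORAGE_DEFAULTS[key])

print(resolve_storage("LIGHTRAG_GRAPH_STORAGE"))  # NetworkXStorage when unset
```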
it should work. I've been using Azure gpt-4o and gpt-4o-mini with LightRAG for months now. Check your API key and endpoint. I've just deployed o3-mini in Azure and am testing it...
I used it under 1.2.6, and just today I updated to 1.3.1. Testing both o3-mini and gpt-4o-mini to regenerate my database; works OK for both. I also used lightrag_azure_openai_demo.py to...
here is my .env file (note: I deployed the gpt-4o-mini model with deployment name "gpt-4o-mini" in Azure):

```
### This is sample file of .env

### Server Configuration
# HOST=0.0.0.0
...
```
> The LLM_BINDING environment variable controls the LLM API mode. For azure openai, you should set this:
>
> ```
> LLM_BINDING=azure_openai
> ```

hmm.. I tried running with that...
seems I'm mistaken. I reused my terminal environment when running the tests, and initially I did set LLM_BINDING and EMBEDDING_BINDING, but the demo code used `load_dotenv()`, so the...
> load_dotenv() ensures the OS environment variable takes precedence over the .env file configuration.

actually no... it loads the .env into the OS environment, and if you set...
> The load_dotenv() function preserves existing OS environment variables by design, which explains why your .env file modifications aren't being applied.

they are preserved because the "override" parameter defaults to False...
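To see the override semantics without installing python-dotenv, here's a minimal sketch that mimics what `load_dotenv()` does with `override=False` (the default) versus `override=True`. The `apply_env` helper is hypothetical, mine for illustration; python-dotenv's real behavior follows the same setdefault-style precedence:

```python
import os

def apply_env(pairs, override=False):
    """Mimic python-dotenv's load_dotenv precedence rules.

    With override=False (the default), keys already present in
    os.environ are preserved; with override=True, the .env values win.
    """
    for key, value in pairs.items():
        if override:
            os.environ[key] = value
        else:
            os.environ.setdefault(key, value)

# Simulate a variable already exported in the shell session:
os.environ["LLM_BINDING"] = "openai"

# .env says azure_openai, but override=False keeps the shell value:
apply_env({"LLM_BINDING": "azure_openai"})
print(os.environ["LLM_BINDING"])  # openai

# override=True lets the .env file win:
apply_env({"LLM_BINDING": "azure_openai"}, override=True)
print(os.environ["LLM_BINDING"])  # azure_openai
```

This is why a stale exported variable in a reused terminal silently shadows edits to the .env file.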
@JoedNgangmeni you got it working already? here's my llm_func that works on both o3-mini and gpt-4o-mini:

```
async def llm_model_func(
    prompt,
    system_prompt=None,
    history_messages=[],
    keyword_extraction=False,
    **kwargs
) -> str:
    ...
```
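For anyone filling in the body of a function like that, the step that usually trips people up is assembling the chat messages list from those parameters. A minimal sketch of just that step (the `build_messages` helper is mine, not LightRAG's; the actual Azure call is omitted):

```python
def build_messages(prompt, system_prompt=None, history_messages=None):
    """Assemble the chat messages list the way an llm_model_func typically does:
    optional system prompt first, then prior turns, then the new user prompt."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.extend(history_messages or [])
    messages.append({"role": "user", "content": prompt})
    return messages

msgs = build_messages("What is LightRAG?", system_prompt="You are a helpful assistant.")
print(len(msgs))  # 2
```

The assembled list is what gets passed as `messages=` to the chat completions call against your Azure deployment.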