Eyas Taifour
Thanks for the conversation - I need Python 3.7 as well (I use VS Code with dev containers, and I really want to use the VS Code debug functionality, which requires Python 3.7+)....
Pretty much the same error, but not with `az ml` - instead with `az extension add -n bastion`
hi @c0derm4n - yes, that's the correct syntax for GRAPHRAG_API_BASE (see Josh's reply above). The error you shared seems to indicate that you are reaching a rate limit. You can...
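When a rate limit is the cause, the usual remedy is to retry with exponential backoff. Here's a minimal, self-contained sketch of that pattern; the `fake_request` endpoint and the use of `RuntimeError` as a stand-in for an HTTP 429 response are assumptions for illustration only:

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry request_fn with exponential backoff when it raises a
    rate-limit-style error (simulated here as RuntimeError)."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RuntimeError:  # stand-in for an HTTP 429 / rate-limit error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError("rate limit: retries exhausted")

# Simulate an endpoint that rejects the first two calls with a 429.
calls = {"n": 0}
def fake_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "ok"

print(call_with_backoff(fake_request, base_delay=0.01))  # → ok
```

Lowering `tokens_per_minute` / `requests_per_minute` in the settings has a similar effect by throttling requests before the service rejects them.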
A 404 error indicates that you are using the wrong `api_version`. I used gpt-4o in my test as well, and I can confirm that it succeeded with `2024-02-15-preview`. You should uncomment...
When an indexing job fails, I find many logs and artefacts in the blob storage account that I configured during the installation. I browse to `blob_containers\{index_name}\reports` and find a JSON file...
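Once you have downloaded such a report, a few lines of Python are enough to pull out the failure summary. The report shape below (`type`/`data`/`details` keys) is a hypothetical example, not a guaranteed schema - inspect your own JSON file first:

```python
import json

# Hypothetical excerpt of a failed-indexing report; the real files under
# the reports folder may differ in shape.
sample_report = json.dumps({
    "type": "error",
    "data": "Error Invoking LLM",
    "details": {"num_errors": 3},
})

def summarize_report(raw: str) -> str:
    """Return a one-line summary of an error report."""
    report = json.loads(raw)
    details = report.get("details", {})
    return f'{report.get("data", "unknown error")} ({details.get("num_errors", 0)} errors)'

print(summarize_report(sample_report))  # → Error Invoking LLM (3 errors)
```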
Hi @chiara89, I think you're right - it seems the error is caused by `az cli` - here's another thread that mentions it, as well as a resolution: https://github.com/Azure/azure-cli/issues/21409 I just...
it seems this problem only occurs for `llm.tokens_per_minute` but not for `embeddings.llm.tokens_per_minute` - the setting of the latter is properly reflected in the indexlog.
It seems that no matter what I enter, the value used by the indexing engine is the one hardcoded as `LLM_TOKENS_PER_MINUTE` in `configs\defaults.py`. To prove it, I commented...
thanks for the tip - but it seemed strange because the same key is used for `embeddings.llm.tokens_per_minute`, and that one seems to work fine.
I believe so. GraphRAG makes calls to an OpenAI or Azure OpenAI LLM, and their tokenizers support Japanese. Keep in mind that the instructions (and examples) provided to the...
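The reason tokenizer coverage is not a concern: the tokenizers used by these models are byte-level BPE, which operates on UTF-8 bytes, so any Japanese string has a valid encoding (it just consumes more tokens per character than ASCII). A small stdlib-only illustration; the exact token counts for a given model would require the model's own tokenizer, which is not shown here:

```python
text = "こんにちは、GraphRAG"  # Japanese greeting plus an ASCII identifier

# Byte-level tokenizers work on the UTF-8 byte sequence, so every
# character is representable; CJK characters take 3 bytes each.
utf8_bytes = text.encode("utf-8")
print(len(text), len(utf8_bytes))  # → 14 26
```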