[BUG] Extractor does not extract the new largeLanguageModel diagnostic setting
Release version
v6.0.1.9
Describe the bug
When I update the Azure Monitor setting to enable the new Log LLM Messages option (on a single API or on All APIs), the extractor does not extract this new value.
Expected behavior
The extractor should extract all of the JSON returned by the Diagnostics management API and write it to the diagnostic.json file. Here is the output from the ARM API:
```json
{
  "id": "<id>",
  "type": "Microsoft.ApiManagement/service/apis/diagnostics",
  "name": "azuremonitor",
  "properties": {
    "alwaysLog": "allErrors",
    "verbosity": "information",
    "logClientIp": true,
    "loggerId": "<logger-id>",
    "sampling": {
      "samplingType": "fixed",
      "percentage": 100
    },
    "frontend": {
      "request": {
        "headers": ["UseCaseId"],
        "body": { "bytes": 8192, "sampling": null }
      },
      "response": {
        "headers": ["UseCaseId"],
        "body": { "bytes": 8192, "sampling": null }
      }
    },
    "backend": {
      "request": {
        "headers": ["UseCaseId"],
        "body": { "bytes": 8192, "sampling": null }
      },
      "response": {
        "headers": ["UseCaseId"],
        "body": { "bytes": 8192, "sampling": null }
      }
    },
    "largeLanguageModel": {
      "logs": "enabled",
      "requests": { "messages": "all", "maxSizeInBytes": 32768 },
      "responses": { "messages": "all", "maxSizeInBytes": 32768 }
    },
    "tags": null
  }
}
```
Actual behavior
This is what is actually saved in the diagnostic.json file:
```json
{
  "properties": {
    "loggerId": "/subscriptions/***/resourceGroups/apiopsdemo-rg-dev/providers/Microsoft.ApiManagement/service/apim-dev/loggers/azuremonitor",
    "alwaysLog": "allErrors",
    "backend": {
      "request": { "body": { "bytes": 0 }, "headers": [] },
      "response": { "body": { "bytes": 0 }, "headers": [] }
    },
    "frontend": {
      "request": { "body": { "bytes": 0 }, "headers": [] },
      "response": { "body": { "bytes": 0 }, "headers": [] }
    },
    "logClientIp": true,
    "sampling": {
      "percentage": 100,
      "samplingType": "fixed"
    },
    "verbosity": "information"
  }
}
```
Reproduction Steps
1. Update the LLM log message section under Settings.
2. Run the extractor and see that it doesn't save the largeLanguageModel properties.
LLM diagnostics are only available in a preview version (2025-03-01-preview) of the APIM REST API. The latest GA version (2024-05-01) doesn't have this. Once it's GA, we will add support for it.
As a workaround, you can update the publisher configuration as follows:
- Set the API version to `2025-03-01-preview`:

  ```yaml
  ...
  ARM_API_VERSION: "2025-03-01-preview"
  ...
  ```
- Add LLM properties for the specific API diagnostic. When the publisher runs, it will take the extracted JSON, merge it with the updates in the publisher configuration, and send the merged JSON to the APIM REST API.
  ```yaml
  ...
  apis:
    - name: demo-conference-api
      diagnostics:
        - name: applicationinsights
          properties:
            largeLanguageModel:
              logs:
  ...
  ```
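The merge described above can be sketched as a recursive dictionary merge (a hypothetical illustration, not the publisher's actual code): configuration values win key by key, while nested objects are merged rather than replaced.

```python
# Hypothetical sketch of the publisher's merge step: configuration
# overrides win key-by-key, and nested objects merge recursively.
def deep_merge(extracted: dict, overrides: dict) -> dict:
    merged = dict(extracted)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Toy data standing in for the extracted diagnostic.json and the
# configuration overrides.
extracted = {"properties": {"alwaysLog": "allErrors",
                            "sampling": {"percentage": 100}}}
overrides = {"properties": {"largeLanguageModel": {"logs": "enabled"}}}

result = deep_merge(extracted, overrides)
# result keeps the extracted values and gains the largeLanguageModel override.
```

The merged object is what gets sent in the PUT request to the APIM REST API.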
Hi @guythetechie,
First of all, thank you for your detailed comment and guidance on enabling LLM diagnostics using the preview API version. It was very helpful.
I tried applying your suggestion and wrote the configuration as follows:
```yaml
apimServiceName: MY_APIM_INSTANCE_NAME
ARM_API_VERSION: 2025-03-01-preview
apis:
  - name: MY_LLM_API
    diagnostics:
      - name: azuremonitor
        properties:
          largeLanguageModel:
            logs: enabled
            requests:
              messages: all
              maxSizeInBytes: 32768
            responses:
              messages: all
              maxSizeInBytes: 32768
```
However, when I attempted to run this through GitHub Actions, it didn’t work as expected.
My questions are:
- What possible reasons could cause this configuration to fail when executed via GitHub Actions?
- Specifically, does the Publisher currently support yaml overrides for settings that are not yet available in its implementation? In other words, if a property exists in the configuration file but is not supported in Publisher, will it still be passed through to APIM, or is this a limitation?
Any insights or guidance would be greatly appreciated!
You're welcome, @suzuki-shm - happy to help. Your changes look correct to me. The publisher doesn't use the DTOs defined in the common project. It reads the JSON from disk, merges it with configuration values (if defined), and passes the result to the REST API.
The easiest way to diagnose your issue is to enable trace logging on the publisher. If you're using GitHub actions, you can set your yaml to something like this:
```yaml
...
- name: Run publisher
  env:
    LOGGING__LOGLEVEL__DEFAULT: Trace
...
```
As noted in the Wiki, trace logging will log the URLs and the contents of PUT requests. You can see what's being passed to the APIM REST API when publishing that API diagnostic and compare it to the documentation.
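For example, a tiny script (a hypothetical helper, not part of ApiOps) can parse the `Content:` JSON from the trace log and check whether the property survived the merge:

```python
import json

# Paste the "Content:" line from the publisher's trace log here, then
# check whether the merged payload includes the LLM diagnostic settings.
payload = json.loads(
    '{"properties":{"alwaysLog":"allErrors",'
    '"largeLanguageModel":{"logs":"enabled"}}}'
)

has_llm = "largeLanguageModel" in payload.get("properties", {})
# has_llm is True only when the property made it into the PUT request body.
```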
Hi @guythetechie,
Thanks for the helpful details, much appreciated.
I enabled Trace logging and inspected the payload. As expected, properties that aren't supported by the publisher are not merged, and those items aren't added to the payload. Although the logs show that sampling was updated from 90 to 99 as configured in configuration.prod.yaml, largeLanguageModel is not included in the payload. This indicates that properties not supported by the publisher cannot be merged or overridden through the configuration.
It seems this might be a separate issue from the current one, so I’m thinking about creating a new issue for it. What are your thoughts?
apis/MY_API_NAME/diagnostics/azuremonitor/diagnosticInformation.json
```json
{
  "properties": {
    "loggerId": "/subscriptions/MY_SUBSCRIPTION/resourceGroups/MY_RESOURCE_GROUP/providers/Microsoft.ApiManagement/service/MY_APIM_INSTANCE_NAME/loggers/azuremonitor",
    "alwaysLog": "allErrors",
    "backend": {
      "request": { "body": { "bytes": 0 }, "headers": ["Email"] },
      "response": { "body": { "bytes": 0 }, "headers": ["Email"] }
    },
    "frontend": {
      "request": { "body": { "bytes": 0 }, "headers": ["Email"] },
      "response": { "body": { "bytes": 0 }, "headers": ["Email"] }
    },
    "largeLanguageModel": {
      "logs": "enabled",
      "requests": { "messages": "all", "maxSizeInBytes": 32767 },
      "responses": { "messages": "all", "maxSizeInBytes": 32767 }
    },
    "logClientIp": true,
    "sampling": {
      "percentage": 90,
      "samplingType": "fixed"
    },
    "verbosity": "information"
  }
}
```
configuration.prod.yaml
```yaml
apimServiceName: MY_APIM_INSTANCE_NAME
ARM_API_VERSION: 2025-03-01-preview
apis:
  - name: MY_API_NAME
    diagnostics:
      - name: azuremonitor
        properties:
          sampling:
            percentage: 99
          largeLanguageModel:
            logs: enabled
            requests:
              messages: all
              maxSizeInBytes: 32768
            responses:
              messages: all
              maxSizeInBytes: 32768
```
GitHub Actions log

```text
...
trce: HttpPipeline[0]
      Starting request
      Method: PUT
      Uri: https://management.azure.com/subscriptions/***/resourceGroups/***/providers/Microsoft.ApiManagement/service/***/apis/MY_API_NAME/diagnostics/azuremonitor?api-version=2025-03-01-preview
      Content: {"properties":{"loggerId":"/subscriptions/***/resourceGroups/***/providers/Microsoft.ApiManagement/service/***/loggers/azuremonitor","alwaysLog":"allErrors","backend":{"request":{"body":{"bytes":0},"headers":["Email"]},"response":{"body":{"bytes":0},"headers":["Email"]}},"frontend":{"request":{"body":{"bytes":0},"headers":["Email"]},"response":{"body":{"bytes":0},"headers":["Email"]}},"logClientIp":true,"sampling":{"percentage":99,"samplingType":"fixed"},"verbosity":"information"}}
```
Thanks for testing this, @suzuki-shm. I apologize, I was wrong.
The v6 version of ApiOps does what you're experiencing; it converts the JSON to the DTO, which is why your LLM properties get stripped.
The v7 version changed this behavior; we read the file bytes, convert those bytes to a JSON object, merge with configuration, and then put the resulting object. I've been working on the v7 branch for a while and didn't realize this behavior wasn't in v6.
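The difference can be illustrated with a toy round-trip (assumed names; the real DTOs are defined in the common project): deserializing into a type that only knows certain fields silently drops everything else, while passing the raw JSON object through preserves it.

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Toy stand-in for a v6 diagnostic DTO that predates largeLanguageModel.
@dataclass
class DiagnosticDto:
    alwaysLog: Optional[str] = None
    verbosity: Optional[str] = None

raw = {"alwaysLog": "allErrors", "verbosity": "information",
       "largeLanguageModel": {"logs": "enabled"}}

# v6-style behavior: JSON -> DTO -> JSON; unknown properties are stripped.
known = {k: v for k, v in raw.items()
         if k in DiagnosticDto.__dataclass_fields__}
via_dto = asdict(DiagnosticDto(**known))
# via_dto no longer contains largeLanguageModel.

# v7-style behavior: the raw JSON object is merged and PUT as-is,
# so unknown properties like largeLanguageModel are preserved.
```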
Would you mind trying the latest v7 alpha release? If that works, then we have our workaround going forward. I'll try to backport this improvement to v6 when I have some time.
@guythetechie Thank you so much for your comment! It worked perfectly. I’m really looking forward to the GA release of Version 7!