Helm chart: Ruler throws "no org id", "no valid org id found" and "multiple org IDs present" when Grafana tries to use the Alertmanager
Describe the bug
I've tried to configure Grafana Alerting with our Mimir distributed installation, but Grafana logs errors.

This is how I've configured the Grafana data source:
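The screenshot of the data source configuration didn't survive in the issue, so here is a minimal provisioning sketch of the setup the logs imply. The tenant list is reconstructed from the `user=...` and `X-Scope-Orgid: ...` log fields below; the data source name and URL are placeholders:

```yaml
apiVersion: 1
datasources:
  - name: Mimir
    type: prometheus
    access: proxy
    # placeholder URL; point this at your Mimir gateway/query endpoint
    url: http://mimir-nginx.mimir.svc/prometheus
    jsonData:
      manageAlerts: true              # "Manage alerts via Alerting UI" enabled
      httpHeaderName1: X-Scope-OrgID
    secureJsonData:
      httpHeaderValue1: anonymous|kfh|kpark|shp  # federated tenant list as seen in the logs
```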

The ruler container throws the following errors:
```
level=error ts=2023-01-12T10:06:51.168675231Z caller=api.go:150 user=anonymous|kfh|kpark|shp msg="error extracting org id from context" err="multiple org IDs present"
level=warn ts=2023-01-12T10:06:51.168741234Z caller=logging.go:86 traceID=357ccf4ca844423d msg="GET /prometheus/api/v1/rules (500) 113.405µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no valid org id found\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
level=warn ts=2023-01-12T10:06:51.16886754Z caller=logging.go:86 traceID=54c8e255a91f0b85 msg="GET /prometheus/config/v1/rules?subtype=mimir (500) 67.403µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no org id\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
level=error ts=2023-01-12T10:07:01.560425467Z caller=api.go:150 user=anonymous|kfh|kpark|shp msg="error extracting org id from context" err="multiple org IDs present"
level=warn ts=2023-01-12T10:07:01.560563373Z caller=logging.go:86 traceID=4f0499777fe9ea72 msg="GET /prometheus/api/v1/rules (500) 181.708µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no valid org id found\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
level=warn ts=2023-01-12T10:07:01.562150943Z caller=logging.go:86 traceID=6c39b8fa4e96ce67 msg="GET /prometheus/config/v1/rules?subtype=mimir (500) 70.903µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no org id\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
level=error ts=2023-01-12T10:07:11.287095995Z caller=api.go:150 user=anonymous|kfh|kpark|shp msg="error extracting org id from context" err="multiple org IDs present"
level=warn ts=2023-01-12T10:07:11.287152498Z caller=logging.go:86 traceID=1e080e50ae61c967 msg="GET /prometheus/api/v1/rules (500) 104.005µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no valid org id found\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
level=warn ts=2023-01-12T10:07:11.296248898Z caller=logging.go:86 traceID=40073e92b304bc07 msg="GET /prometheus/config/v1/rules?subtype=mimir (500) 76.004µs Response: \"{\\\"status\\\":\\\"error\\\",\\\"data\\\":null,\\\"errorType\\\":\\\"server_error\\\",\\\"error\\\":\\\"no org id\\\"}\" ws: false; Accept-Encoding: gzip; Connection: close; User-Agent: Grafana/9.3.1; X-Scope-Orgid: redacted|redacted|redacted|anonymous; "
```
To Reproduce
Steps to reproduce the behavior:
- Start Mimir in multi-tenancy mode (see the values sketch after this list)
- Let Grafana Alerting connect to it with multiple tenants in the X-Scope-OrgID header
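For the first step, a minimal values sketch assuming the mimir-distributed chart's structuredConfig passthrough (multi-tenancy is Mimir's default, so the explicit flag is only for clarity; this is not the reporter's actual config):

```yaml
mimir:
  structuredConfig:
    multitenancy_enabled: true  # requests to the ruler must carry exactly one tenant in X-Scope-OrgID
```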
Expected behavior
No errors in Grafana Alerting.
Environment
- Infrastructure: Kubernetes 1.22.7, AKS-Engine on Azure Stack Hub
- Deployment tool: Helm
Additional Context
Structured Config from Helm Values:

The ruler API endpoints don't support tenant federation; it is currently supported only by the query API endpoints. Please try to specify only a single tenant in the data source used to look up rules in Grafana Alerting.
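Applied to the data source sketch above, that means one tenant ID per data source, for example (tenant ID taken from the logs):

```yaml
    secureJsonData:
      httpHeaderValue1: kfh  # exactly one tenant; the ruler rejects pipe-separated lists
```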
Interested too! Do you plan to implement this feature in the ruler API?
We have observed that the error disappears (from the UI, not from the logs/metrics) if we disable "Manage alerts via Alerting UI".
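For reference, that toggle corresponds to the manageAlerts flag of a provisioned Prometheus data source, so the same workaround expressed in YAML would presumably be:

```yaml
    jsonData:
      manageAlerts: false  # Grafana stops calling the Mimir ruler API for this data source
```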