
[Bug Report]: Azure Cosmos DB database level shared throughput not respected when creating containers

Open · markdebruijne opened this issue 1 year ago · 2 comments

Describe the bug

When you configure throughput/autoscale at the database level, you do so because the containers in that database can then share that capacity.

When you use the Azure Portal > Data Explorer blade > Create container on an existing database (with throughput configured), the container is created respecting that shared throughput by default. You can override this and specify throughput (or autoscale) explicitly on that container if you want to.

Versions

Templates pulled from this repository Monday May 8th 2023
Bicep CLI version 0.17.1
"azure-cli": "2.46.0"

To reproduce

If I use /Microsoft.DocumentDB/databaseAccounts/main.bicep with the input below - note that I don't specify capacity at the container level - then the container is created via /Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/main.bicep with an explicit throughput of 400 (the default value).

  params: {
    sqlDatabases: [
      {
        name: 'mydatabase'

        autoscaleSettingsMaxThroughput: 1000

        containers: [
          {
            name: 'mycontainer'
            kind: 'Hash' // type of partition key
            paths: [ // partition keys
              '/id'
            ]
          }
        ]
      }
    ]
  }

I think this was introduced recently and is still applicable today in this piece of code:

  • If you don't specify autoscaleSettingsMaxThroughput, it falls back to throughput
  • throughput as null seems to lead to 400 as the default value (?)
  • Specifying either autoscaleSettingsMaxThroughput or throughput leads to explicit container-level capacity, which we don't want.
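The fallback behavior described above presumably looks roughly like this (a sketch with assumed variable names, not the repository's exact code):

```bicep
// Sketch (assumed names) of the container-level options logic: when
// autoscaleSettingsMaxThroughput is left at its -1 default, the module
// falls back to the throughput parameter, whose 400 default then forces
// explicit container-level capacity even when none was requested.
param throughput int = 400
param autoscaleSettingsMaxThroughput int = -1

var containerOptions = autoscaleSettingsMaxThroughput != -1 ? {
  autoscaleSettings: {
    maxThroughput: autoscaleSettingsMaxThroughput
  }
} : {
  throughput: throughput // never null, so options is never empty
}
```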

When I comment out [lines 91-95 of containers/main.bicep](https://github.com/Azure/ResourceModules/blob/0f05a13f030bd501c2eec80460d791b550cd04eb/modules/DocumentDB/databaseAccounts/sqlDatabases/containers/main.bicep#L90) and specify neither autoscaleSettingsMaxThroughput nor throughput, it works as intended. The Azure Portal (Cost Management blade) then reports "Shared within mydatabase" as the configured throughput (RU/s).

With throughput defaulting to 400 and autoscaleSettingsMaxThroughput defaulting to -1, there is no reliable way to determine whether explicit capacity needs to be configured at the container level. For backwards compatibility and a more explicit configuration of the shared-capacity feature, maybe introduce a bool parameter useDatabaseLevelThroughput that defaults to false. Adjust the template so that when useDatabaseLevelThroughput == true, both options.throughput and options.autoscaleSettings are omitted from the resulting deployment.
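One way the suggested flag could be wired up (a sketch only - useDatabaseLevelThroughput is a hypothetical parameter that does not exist in the module today, and the surrounding names are assumptions):

```bicep
// Hypothetical parameter (suggestion only, not in the module today):
param useDatabaseLevelThroughput bool = false
param throughput int = 400
param autoscaleSettingsMaxThroughput int = -1

// When database-level shared throughput should be used, emit an empty
// options object so the container inherits the shared capacity instead
// of getting explicit capacity of its own.
var containerOptions = useDatabaseLevelThroughput ? {} : (autoscaleSettingsMaxThroughput != -1 ? {
  autoscaleSettings: {
    maxThroughput: autoscaleSettingsMaxThroughput
  }
} : {
  throughput: throughput
})
```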

I've tried specifying throughput: -1 (just as autoscaleSettingsMaxThroughput supports), but that didn't seem to work. Supporting that might be a solution if we want to be defensive about introducing new parameters.
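Alternatively, if throughput: -1 were treated as a sentinel the way autoscaleSettingsMaxThroughput already is, the options could be dropped without any new parameter (again a sketch with assumed names, not the module's actual code):

```bicep
param throughput int = 400
param autoscaleSettingsMaxThroughput int = -1

// Treat -1 on both parameters as "no container-level capacity": emit an
// empty options object so the container shares the database throughput.
var containerOptions = (throughput == -1 && autoscaleSettingsMaxThroughput == -1) ? {} : (autoscaleSettingsMaxThroughput != -1 ? {
  autoscaleSettings: {
    maxThroughput: autoscaleSettingsMaxThroughput
  }
} : {
  throughput: throughput
})
```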

Code snippet

No response

Relevant log output

No response

markdebruijne · May 10 '23 12:05