terraform-provider-cloudflare
`cloudflare_teams_list` with more than 50 items does not work
Confirmation
- [X] My issue isn't already found on the issue tracker.
- [X] I have replicated my issue using the latest version of the provider and it is still present.
Terraform and Cloudflare provider version
Terraform v1.1.4
on linux_amd64
+ provider registry.terraform.io/cloudflare/cloudflare v3.8.0
+ provider registry.terraform.io/hashicorp/external v2.2.0
Affected resource(s)
- cloudflare_teams_list
Terraform configuration files
```hcl
resource "cloudflare_teams_list" "bad_domains" {
  account_id = local.account_id
  name       = "bad domains"
  type       = "DOMAIN"
  items      = local.bad_domains
}
```
Debug output
n/a
Panic output
No response
Expected output
The list is created correctly, but subsequent plans always show diffs when the list contains more than 50 items. If you try to apply the diff, it regularly removes a bunch of items while also adding others, which end up as conflicting entries.
This is actually an API limitation: the endpoint doesn't properly support paging and, by default, returns only 50 items. After tracing the Teams UI, however, I discovered that it supports an undocumented `?limit=` parameter. I manually tested the API with `?limit=100000` and it returned all 1240 items in my list just fine. There doesn't seem to be an upper bound on the size of the limit parameter.
I don't know whether the provider here needs to be updated, or whether the `cloudflare-go` library should just always use a large limit parameter to fetch the entire list (since there's no paging, and no way to know whether the API has truncated your list).
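The truncation behavior described above can be sketched with a stand-in for the API (the function below is hypothetical; the real endpoint requires account credentials and auth headers):

```python
# Stand-in for the list-items endpoint. The real API defaults to
# returning only 50 items; the undocumented ?limit= query parameter
# raises that cap.
def fetch_items(all_items, limit=50):
    return all_items[:limit]

remote = [f"bad{i}.example.com" for i in range(1240)]  # what the API holds
desired = set(remote)                                  # what Terraform wants

seen = set(fetch_items(remote))       # default: truncated to 50 items
phantom_adds = desired - seen         # items Terraform thinks are missing
print(len(phantom_adds))              # 1190 "missing" items -> spurious diff

seen = set(fetch_items(remote, limit=100000))  # workaround: huge limit
print(len(desired - seen))                     # 0 -> no diff
```

With the default limit, every plan proposes re-adding the 1190 items the provider never saw, which is exactly the phantom diff reported above.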
Actual output
n/a
Steps to reproduce
- create a `cloudflare_teams_list` with more than 50 items
- apply (works)
- plan (always shows diffs)
Additional factoids
No response
References
No response
FWIW, the docs on teams lists say:
Your lists can include up to 5,000 entries for Enterprise subscriptions and 1,000 for Standard subscriptions.
FYI, I opened support ticket 2431495 regarding this, and they claim there's already a backend gateway engineering ticket to handle it. So there's likely nothing to do on the Terraform side other than wait for the API to actually get fixed. If others who run into this could also push on it to help accelerate things, that'd be great. I've discussed this issue with the engineering side directly multiple times over the past few months; it's unfortunate this bug hasn't been addressed yet.
Given they're choosing to fix the API by adding proper pagination support (thereby limiting the number of items returned by default), I'd argue that since the Terraform provider always wants the entire list, it should use the largest page size available. Interestingly, it seems they chose to cap `per_page` at 1000, but you can still specify a `limit` of any size and get the entire list (>1000 items) in one shot. I suppose 1000 at a time is better than the default of 50.
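If the provider does end up restricted to `per_page` (capped at 1000) rather than a single large `limit`, it would need a pagination loop along these lines. This is a hedged sketch against a fake paged endpoint, not the real cloudflare-go client:

```python
def fetch_page(all_items, page, per_page=1000):
    # Hypothetical paged endpoint: ?page=N&per_page=M, with M capped at 1000.
    per_page = min(per_page, 1000)
    start = (page - 1) * per_page
    return all_items[start:start + per_page]

def fetch_all(all_items, per_page=1000):
    # Keep requesting pages until a short (or empty) page signals the end.
    items, page = [], 1
    while True:
        batch = fetch_page(all_items, page, per_page)
        items.extend(batch)
        if len(batch) < per_page:
            return items
        page += 1

remote = [f"bad{i}.example.com" for i in range(1240)]
print(len(fetch_all(remote)))  # 1240 items, fetched in two requests
```

Two requests for a 1240-item list is a clear improvement over silently truncating at 50, though a single large `limit` would still be cheaper.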
Any news on the issue? It's basically blocking us from moving to terraform managed config.
FYI, I'm currently working around the issue by simply replacing my list each time it changes. Not ideal, but the one list we have rarely changes. We're using it to manage a list of domains we want to block in a DNS policy.
```hcl
locals {
  bad_domains_sha = filebase64sha256("./bad_domains.csv")
}

resource "random_pet" "bad_domains" {
  # A new pet name (and thus a new list name) whenever the CSV changes.
  keepers = {
    sha = local.bad_domains_sha
  }
}

resource "cloudflare_teams_list" "bad_domains" {
  account_id = local.account_id
  name       = "bad domains - ${random_pet.bad_domains.id}"
  type       = "DOMAIN"
  items      = local.bad_domains

  lifecycle {
    ignore_changes        = [items]
    replace_triggered_by  = [random_pet.bad_domains.id]
    create_before_destroy = true
  }
}
```
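For reference, `filebase64sha256()` is just the base64-encoded SHA-256 digest of the file's raw bytes, so any edit to `bad_domains.csv` rotates the keeper and forces replacement. A Python equivalent (useful for precomputing or verifying the keeper value outside Terraform) looks like:

```python
import base64
import hashlib

def filebase64sha256(path):
    # Mirrors Terraform's filebase64sha256(): SHA-256 of the file's
    # raw bytes, base64-encoded.
    with open(path, "rb") as f:
        return base64.b64encode(hashlib.sha256(f.read()).digest()).decode()

# e.g. filebase64sha256("./bad_domains.csv")
```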
This functionality has been released in v3.27.0 of the Terraform Cloudflare Provider.
Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!