terraform-provider-bigip
manage 'n' BIG-IPs
The provider currently requires using provider aliases for environments with more than one BIG-IP.
For those with larger enterprise environments, the provider should work for an arbitrary number of BIG-IPs, using count or a similar indexing mechanism.
This would likely be a breaking change to the provider interface.
Hi,
Count is not supported for providers. There have been quite a few threads in the HashiCorp Terraform repository about this. This is the latest AFAIK: https://github.com/hashicorp/terraform/issues/24476#issuecomment-700368878
Summary from the previous link:
I've shared the above mainly to just show some initial design work that happened for this family of features. However, I do have to be honest and share some unfortunate news: the focus of our work is now shifting towards stabilizing Terraform's current featureset (with minor modifications where necessary) in preparation for a Terraform 1.0, and a mechanism like the one I described above would be too disruptive to Terraform's internal design to arrive before that point. The practical upshot of this is that further work on this feature couldn't begin until at least after Terraform 1.0 is released. Being realistic about what other work likely confronts us even after the 1.0 release, I'm going to hazard a guess that it will be at least a year before we'd be able to begin detailed design and implementation work for features in this family. I understand that this is not happy news: I want this feature at least as much as you all do, but with finite resources and conflicting priorities we must unfortunately make some hard tradeoffs. I strongly believe that there is a technical design to address the use-cases discussed here, but I also want to be candid with you all about the timeline so that you can set your expectations accordingly.
So it seems this is not something we will be able to facilitate in the short term.
I imagined an architectural shift in which the provider did not authenticate to a BIG-IP. Instead, there would be a BIG-IP resource capable of using count. The id of the BIG-IP resource would be passed to AS3, DO, and other resources. The connections to the BIG-IPs for each resource could be handled via a provider singleton or per BIG-IP.
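For illustration only, a configuration under that hypothetical design might look something like this (the bigip_device resource type and device_id argument are invented here to sketch the idea; they do not exist in the provider today):

variable "bigip_addresses" {
  type = list(string)
}

variable "bigip_password" {
  type      = string
  sensitive = true
}

# Hypothetical: one managed device object per BIG-IP, scaled with count.
resource "bigip_device" "all" {
  count    = length(var.bigip_addresses)
  address  = var.bigip_addresses[count.index]
  username = "admin"
  password = var.bigip_password
}

# Hypothetical: each AS3 declaration targets a device by id, so the
# connection is resolved per device rather than per provider block.
resource "bigip_as3" "app" {
  count     = length(var.bigip_addresses)
  device_id = bigip_device.all[count.index].id
  as3_json  = file("${path.module}/as3.json")
}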
Hi
The pattern we are using is assigning an alias to each of our providers. We can then handle multiple connections for the same provider.
Example
module "app1" {
source = "../modules/f5_web_vip_and_pool"
providers = {
bigip = bigip.bigip1
}
variable1 = "value1"
}
provider "bigip" {
alias = "bigip1"
address = "bigip1.domain.internal"
username = "admin"
password = "password"
}
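A second BIG-IP then gets its own alias and module call following the same pattern (the bigip2 names below are placeholders):

provider "bigip" {
  alias    = "bigip2"
  address  = "bigip2.domain.internal"
  username = "admin"
  password = "password"
}

module "app2" {
  source = "../modules/f5_web_vip_and_pool"
  providers = {
    bigip = bigip.bigip2
  }
  variable1 = "value1"
}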
Thanks very much for your code sample @ehlomarcus.
The use case I'm considering is one where the number of BIG-IPs is arbitrarily large and their attributes are not known beforehand. As I understand it, the provider alias approach requires knowing the number of BIG-IPs in advance and writing a provider stanza for each one. For a relatively static environment of a handful of BIG-IPs, that is likely very manageable. For dynamic environments with a large number of BIG-IPs (dozens, hundreds), the Terraform code becomes unmanageable.
@mjmenger
I'm not that familiar with all the options available for managing BIG-IP, but I think the use case you describe is a better fit for BIG-IQ management, which is something we have also tested.
BIG-IQ did not have Terraform support before; that might have changed. But using it, you need to manage everything with AS3.
AS3 has a rather steep learning curve, but it is pretty OK once you get used to it.
The dream scenario would be Terraform-native AS3 resources. That way you would not need to handle static JSON files or templates.
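Today that typically means rendering the declaration yourself and handing the resulting JSON to the bigip_as3 resource, roughly like this (the template file name and its variables are illustrative):

resource "bigip_as3" "web" {
  # Render a JSON template with per-environment values before
  # posting it to the AS3 endpoint on the BIG-IP.
  as3_json = templatefile("${path.module}/as3.tpl.json", {
    tenant       = "web_tenant"
    pool_members = ["10.0.0.10", "10.0.0.11"]
  })
}

Terraform-native AS3 resources would replace that template layer with plain HCL arguments.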
I have done some testing with the Terraform F5 BIG-IP provider and found that it has some limitations on the scaling side. I have been talking to Mark, and we agree on making this option possible, since MS still has that requirement. Thanks
Have you guys considered using Terraform workspaces? Just a thought.
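As a sketch, one workspace per BIG-IP could select the target device from a map, so the same configuration is applied once per workspace (the variable names and map contents are illustrative):

variable "bigip_addresses" {
  type = map(string)
  default = {
    dc1 = "bigip1.domain.internal"
    dc2 = "bigip2.domain.internal"
  }
}

variable "bigip_password" {
  type      = string
  sensitive = true
}

provider "bigip" {
  # terraform.workspace picks the device for the currently selected
  # workspace, e.g. after `terraform workspace select dc1`.
  address  = var.bigip_addresses[terraform.workspace]
  username = "admin"
  password = var.bigip_password
}

Note that this still requires one workspace per device, so it trades per-device provider stanzas for per-device state files.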
Hi, closing this request now. Please re-open if required or send an email to [email protected]. Thanks!