generate/hcl: support module calls through `source` keyword
In this PR, we add support for module calls, covering local, remote, and Terraform registry sources.
The `moduleInstall` function is called recursively to walk through the `ModuleCalls` of each module and feed a `ManagedResource` slice. It keeps a trace of each installed element to avoid infinite loops, and fetched modules are cached under the XDG cache directory.
Then it loops over the `ManagedResource` slice with the current behavior.
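
To make the flow concrete, here is a minimal sketch of what such a recursive walk could look like. This is an illustration, not the PR's actual code: the function signature, the `isLocal` helper, and the use of `terraform-config-inspect` and `go-getter` are assumptions, and registry source resolution is elided.

```go
package generate

import (
	"fmt"
	"path/filepath"
	"strings"

	getter "github.com/hashicorp/go-getter"
	"github.com/hashicorp/terraform-config-inspect/tfconfig"
)

// moduleInstall loads the module at path, collects its managed resources,
// then recurses into every module call. installed keeps a trace of sources
// already handled so that cyclic module calls cannot loop forever.
func moduleInstall(path, cacheDir string, installed map[string]struct{}, resources *[]*tfconfig.Resource) error {
	mod, diags := tfconfig.LoadModule(path)
	if diags.HasErrors() {
		return diags.Err()
	}
	for _, r := range mod.ManagedResources {
		*resources = append(*resources, r)
	}
	for _, mc := range mod.ModuleCalls {
		if _, ok := installed[mc.Source]; ok {
			continue // already installed: avoid the infinite loop
		}
		installed[mc.Source] = struct{}{}

		dst := filepath.Join(path, mc.Source)
		if !isLocal(mc.Source) {
			// Remote (and, once resolved, registry) modules are downloaded
			// once into the cache directory. The cache layout here is an
			// assumption for the sketch.
			dst = filepath.Join(cacheDir, filepath.Base(mc.Source))
			if err := getter.Get(dst, mc.Source); err != nil {
				return fmt.Errorf("could not fetch module %q: %w", mc.Source, err)
			}
		}
		if err := moduleInstall(dst, cacheDir, installed, resources); err != nil {
			return err
		}
	}
	return nil
}

// isLocal reports whether a module source is a local path, following
// Terraform's convention that local sources start with "./" or "../".
func isLocal(src string) bool {
	return strings.HasPrefix(src, "./") || strings.HasPrefix(src, "../")
}
```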
A few notes:
- I'd like to use an in-memory FS rather than the actual FS, but go-getter needs to be updated accordingly (https://github.com/hashicorp/go-getter/issues/83: I started working on it, by the way). See the sketch after these notes for the idea.
- I still need to find a way to properly test this :)
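
On the first note above, here is a minimal sketch, assuming the afero library, of how an in-memory FS could replace the on-disk cache once go-getter supports writing to an abstract filesystem; the `/cache/stack-gke/main.tf` path and the file content are made up for illustration:

```go
package main

import (
	"fmt"

	"github.com/spf13/afero"
)

func main() {
	// afero.NewMemMapFs gives an in-memory filesystem: nothing touches
	// disk, which would also make tests hermetic.
	fs := afero.NewMemMapFs()

	// Pretend go-getter fetched a module and wrote it here.
	err := afero.WriteFile(fs, "/cache/stack-gke/main.tf",
		[]byte(`resource "null_resource" "example" {}`), 0o644)
	if err != nil {
		panic(err)
	}

	b, err := afero.ReadFile(fs, "/cache/stack-gke/main.tf")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(b))
}
```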
Example output using this stack: https://github.com/cycloid-community-catalog/stack-gke

```
$ ./inframap generate --hcl generate/testdata/stack-gke
strict digraph G {
"google_container_node_pool.pools"->"google_container_cluster.primary";
"null_resource.delete_default_kube_dns_configmap"->"google_container_cluster.primary";
"null_resource.delete_default_kube_dns_configmap"->"google_container_node_pool.pools";
"kubernetes_config_map.kube-dns-upstream-namservers"->"null_resource.delete_default_kube_dns_configmap";
"kubernetes_config_map.kube-dns-upstream-namservers"->"google_container_cluster.primary";
"kubernetes_config_map.kube-dns-upstream-namservers"->"google_container_node_pool.pools";
"kubernetes_config_map.kube-dns-upstream-nameservers-and-stub-domains"->"null_resource.delete_default_kube_dns_configmap";
"kubernetes_config_map.kube-dns-upstream-nameservers-and-stub-domains"->"google_container_cluster.primary";
"kubernetes_config_map.kube-dns-upstream-nameservers-and-stub-domains"->"google_container_node_pool.pools";
"kubernetes_config_map.ip-masq-agent"->"google_container_cluster.primary";
"kubernetes_config_map.ip-masq-agent"->"google_container_node_pool.pools";
"null_resource.wait_for_cluster"->"google_container_cluster.primary";
"null_resource.wait_for_cluster"->"google_container_node_pool.pools";
"google_service_account.cluster_service_account"->"random_string.cluster_service_account_suffix";
"kubernetes_config_map.kube-dns"->"null_resource.delete_default_kube_dns_configmap";
"kubernetes_config_map.kube-dns"->"google_container_cluster.primary";
"kubernetes_config_map.kube-dns"->"google_container_node_pool.pools";
"google_container_cluster.primary" [ shape=ellipse ];
"google_container_node_pool.pools" [ shape=ellipse ];
"google_service_account.cluster_service_account" [ shape=ellipse ];
"kubernetes_config_map.ip-masq-agent" [ shape=ellipse ];
"kubernetes_config_map.kube-dns" [ shape=ellipse ];
"kubernetes_config_map.kube-dns-upstream-nameservers-and-stub-domains" [ shape=ellipse ];
"kubernetes_config_map.kube-dns-upstream-namservers" [ shape=ellipse ];
"null_resource.delete_default_kube_dns_configmap" [ shape=ellipse ];
"null_resource.wait_for_cluster" [ shape=ellipse ];
"random_string.cluster_service_account_suffix" [ shape=ellipse ];
}
```
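
As a usage note, this DOT output can be piped straight into Graphviz to render an image (the output file name is arbitrary):

```sh
$ ./inframap generate --hcl generate/testdata/stack-gke | dot -Tpng > stack-gke.png
```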
Closes: https://github.com/cycloidio/inframap/issues/56, https://github.com/cycloidio/inframap/issues/54
Ping @Thomas-lhuillier hehe.
What do I have to do? 😅
@Thomas-lhuillier I guess @xescugc wanted to ping me instead - but feel free to work on this PR if you want to. :see_no_evil:
Anyway, thanks for the ping! Besides the comments to address, I guess there is not much effort left to get this PR landed in a new release.
Are you also still working on adding the afero.Fs implementation to the TF lib?
Nope. It was quite a mess to bring afero.Fs support to the lib, IIRC.
Makes sense 👍 Well, this all seems out of reach to me, maybe after a couple of months/years of training :trollface:
Yes, I think it was an autocomplete error from GH and I did not check, haha. The ping was meant for @tormath1 (right, if I type @th, the first suggestion is Thomas, haha).