rover
[Feature Request] Terraform Cloud integration
Integrate with Terraform Cloud. When you provide rover TFC credentials, rover should be able to pull the latest plan and generate a visualization
It would be nice if we could create a GitHub Action with this integration.
Initial code on the tfc-integration branch can take a Terraform plan as JSON; it just needs the pull from the Terraform Cloud API implemented.
I tested it locally using go run . -planJSONPath=plan.json -tfConfigExists=false and verified it worked. To generate the plan JSON locally, first generate the plan file:
terraform plan -out=plan.out
then, write the planfile into a JSON:
terraform show -json plan.out > plan.json
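For reference, here's a minimal Go sketch of decoding that JSON plan. The Plan and ResourceChange structs below are illustrative, cover only a small subset of the terraform show -json format, and are not rover's actual types:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Plan is a pared-down view of the JSON emitted by
// `terraform show -json plan.out`.
type Plan struct {
	FormatVersion   string           `json:"format_version"`
	ResourceChanges []ResourceChange `json:"resource_changes"`
}

// ResourceChange captures a single planned change.
type ResourceChange struct {
	Address string `json:"address"`
	Change  struct {
		Actions []string `json:"actions"`
	} `json:"change"`
}

// parsePlan decodes the plan JSON into the structs above.
func parsePlan(data []byte) (*Plan, error) {
	var p Plan
	if err := json.Unmarshal(data, &p); err != nil {
		return nil, fmt.Errorf("unable to parse plan: %w", err)
	}
	return &p, nil
}

func main() {
	// Inline sample standing in for the contents of plan.json.
	sample := []byte(`{
		"format_version": "0.2",
		"resource_changes": [
			{"address": "random_pet.server", "change": {"actions": ["create"]}}
		]
	}`)
	p, err := parsePlan(sample)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(p.ResourceChanges[0].Address) // random_pet.server
}
```

Since the same JSON shape comes back from the TFC API's plan endpoint, the same decoding path should work for both local files and pulled plans.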
Implementing the TFC portion seems pretty straightforward based on the API docs... should we expect rover to generate a new run if a plan doesn't already exist? That seems like it could lead to unintended consequences...
I'll implement it without triggering a new run for now. Based on feedback, I might add another flag that creates a new run if one doesn't already exist.
I agree, especially since TFC could queue runs together.
Love the first MVP of this though, being able to pass that JSON file to generate the rover output is a huge win!
Just integrated TFC and tested it out!
This requires you to set your Terraform Cloud token in the TFC_TOKEN environment variable. Afterwards, there are three new flags:
- -tfcOrg defines your Terraform Cloud organization (required for TFC)
- -tfcWorkspace defines your Terraform Cloud workspace (required for TFC)
- -tfcNewRun specifies whether you want rover to generate a new run in the specified workspace
You must set the -tfConfigExists flag to false, since rover only has access to the plan file (no direct access to the configuration files).
If you do not set -tfcNewRun to true (default: false), rover will retrieve the latest run and generate a visualization based on that, so your visualization may be slightly outdated. If you set -tfcNewRun to true, rover will first check whether the latest run requires an action (apply/discard); if it does, rover will error out and not generate a new run (since the new run would default to pending). There's a 5 minute timeout for new runs, but I'll probably add a variable so users can adjust it.
I've tested it a couple of times and it seems to work well. There's a slight hiccup when rover generates a new run, since it takes time for the plan JSON to become available. I hardcoded a 10 second pause and it has worked so far, but I need to experiment more to make it more robust.
With new run:
go run . -tfcOrg=hashicorp-training -tfcWorkspace=tf-random -tfcNewRun=true -tfConfigExists=false
2021/11/03 23:54:16 Starting Rover...
2021/11/03 23:54:18 Starting new Terraform Cloud run in tf-random workspace...
2021/11/03 23:54:28 Run run-VPKfkxje5eWd7xcU to completed!
2021/11/03 23:54:28 Generating resource overview...
2021/11/03 23:54:28 Generating resource map...
2021/11/03 23:54:28 Generating resource graph...
2021/11/03 23:54:28 Done generating assets.
2021/11/03 23:54:28 Rover is running on 0.0.0.0:9000
Without new run:
go run . -tfcOrg=hashicorp-training -tfcWorkspace=tf-random -tfConfigExists=false
I'm planning on looking at https://github.com/im2nguyen/rover/issues/46 later this week to determine whether it's feasible to include it in this release.
If it is, I'll aim for a new release with both features the following week. Otherwise, I'll cut a release with the TFC integration only so folks can use it immediately.
This looks awesome. The only thing I can think of is that TFC treats terraform plan as a speculative run, and I don't believe speculative runs appear under workspaces/:ws/runs. So this would only be used in an apply circumstance, or if you want to view the currently applied run. I believe this "could" introduce a race condition if the user decides not to use -tfcNewRun, more likely if auto apply isn't set to true.
reference branch tfc-integration/main.go#L288.
Workflow
In theory user A could initiate a terraform apply which would render:
run.Items[0] // User A run_id as run-123
and then sit and wait for confirmation. Then, while waiting for confirmation or immediately following the first terraform apply, user B initiates their run, which renders:
run.Items[0] // User B run_id as run-456
Rover result
At that same moment, user A runs rover and gets user B's run_id:
run.Items[0] // User A run_id as run-456
Let me know if this doesn't quite make sense and I can include a mock test I ran.
Probably a very, very rare circumstance, but it could still happen and is something to think about as the feature is developed, or for later iterations to enhance durability.
Edit: this will be extremely unlikely due to the console waiting for completion. This would really only occur in the following circumstances (AFAIK):
- In a local circumstance using a split shell session.
- VCS workflow in conjunction with local development.
- First run gets applied right as the second is taking place and the first pipeline calls rover.
Once again, this would really only happen with terraform apply, so it's probably a very rare use case; you could probably just disregard the possible race condition due to its low occurrence.
Yep, that makes sense. That was the issue I was running into earlier today when testing too -- thank you for articulating and explaining it so clearly. I'm going to think about a better solution over the next couple of days; the only one I can think of is iterating through the runs until one with a plan is found.
It's not an elegant solution, but I think if we do that, plus append the run ID to the workspace name (so it appears in the visualization), it'll help address the workflow you raised.
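The iterate-through-the-runs idea could look roughly like this. The run struct and latestRunWithPlan are hypothetical stand-ins for the TFC API types, not rover's code; the list is assumed newest-first, as the runs endpoint returns it:

```go
package main

import (
	"errors"
	"fmt"
)

// run is a pared-down stand-in for a Terraform Cloud run. HasPlan
// marks runs whose plan output can actually be fetched.
type run struct {
	ID      string
	HasPlan bool
}

// latestRunWithPlan walks the list (newest first) and returns the
// first run that has a plan, instead of blindly taking Items[0].
func latestRunWithPlan(runs []run) (run, error) {
	for _, r := range runs {
		if r.HasPlan {
			return r, nil
		}
	}
	return run{}, errors.New("no run with an available plan found")
}

func main() {
	// run-456 was just queued by user B and has no plan yet, so rover
	// falls back to user A's run-123.
	runs := []run{
		{ID: "run-456", HasPlan: false},
		{ID: "run-123", HasPlan: true},
	}
	r, _ := latestRunWithPlan(runs)
	fmt.Println(r.ID) // run-123
}
```

Combined with appending the run ID to the workspace name, this at least makes it visible which run the visualization came from.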
One possible solution -- you could make it so if someone uses -tfcNewRun it just uses that, otherwise let someone supply something like -tfcRunId. That way you leave it up to the issuer what run they're looking to visualize. Very similar to the -planPath and -planJSONPath, except you do some of the heavy lifting with the TFC API.
Definitely interested in seeing what you come up with!!
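The suggested precedence between -tfcNewRun and the proposed -tfcRunId flag could be sketched like this; resolveRun and its return values are illustrative, not an actual rover API:

```go
package main

import "fmt"

// resolveRun sketches the flag precedence suggested above: -tfcNewRun
// wins, then an explicit -tfcRunId, otherwise fall back to the latest
// run in the workspace. The returned string describes which source of
// the run ID to use.
func resolveRun(newRun bool, runID string) string {
	switch {
	case newRun:
		return "create-new-run"
	case runID != "":
		return "use-run:" + runID
	default:
		return "latest-run"
	}
}

func main() {
	fmt.Println(resolveRun(true, "run-abc"))  // create-new-run
	fmt.Println(resolveRun(false, "run-abc")) // use-run:run-abc
	fmt.Println(resolveRun(false, ""))        // latest-run
}
```

This mirrors the existing -planPath / -planJSONPath split: the user states exactly what they want to visualize, and rover only does the API lookup when nothing explicit is given.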
I think -tfcRunId sounds like a great solution too. The only catch is that specifying the run ID breaks the flow (I think the only "easy" way to get the run ID is from the workspace's Runs page). It makes for a great escape hatch though, and it should be pretty straightforward to implement.
Currently, if someone uses -tfcNewRun, rover uses that run ID once the plan completes.
Gonna aim for a release next week. My personal laptop died, so I dropped it off for repairs.
I would love to see this feature officially added. In the meantime, I'm noting here that I'm running into an issue:
$ docker run -e TFC_TOKEN --rm -it -p 9000:9000 -v $(pwd):/src im2nguyen/rover -tfcOrg=myOrg -tfcWorkspace=myWorkspace -tfcNewRun=true
2022/04/06 04:32:25 Starting Rover...
2022/04/06 04:32:26 Unable to parse Plan: Did not create new run. run-<runid> in myWorkspace in myOrg is still active
The run is not active. Possibly more helpful details:

TIA for any help!
Just to note, I did get it to work using this command:
docker run -e TFC_TOKEN --rm -it -p 9000:9000 -v $(pwd):/src im2nguyen/rover -tfcOrg=myOrg -tfcWorkspace=myWorkspace
Two things about this:
- I think our graph is pretty complex and it locked up my browser for quite a while. Then, as soon as I started zooming out, I lost the whole thing and didn't find a way to "reset"; loading it and then having to start over was fairly painful.
- The way I understood your previous comment about -tfcNewRun, I think it meant I should only be able to inspect the current state and not evaluate a new plan. However, in the results I did see different items under "Proposed State", but I wonder if that was just an artifact of my last apply having an error, where a plan was supposed to update resources but the apply step failed.