
A way to restrict which files a Terraform configuration can access

Open idangur opened this issue 4 years ago • 5 comments

Current Terraform Version

All terraform versions

Use-cases

I'm currently running Terraform via Atlantis (https://www.runatlantis.io/), and there is one main security gap: anyone with access to the repo can perform local file inclusion (LFI, https://en.wikipedia.org/wiki/File_inclusion_vulnerability) against any file on the system running Atlantis, and thereby read all of the credential files and other security-sensitive file contents on the pod.

Attempted Solutions

We thought about using native pod or Linux file permissions to block these file functions from reaching sensitive content, but the main issue with that approach is that the Atlantis user is a high-privilege user and itself needs access to some of those files (mainly the credential files used by providers).

Proposal

One of the proposals we thought of is including a whitelist/blacklist path variable as part of the configuration: whenever one of the file functions is called, Terraform would check the given file path against the whitelist/blacklist paths and verify that the operation is allowed, otherwise blocking it and returning a "file not found" error (so as to deter attackers from enumerating which files on the system are blocked this way).

References

Overview of how Atlantis works - https://www.runatlantis.io/docs/how-atlantis-works.html
Atlantis known security issues - https://www.runatlantis.io/docs/security.html

idangur commented Sep 29 '21 11:09

Hi @idangur! Thanks for sharing this use-case.

I've changed the summary of this issue because you framed it as being about the file function in particular, but the use-case you described is about access to sensitive files in general. That use-case wouldn't really be solved unless the solution also covered all of the other ways a Terraform configuration can access files, which include:

  • Using any of the other functions that read files from disk, which includes templatefile and filebase64.
  • Using the hashicorp/local provider's local_file data source to read a file.
  • Using various existing provider features that read files from disk, such as aws_s3_bucket_object's ability to upload a disk file directly into an S3 bucket.
  • Using the hashicorp/external provider's external data source to run shell commands that access files.
  • Writing a custom provider (which is arbitrary code running with the same privileges as Terraform) that uses normal Linux system calls to access files, outside of Terraform's supervision.

As I'm sure you can see from my examples above, a Terraform configuration is essentially just regular software, despite parts of it being written in a domain-specific language, and so giving someone access to run an arbitrary Terraform configuration on your system is largely the same as giving them access to run any other arbitrary software they might want to upload: they can make that system do anything that the executing process has access to do.

I think the problem you've described might, unfortunately, be fundamentally unsolvable: you want to both give this code access to sensitive resources and prevent the code from accessing those same resources. As you noted, Terraform and/or Terraform providers need access to these credentials in order to do their work, and Terraform already gives anyone who can run a configuration broad access to do all sorts of things with those credentials, even if they never extract the credentials directly themselves.

The most common answer to this today is to restrict who has access to submit arbitrary code to be run by Terraform in your privileged environment, and trust that those people will use their privileges only within the allowed bounds. I am, of course, now just repeating the advice in the Atlantis documentation.

I'm going to leave this open for now because it's an interesting use-case to think about, and there might be some way to solve it in the long run through techniques such as running different parts of Terraform in separate processes with different privileges. However, I also want to be up front with you that the current situation is unlikely to change for the foreseeable future: it would represent a significant change in system architecture and is not an area we have any current plans to work on. You should therefore design your use of Terraform around the current design, taking into account the advice in the Atlantis documentation.

Thanks again!

apparentlymart commented Sep 29 '21 16:09

Hi @apparentlymart,

I see and understand the issue at hand here, but what I'm wondering is whether it isn't possible to create a base function that validates the given path, which all file-based functions in Terraform would have to use?

As for the other means you mentioned of accessing local files, we have already covered most of those cases with special policy checks, i.e. blocking the use of hashicorp/external and hashicorp/local; additionally, all providers need to be approved or they are blocked before the plan is run.

My main issue here is that with Atlantis, any user who opens a PR needs a maintainer's approval before the plan can be applied, but Terraform's built-in file functions bypass those checks: they return a result in the plan phase, so you can simply output the file contents into a variable and print them. By contrast, most of the approved external providers we use only operate after an apply, which makes them far less dangerous because the PR must be approved beforehand.
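For illustration, a plan-time leak of this kind needs nothing more than a configuration like the following (the credential path here is invented for the example): file() is evaluated while building the plan, so the secret appears in the plan output before any approval or apply step happens.

```hcl
# Hypothetical malicious configuration: file() runs during plan,
# so the contents show up in the plan output Atlantis posts on the PR.
output "leak" {
  value = file("/home/atlantis/.aws/credentials")
}
```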

Thanks for your time and for writing such a thorough answer!

idangur commented Sep 30 '21 13:09

Old thread, sorry to necro it, @idangur. FWIW, I had exactly the same issue and managed to get pretty far by simply patching internal/lang/funcs/filesystem.go's openFile here to compare the file it wants to open against a whitelist (I wrote a little Go module that reads an HCL file containing the whitelisted filesystem glob patterns). From the testing I did, all local filesystem reads end up hitting this function, so gating it with a whitelist there immediately causes Atlantis plans to fail if a non-whitelisted path is read.
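The gating idea described above can be sketched roughly as follows. This is not Terraform's actual openFile code, just a minimal standalone Go sketch: the pattern list and function names are invented, and it assumes the glob patterns would come from operator-controlled configuration rather than the repo being planned.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// allowedPatterns stands in for the operator-supplied whitelist of
// filesystem glob patterns (hypothetical values for this sketch).
var allowedPatterns = []string{
	"/workspace/*.tpl",
	"/workspace/files/*",
}

// checkPath returns os.ErrNotExist for any path outside the whitelist,
// so callers cannot distinguish "blocked" from "missing" — the
// enumeration-deterrence behaviour suggested earlier in this thread.
func checkPath(path string) error {
	// Abs also cleans the path, so "../" traversal is resolved
	// before matching against the patterns.
	abs, err := filepath.Abs(path)
	if err != nil {
		return os.ErrNotExist
	}
	for _, pat := range allowedPatterns {
		if ok, _ := filepath.Match(pat, abs); ok {
			return nil
		}
	}
	return os.ErrNotExist
}

func main() {
	fmt.Println(checkPath("/workspace/main.tpl")) // <nil>
	fmt.Println(checkPath("/etc/passwd"))         // file does not exist
}
```

Note that filepath.Match's `*` does not cross path separators, so each pattern only matches one directory level; a real implementation would need to decide how to handle recursive globs and symlinks.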

The same thing appears to be possible with providers by gating them here

coderigo commented Sep 05 '24 06:09

Note that there are various providers that allow accessing the filesystem in one way or another, so solving this only with a policy for which providers are available is likely to be tricky.

For example, hashicorp/aws offers aws_s3_object which allows uploading anything from the local filesystem to any S3 bucket that the active AWS credentials can write to. Various other providers have similar functionality for uploading arbitrary files into remote storage locations.

I expect you would need a more fine-grained policy to get the constraints you want while still allowing use of the provider features you actually need.

apparentlymart commented Sep 06 '24 16:09

Yeah, definitely. In our case, the combination of local filesystem whitelisting and a provider whitelist puts as good a controlled barrier on local filesystem access as we can manage. Provider whitelisting in an Atlantis service is done very explicitly, so allowing hashicorp/aws*/** is likely fine compared to whitelisting, say, randomdeveloper/aws-provider. Not a perfect solution for sure, just an extra layer in the direction of one.

coderigo commented Sep 08 '24 01:09