
Support for s3 object storage

Open KiaraGrouwstra opened this issue 1 year ago • 19 comments

What would you like to see?

Hetzner recently introduced its S3-compatible Object Storage, offering immutable storage cheaper than its regular shared volumes. It would be cool if this provider could also facilitate configuring Hetzner Object Storage, though given that it's in beta, there is currently still a manual step involved to request access.

KiaraGrouwstra avatar Oct 02 '24 18:10 KiaraGrouwstra

Hey @KiaraGrouwstra,

Right now we do not plan to add support for the Object Storage S3 API in this Terraform provider. You can use any S3-compatible provider, such as the Minio provider, instead.

If using other providers does not work for you, could you explain the issues you have with them and the benefits you see in adding the APIs to this provider?
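For reference, a minimal sketch of pointing the community aminueza/minio provider at Hetzner Object Storage might look like this (the endpoint, region, and credential wiring are assumptions for the fsn1 location, not an official configuration):

```hcl
terraform {
  required_providers {
    minio = {
      source  = "aminueza/minio"
      version = "~> 3.0"
    }
  }
}

variable "s3_access_key" { sensitive = true }
variable "s3_secret_key" { sensitive = true }

provider "minio" {
  # Assumed endpoint for the fsn1 location of Hetzner Object Storage.
  minio_server   = "fsn1.your-objectstorage.com"
  minio_region   = "fsn1"
  minio_ssl      = true
  minio_user     = var.s3_access_key
  minio_password = var.s3_secret_key
}

resource "minio_s3_bucket" "example" {
  bucket = "example-bucket"
  acl    = "private"
}
```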

apricote avatar Oct 04 '24 08:10 apricote

I'll try that one - thank you for your response!

KiaraGrouwstra avatar Oct 04 '24 14:10 KiaraGrouwstra

I would have preferred to have everything from a single source. It would also feel strange if I could create servers, firewalls, etc. via the Hetzner CLI but not Object Storage. S3 user and secret management would also be nice to have in this provider.

I would vote for reopening the issue.

c33s avatar Oct 11 '24 17:10 c33s

As I understood it, creating buckets and access keys must be done using the Hetzner API. We have already successfully configured existing buckets (object lifecycle rules) using other existing providers, but it would be nice to be able to create the buckets using this provider (it saves a manual step in the UI).

BerndDA avatar Oct 12 '24 04:10 BerndDA

Hello all 👋

All our integrations rely on the Hetzner Cloud public API, which is available with a certain level of stability. Since the features you are requesting are not in the public API, we cannot implement them.

Therefore, for the time being, we do not plan to support these features in the provider.

Note that only a subset of the Amazon S3 features is currently supported.

We will leave this ticket open to increase its visibility. If you have questions, reach out to us using the Support Center.

jooola avatar Nov 14 '24 13:11 jooola

Please correct me if I am wrong, as I assume that the hcloud CLI code is the core of the Terraform provider; excuse the crosspost:

Let us vote for https://github.com/hetznercloud/cli/issues/918 - maybe these awesome Hetzner developers :heart: get a bigger budget if we vote for the issue, which I see as voting for them (the Hetzner developers).

cheers

c33s avatar Nov 15 '24 14:11 c33s

@apricote Just to let you know: a bunch of resources are not supported by the Minio Terraform provider in combination with Hetzner Object Storage, e.g. setting a public ACL on a bucket or creating a lifecycle rule.

3deep5me avatar Jan 09 '25 13:01 3deep5me

@apricote Just to let you know: a bunch of resources are not supported by the Minio Terraform provider in combination with Hetzner Object Storage, e.g. setting a public ACL on a bucket or creating a lifecycle rule.

Do you have a code example showing your use case? Have you tried the aws Terraform provider?

jooola avatar Jan 09 '25 15:01 jooola

@jooola thanks for your response. I tried the aws provider, but I was not able to change the region to something non-AWS-specific and had some issues with auth. If someone has a working config, that would be great!

This (at least) does not work right now with Hetzner:

resource "minio_ilm_policy" "bucket-lifecycle-rules" {
  bucket = minio_s3_bucket.bucket.bucket

  rule {
    id         = "expire-7d"
    expiration = "7d"
  }
}

Creating a public bucket as in the Terraform example also fails (reported on Reddit):

resource "minio_s3_bucket" "state_terraform_s3" {
  bucket = "state-terraform-s3"
  acl    = "public"
}

All the IAM stuff from minio doesn't work either.

3deep5me avatar Jan 09 '25 15:01 3deep5me

This leads me right now to do something like this 😢

resource "null_resource" "install_minio" {
  provisioner "local-exec" {
    command = <<EOT
      curl -o /usr/local/bin/mc https://dl.min.io/client/mc/release/linux-amd64/mc
      chmod +x /usr/local/bin/mc
    EOT
  }
}
# Import ILM rule using MinIO client
resource "null_resource" "import_lifecycle_rule" {
  provisioner "local-exec" {
    command = <<EOT
      echo '${jsonencode(var.bucket_lifecycle_rule)}' > expiry.json
      mc alias set myminio https://${var.hetzner_s3_fqdn} $MINIO_USER $MINIO_PASSWORD
      mc ilm rule import myminio/${minio_s3_bucket.bucket.bucket} < expiry.json
    EOT
  }
  depends_on = [null_resource.install_minio, minio_s3_bucket.bucket]
}
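For completeness, the var.bucket_lifecycle_rule referenced above could be declared along these lines; the JSON shape mirrors what mc ilm export produces, and the concrete values are illustrative assumptions:

```hcl
variable "bucket_lifecycle_rule" {
  description = "Lifecycle configuration imported via 'mc ilm rule import' (shape assumed)."
  type        = any
  default = {
    Rules = [
      {
        ID         = "expire-7d"
        Status     = "Enabled"
        Expiration = { Days = 7 }
      }
    ]
  }
}
```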

3deep5me avatar Jan 09 '25 15:01 3deep5me

Do you have some code example to show your use case?

The problem starts with the IAM stuff in MinIO. It's not possible to create a user in the first place.

eg.

resource "minio_iam_user" "some-user" {
  name = "some-custom-name"
}

It's not necessary for Hetzner to duplicate functionality into the hcloud Terraform provider. However, functionality that is distinct and cannot be achieved with third-party providers should be implemented.

In a comment above, it was mentioned that other tools can be used for various use cases, but no other method of creating users (IAM in general) was given.

Keisir avatar Jan 09 '25 18:01 Keisir

Another limitation is that you cannot delete a minio_s3_bucket_policy with the Terraform Minio provider. Only creation works:

minio_s3_bucket_policy.access_control_to_bucket: Destroying... [id=hetzner-pls-782yasd]
╷
│ Error: [FATAL] error deleting bucket (hetzner-pls-782yasd): 200 OK
│
│
╵

It would be great to have at least a list of which features are supported. Has anyone gotten the aws provider working?

3deep5me avatar Jan 17 '25 17:01 3deep5me

@3deep5me The following configuration should get you started using the aws Terraform provider:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  skip_region_validation      = true

  endpoints {
    s3 = "https://fsn1.your-objectstorage.com"
  }

  region = "fsn1"

  # Please check the docs on how to store these credentials safely.
  access_key = "<YOUR-ACCESS-KEY>"
  secret_key = "<YOUR-SECRET-KEY>"
}

resource "aws_s3_bucket" "main" {
  bucket = "my-bucket-a9c8ae4e"
}

resource "aws_s3_bucket_acl" "main" {
  bucket = aws_s3_bucket.main.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "main" {
  bucket = aws_s3_bucket.main.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  rule {
    id     = "expire-7d"
    status = "Enabled"

    expiration {
      days = 7
    }
  }
}

resource "aws_s3_bucket_policy" "main" {
  bucket = aws_s3_bucket.main.id

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect    = "Allow",
        Principal = "*",
        Action    = ["s3:GetObject"],
        Resource  = ["arn:aws:s3:::${aws_s3_bucket.main.bucket}/*"]
      }
    ]
  })
}

jooola avatar Jan 21 '25 21:01 jooola

I find this pretty unprofessional. Clearly, the official Terraform provider for hcloud should cover all hcloud products. This should not even be a discussion. It is really irrelevant to the user why it is not in the provider; it should be. I cannot classify using another provider as anything but a hack. That this even has to be said makes me very wary of using hcloud.

As a freelancer with many AWS customers that would love to migrate to hcloud: it is exactly friction points like this that make them go "oh, I see" and not migrate. AWS bends over backwards to make sure the user has a seamless experience, while at hcloud, it seems, when you tell them something is not working as expected, the response is an explanation of why it's not working instead of an effort to make it work.

mzhaase avatar Jan 30 '25 12:01 mzhaase

Creating and destroying S3-compatible storage via this provider should be a no-brainer. I am surprised that the team says they won't support it and instead directs us to third-party providers. If that's the case, then surely it's easy for the team to add support for it.

If Hetzner currently supports only a subset of S3, that is all the more reason to create their own Terraform resources to prevent users from shooting themselves in the foot.

This should not even be a discussion.

I share the same sentiments as @mzhaase on this one

khawarizmus avatar Jan 31 '25 08:01 khawarizmus

@jooola Thanks for the config. Does every resource you mentioned in the example support the apply/destroy operation?

3deep5me avatar Feb 04 '25 14:02 3deep5me

I tried the AWS-Provider - it's much better! Thanks again @jooola

To get aws_s3_bucket_lifecycle_configuration working, you need to set transition_default_minimum_object_size = "" in the resource.

Here is my configuration with object lock, versioning, lifecycle policy and bucket policy. I could apply and destroy this config without any problems.
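As an illustration of the workaround above (not the poster's actual configuration; the bucket reference and rule id are assumptions):

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "main" {
  bucket = aws_s3_bucket.main.id

  # Workaround for Hetzner Object Storage: clear the AWS-specific
  # default value ("all_storage_classes_128K").
  transition_default_minimum_object_size = ""

  rule {
    id     = "expire-7d"
    status = "Enabled"

    filter {}

    expiration {
      days = 7
    }
  }
}
```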

3deep5me avatar Feb 11 '25 10:02 3deep5me

Right now we do not plan to add support for the Object Storage S3 API in this Terraform provider. You can use any S3-compatible provider, such as the Minio provider, instead.

If using other providers does not work for you, could you explain the issues you have with them and the benefits you see in adding the APIs to this provider?

Referring to an external provider seems fine; there is already a "Creating a Bucket via MinIO Terraform Provider" article in the Hetzner Docs. It seems that documentation should be extended with the aws provider configuration above.

What we're really missing is a stabilization of the https://api.hetzner.cloud/v1/_object_storage_credentials API to create credentials, so that we can manage multiple buckets with restricted access.

Right now we resorted to using the internal API with a token from the web console.

provider "restapi" {
  alias = "hcloud_v1"
  uri                  = "https://api.hetzner.cloud/v1"
  write_returns_object = true
  debug                = true

  headers = {
    "Authorization" = "Bearer ${var.hcloud_token}"
    "Content-Type" = "application/json"
  }
}

# API is still private and only works with SPA tokens from HCloud Console.
# 1. Login to https://console.hetzner.cloud/projects/735113
# 2. Open Developer Console and record an API request
# 3. Find token in Authorization header
# 4. Export as TF_VAR_hcloud_token
resource "restapi_object" "object_storage_credentials" {
  for_each = local.projects

  provider = restapi.hcloud_v1
  path = "/_object_storage_credentials"
  data = jsonencode({description = each.key})
  id_attribute = "object_storage_credential/id"
}

Update: The main problem with this approach is that it requires a manual token for tf refresh, so it doesn't blend well with automated Terraform workflows. There is also an undocumented, fairly low limit of 10 S3 credentials per project at the moment. Together with the 100-bucket limit, this forced us to go with just a few buckets and (manually created) credentials. We then partitioned those using different encryption keys per service. Really not the preferred setup, as we also cannot restrict access by the encryption key hash.

MartinNowak avatar Feb 12 '25 16:02 MartinNowak

Another thing Minio seemingly cannot set - besides not being able to set ACLs correctly - is delete_protection. I find those pretty essential.

mfxa avatar May 02 '25 08:05 mfxa

It doesn't matter which provider I am using, but I would like to create new S3 credentials and groups with Terraform.

twaldecker avatar Jul 15 '25 14:07 twaldecker

@3deep5me The following configuration should get you started using the aws Terraform provider:


It fails to create the aws_s3_bucket_lifecycle_configuration.

$ tofu apply
aws_s3_bucket.main: Refreshing state... [id=ddht-storage-3]
aws_s3_bucket_policy.main: Refreshing state... [id=ddht-storage-3]
aws_s3_bucket_versioning.main: Refreshing state... [id=ddht-storage-3]
aws_s3_bucket_acl.main: Refreshing state... [id=ddht-storage-3,private]

OpenTofu used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

OpenTofu will perform the following actions:

  # aws_s3_bucket_lifecycle_configuration.main will be created
  + resource "aws_s3_bucket_lifecycle_configuration" "main" {
      + bucket                                 = "ddht-storage-3"
      + expected_bucket_owner                  = (known after apply)
      + id                                     = (known after apply)
      + transition_default_minimum_object_size = "all_storage_classes_128K"

      + rule {
          + id     = "expire-7d"
          + status = "Enabled"

          + expiration {
              + days                         = 365
              + expired_object_delete_marker = false
            }

          + filter {
            }
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  OpenTofu will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aws_s3_bucket_lifecycle_configuration.main: Creating...
aws_s3_bucket_lifecycle_configuration.main: Still creating... [10s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [20s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [30s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [40s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [50s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m0s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m10s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m20s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m30s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m40s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [1m50s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m0s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m10s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m20s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m30s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m40s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [2m50s elapsed]
aws_s3_bucket_lifecycle_configuration.main: Still creating... [3m0s elapsed]
╷
│ Error: creating S3 Bucket (ddht-storage-3) Lifecycle Configuration
│
│   with aws_s3_bucket_lifecycle_configuration.main,
│   on main.tf line 48, in resource "aws_s3_bucket_lifecycle_configuration" "main":
│   48: resource "aws_s3_bucket_lifecycle_configuration" "main" {
│
│ While waiting: timeout while waiting for state to become 'true' (last state: 'false', timeout: 3m0s)

twaldecker avatar Jul 15 '25 14:07 twaldecker

I'm also running into a timeout error when trying to create a lifecycle configuration with the AWS provider.

...
resource "aws_s3_bucket_lifecycle_configuration" "object_lock_lifecycle" {
  bucket = aws_s3_bucket.bucket.id
  transition_default_minimum_object_size = ""
  rule {
    id     = "cleanup-after-object-lock"
    status = "Enabled"

    filter {}

    noncurrent_version_expiration {
      noncurrent_days = 7
    }

    expiration {
      expired_object_delete_marker = true
    }
  }
}
...

BUT the configuration seems to be created just fine:

aws s3api get-bucket-lifecycle-configuration --bucket <bucket-name> --endpoint-url=https://fsn1.your-objectstorage.com

{
    "Rules": [
        {
            "Expiration": {
                "ExpiredObjectDeleteMarker": true
            },
            "ID": "cleanup-after-object-lock",
            "Prefix": "",
            "Status": "Enabled",
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 7
            }
        }
    ]
}

Is this an issue with the provider or the Hetzner API?

tomtrix avatar Jul 20 '25 11:07 tomtrix

I switched over to the Minio provider several weeks ago, which worked out just fine.

Since today, I'm not able to create any buckets via Terraform/OpenTofu anymore, due to an "inconsistent result".

TF gets the feedback "your bucket was created", but the subsequent read access fails because "the bucket was not found".

terraform {
  required_providers {
    minio = {
      source  = "aminueza/minio"
      version = "~> 3.0"
    }
  }
}

provider "minio" {
  minio_server = "fsn1.your-objectstorage.com"
  minio_region = "fsn1"
  minio_ssl    = true
}

resource "minio_s3_bucket" "bucket" {
  bucket         = "tkit-test123test-fsn1"
  acl            = "private"
}

Why doesn't Hetzner provide a TF provider that just works with their S3 setup? At the moment it's pure pain...

Can you please support us @jooola?

tomtrix avatar Aug 20 '25 20:08 tomtrix

Hey @tomtrix,

thanks for notifying us about an issue with BucketExist right after creating a new bucket. The Object Storage team made a change in the CreateBucket code to wait until the bucket is visible, which should fix the issue you are seeing. If you still encounter an issue with this, please open a ticket with Hetzner Support.

apricote avatar Aug 26 '25 08:08 apricote

Hey @tomtrix,

thanks for notifying us about an issue with BucketExist right after creating a new bucket. The Object Storage team made a change in the CreateBucket code to wait until the bucket is visible, which should fix the issue you are seeing. If you still encounter an issue with this, please open a ticket with Hetzner Support.

Awesome, this fixed the issue, thanks. :)

tomtrix avatar Sep 08 '25 10:09 tomtrix

I am already using the aws provider to manage DNS records. Now I need to configure a second aws provider with a different configuration. This is not recommended by Terraform and feels like a hack. I wish I could manage buckets using the hcloud provider.

hannesortmeier avatar Nov 09 '25 15:11 hannesortmeier

Starting off with Terraform projects, the first thing for me is always to set up the remote state. Coming from AWS, Azure and the other providers, this is just natural and easy to do in my opinion. It's interesting to see that Hetzner points to workarounds instead of working on integrating their great products into their own tooling.

So I am interested in this being added as well, instead of hacking my way around with the old classic AWS provider or other tools. Thanks. :)

PeterWunderlich avatar Nov 12 '25 14:11 PeterWunderlich

Remote state is part of the core Terraform program, right? As far as I understand, this is not something that providers can bring.

apricote avatar Nov 13 '25 14:11 apricote

@apricote, yes, it is. Not sure if I misunderstand your point now or if I wasn't clear in my requirement. Usually, e.g. with AWS, I create an S3 bucket, get the keys/credentials and set up the "s3" backend in my Terraform repository, which is super easy because I already have the AWS provider configured and initialized. Same for the rest of the providers I have used so far.

The experience with my Hetzner setup was a bit different, and I was quite surprised that I already struggled with that part. 😅 Maybe that's just me?
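For reference, an s3 backend pointed at Hetzner Object Storage can be sketched like this (Terraform >= 1.6 syntax; the bucket name is an assumption, and the skip_* flags are needed because the backend is not talking to real AWS):

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state" # assumed bucket name
    key    = "terraform.tfstate"
    region = "fsn1"

    endpoints = {
      s3 = "https://fsn1.your-objectstorage.com"
    }

    # Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY.
    skip_credentials_validation = true
    skip_region_validation      = true
    skip_requesting_account_id  = true
    skip_metadata_api_check     = true
    skip_s3_checksum            = true
    use_path_style              = true
  }
}
```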

PeterWunderlich avatar Nov 13 '25 15:11 PeterWunderlich