terraform-provider-b2
Can't Import Existing b2_bucket Resource
Problem
I can't import existing buckets into my Terraform config.
`terraform -v`:

```
Terraform v1.1.7
on darwin_amd64
+ provider registry.terraform.io/backblaze/b2 v0.8.0
+ provider registry.terraform.io/hashicorp/aws v4.8.0

Your version of Terraform is out of date! The latest version
is 1.1.8. You can update by downloading from https://www.terraform.io/downloads.html
```
Steps To Reproduce
- Create a bucket on Backblaze.
- Write some Terraform like:

```hcl
terraform {
  required_providers {
    b2 = {
      source  = "Backblaze/b2"
      version = "0.8.0"
    }
  }
}

provider "b2" {
  application_key_id = "blah"
  application_key    = "blah"
  endpoint           = "https://s3.some-region-000.backblazeb2.com"
}

resource "b2_bucket" "mybucket" {
  bucket_name = "mybucket"
  bucket_type = "allPrivate"
}
```
- Run `terraform init` and `terraform import b2_bucket.mybucket $id`, as sketched below.
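For reference, the exact commands; in this sketch `$id` is assumed to hold the bucket ID (the `bucketId` value B2 reports for the bucket):

```shell
terraform init
# $id is a placeholder for the bucket ID to import.
terraform import b2_bucket.mybucket $id
```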
Traceback
```
2022-04-17T23:37:40.880-0400 [ERROR] vertex "import b2_bucket.mybucket result" error: Traceback (most recent call last):
  File "b2sdk/b2http.py", line 348, in _translate_errors
  File "json/__init__.py", line 346, in loads
  File "json/decoder.py", line 337, in decode
  File "json/decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "b2_terraform/provider_tool.py", line 512, in run_command
  File "b2_terraform/provider_tool.py", line 150, in run
  File "b2_terraform/provider_tool.py", line 159, in provider_authorize_account
  File "logfury/_logfury/trace_call.py", line 86, in wrapper
  File "b2sdk/api.py", line 162, in authorize_account
  File "b2sdk/session.py", line 113, in authorize_account
  File "b2sdk/raw_api.py", line 377, in authorize_account
  File "b2sdk/raw_api.py", line 371, in _post_json
  File "b2sdk/b2http.py", line 245, in post_json_return_json
  File "b2sdk/b2http.py", line 211, in post_content_return_json
  File "b2sdk/b2http.py", line 413, in _translate_and_retry
  File "b2sdk/b2http.py", line 398, in _translate_errors
b2sdk.exception.UnknownError: Unknown error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')

2022-04-17T23:37:40.881-0400 [ERROR] vertex "b2_bucket.mybucket (import id \"myid\")" error: Traceback (most recent call last):
  File "b2sdk/b2http.py", line 348, in _translate_errors
  File "json/__init__.py", line 346, in loads
  File "json/decoder.py", line 337, in decode
  File "json/decoder.py", line 355, in raw_decode
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "b2_terraform/provider_tool.py", line 512, in run_command
  File "b2_terraform/provider_tool.py", line 150, in run
  File "b2_terraform/provider_tool.py", line 159, in provider_authorize_account
  File "logfury/_logfury/trace_call.py", line 86, in wrapper
  File "b2sdk/api.py", line 162, in authorize_account
  File "b2sdk/session.py", line 113, in authorize_account
  File "b2sdk/raw_api.py", line 377, in authorize_account
  File "b2sdk/raw_api.py", line 371, in _post_json
  File "b2sdk/b2http.py", line 245, in post_json_return_json
  File "b2sdk/b2http.py", line 211, in post_content_return_json
  File "b2sdk/b2http.py", line 413, in _translate_and_retry
  File "b2sdk/b2http.py", line 398, in _translate_errors
b2sdk.exception.UnknownError: Unknown error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')

2022-04-17T23:37:40.883-0400 [DEBUG] provider.stdio: received EOF, stopping recv loop: err="rpc error: code = Unavailable desc = transport is closing"
2022-04-17T23:37:40.890-0400 [DEBUG] provider: plugin process exited: path=.terraform/providers/registry.terraform.io/backblaze/b2/0.8.0/darwin_amd64/terraform-provider-b2_0.8.0 pid=42118
2022-04-17T23:37:40.890-0400 [DEBUG] provider: plugin exited
```
Hi @estheruary - that's weird; we just added that function and tested it during development, and it was fine. Could you please run it again with debug mode enabled? See here for how to do that: https://github.com/Backblaze/terraform-provider-b2/#debugging
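For reference, one way to capture debug logs is via Terraform's standard logging variables; this is only a sketch, and the linked README may describe additional provider-specific steps:

```shell
# Assumption: standard Terraform logging; see the linked README for any
# provider-specific debug settings.
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-debug.log
terraform import b2_bucket.mybucket $id
```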
Here are my debug logs. This is the whole main.tf:
```hcl
terraform {
  required_providers {
    b2 = {
      source  = "Backblaze/b2"
      version = "0.8.0"
    }
  }
}

provider "b2" {
  application_key_id = "mykeyid"
  application_key    = "mykey"
  endpoint           = "https://s3.us-west-000.backblazeb2.com"
}

resource "b2_bucket" "mybucket" {
  bucket_name = "inspiredbyes"
  bucket_type = "allPrivate"
}
```
It would seem that b2-sdk-python fails on the very first byte when trying to deserialize a JSON-formatted error returned when calling listBuckets. This looks like an environment error.
Could you please try the b2 command line tool on that account and execute `b2 get-bucket --verbose my_bucket` from the same environment?
Steps performed:
- Downloaded the b2 CLI from here.
- Ran `b2 authorize-account` with the same credentials as Terraform.
- Ran `b2 get-bucket --verbose mybucket`.

Here's the output:
```
INFO:__main__:// ======================================== 3.3.0 ======================================== \\
DEBUG:__main__:platform is macOS-10.16-x86_64-i386-64bit
DEBUG:__main__:Python version is CPython 3.10.4 (main, Mar 24 2022, 14:03:15) [Clang 12.0.0 (clang-1200.0.32.29)]
DEBUG:__main__:b2sdk version is 1.15.0
DEBUG:__main__:locale is ('en_US', 'UTF-8')
DEBUG:__main__:filesystem encoding is utf-8
DEBUG:b2sdk.v2.api:calling B2Session(account_info=None, cache=None, api_config=<b2sdk.api_config.B2HttpApiConfig object at 0x10e060b20>)
DEBUG:b2sdk.api:calling B2Session(account_info=None, cache=None, api_config=<b2sdk.api_config.B2HttpApiConfig object at 0x10e060b20>)
DEBUG:b2sdk.account_info.sqlite_account_info:calling SqliteAccountInfo._get_user_account_info_path(cls=<class 'b2sdk.account_info.sqlite_account_info.SqliteAccountInfo'>, file_name=None, profile=None)
DEBUG:b2sdk.account_info.sqlite_account_info:SqliteAccountInfo file path to use: /Users/--/.b2_account_info
DEBUG:b2sdk.account_info.upload_url_pool:calling UploadUrlPool()
DEBUG:b2sdk.account_info.upload_url_pool:calling UploadUrlPool()
DEBUG:b2sdk.account_info.upload_url_pool:calling UploadUrlPool()
DEBUG:b2sdk.account_info.upload_url_pool:calling UploadUrlPool()
DEBUG:b2sdk.api:calling FileVersionFactory(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>)
DEBUG:b2sdk.api:calling DownloadVersionFactory(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>)
DEBUG:b2sdk.v2.api:calling Services(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>, max_upload_workers=10, max_copy_workers=10, max_download_workers=None, save_to_buffer_size=None, check_download_hash=True)
DEBUG:b2sdk.api:calling Services(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>, max_upload_workers=10, max_copy_workers=10, max_download_workers=None, save_to_buffer_size=None, check_download_hash=True)
DEBUG:b2sdk.v2.transfer:calling LazyThreadPool(max_workers=10, kwargs=<class 'inspect._empty'>)
DEBUG:b2sdk.utils.thread_pool:calling ThreadPoolExecutor(max_workers=10, thread_name_prefix='', initializer=None, initargs=())
DEBUG:b2sdk.transfer.inbound.downloader.abstract:calling ThreadPoolExecutor(max_workers=10, thread_name_prefix='', initializer=None, initargs=())
DEBUG:b2sdk.v2.transfer:calling LazyThreadPool(max_workers=None, kwargs=<class 'inspect._empty'>)
DEBUG:b2sdk.transfer.inbound.download_manager:calling ParallelDownloader(min_part_size=104857600, max_streams=None, kwargs={'min_chunk_size': 8192, 'max_chunk_size': 1048576, 'align_factor': None, 'thread_pool': <b2sdk.v2.transfer.LazyThreadPool object at 0x10e0607c0>, 'check_hash': True})
DEBUG:b2sdk.transfer.inbound.download_manager:calling AbstractDownloader(thread_pool=<b2sdk.v2.transfer.LazyThreadPool object at 0x10e0607c0>, force_chunk_size=None, min_chunk_size=8192, max_chunk_size=1048576, align_factor=None, check_hash=True, kwargs=<class 'inspect._empty'>)
INFO:__main__:starting command [__main__.GetBucket] with arguments: ['b2', 'get-bucket', '--verbose', 'mybucket']
DEBUG:b2sdk.api:calling B2Api.list_buckets(self=<b2sdk.v2.api.B2Api object at 0x10e02f520>, bucket_name='mybucket', bucket_id=None)
DEBUG:b2sdk.api:calling B2Api.check_bucket_name_restrictions(self=<b2sdk.v2.api.B2Api object at 0x10e02f520>, bucket_name='mybucket')
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api000.backblazeb2.com:443
DEBUG:urllib3.connectionpool:https://api000.backblazeb2.com:443 "POST /b2api/v2/b2_list_buckets HTTP/1.1" 200 1002
DEBUG:b2sdk.v2.api:calling Bucket(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>, id_='65cdadd1ef03fad37dc80d12', name='mybucket', type_='allPrivate', bucket_info={}, cors_rules=[], lifecycle_rules=[{'daysFromHidingToDeleting': 1, 'daysFromUploadingToHiding': None, 'fileNamePrefix': ''}], revision=3, bucket_dict={'accountId': '5dd1f3a3d8d2', 'bucketId': '65cdadd1ef03fad37dc80d12', 'bucketInfo': {}, 'bucketName': 'mybucket', 'bucketType': 'allPrivate', 'corsRules': [], 'defaultServerSideEncryption': {'isClientAuthorizedToRead': True, 'value': {'algorithm': 'AES256', 'mode': 'SSE-B2'}}, 'fileLockConfiguration': {'isClientAuthorizedToRead': True, 'value': {'defaultRetention': {'mode': None, 'period': None}, 'isFileLockEnabled': False}}, 'lifecycleRules': [{'daysFromHidingToDeleting': 1, 'daysFromUploadingToHiding': None, 'fileNamePrefix': ''}], 'options': ['s3'], 'replicationConfiguration': {'isClientAuthorizedToRead': True, 'value': None}, 'revision': 3}, options_set={'s3'}, default_server_side_encryption=<EncryptionSetting(EncryptionMode.SSE_B2, EncryptionAlgorithm.AES256, None)>, default_retention=BucketRetentionSetting(None, None), is_file_lock_enabled=False)
DEBUG:b2sdk.api:calling Bucket(api=<b2sdk.v2.api.B2Api object at 0x10e02f520>, id_='65cdadd1ef03fad37dc80d12', name='mybucket', type_='allPrivate', bucket_info={}, cors_rules=[], lifecycle_rules=[{'daysFromHidingToDeleting': 1, 'daysFromUploadingToHiding': None, 'fileNamePrefix': ''}], revision=3, bucket_dict={'accountId': '5dd1f3a3d8d2', 'bucketId': '65cdadd1ef03fad37dc80d12', 'bucketInfo': {}, 'bucketName': 'mybucket', 'bucketType': 'allPrivate', 'corsRules': [], 'defaultServerSideEncryption': {'isClientAuthorizedToRead': True, 'value': {'algorithm': 'AES256', 'mode': 'SSE-B2'}}, 'fileLockConfiguration': {'isClientAuthorizedToRead': True, 'value': {'defaultRetention': {'mode': None, 'period': None}, 'isFileLockEnabled': False}}, 'lifecycleRules': [{'daysFromHidingToDeleting': 1, 'daysFromUploadingToHiding': None, 'fileNamePrefix': ''}], 'options': ['s3'], 'replicationConfiguration': {'isClientAuthorizedToRead': True, 'value': None}, 'revision': 3}, options_set={'s3'}, default_server_side_encryption=<EncryptionSetting(EncryptionMode.SSE_B2, EncryptionAlgorithm.AES256, None)>, default_retention=BucketRetentionSetting(None, None), is_file_lock_enabled=False)
DEBUG:b2sdk.account_info.sqlite_account_info:calling SqliteAccountInfo.save_bucket(self=<b2sdk.account_info.sqlite_account_info.SqliteAccountInfo object at 0x10e060d90>, bucket=Bucket<65cdadd1ef03fad37dc80d12,mybucket,allPrivate>)
INFO:__main__:\\ ======================================== exit=0 ======================================== //
{
    "accountId": "5dd1f3a3d8d2",
    "bucketId": "65cdadd1ef03fad37dc80d12",
    "bucketInfo": {},
    "bucketName": "mybucket",
    "bucketType": "allPrivate",
    "corsRules": [],
    "defaultRetention": {
        "mode": null
    },
    "defaultServerSideEncryption": {
        "algorithm": "AES256",
        "mode": "SSE-B2"
    },
    "isFileLockEnabled": false,
    "lifecycleRules": [
        {
            "daysFromHidingToDeleting": 1,
            "daysFromUploadingToHiding": null,
            "fileNamePrefix": ""
        }
    ],
    "options": [
        "s3"
    ],
    "revision": 3
}
```
Hi,
I was running into a similar issue and suspected it could be related to the endpoint configuration in the Terraform provider. After removing the endpoint entry, it worked fine.
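In other words, a minimal sketch of the configuration that worked for me (key values are placeholders):

```hcl
provider "b2" {
  application_key_id = "mykeyid"
  application_key    = "mykey"
  # endpoint = "https://s3.us-west-000.backblazeb2.com"  # removing this line made import work
}
```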
@ppolewicz
I decided to look into this a little bit further, hoping maybe I could help fix a bug 😄
It would appear that the endpoint configuration does not refer to the S3 API endpoint, but to the realm used when calling B2Api.authorize_account.
If I am not mistaken, endpoint is being used here and is therefore not related to the S3 API at all.
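A minimal sketch of what I mean, assuming the b2sdk v2 API (credentials are placeholders):

```python
# Sketch, assuming b2sdk v2: the first argument to authorize_account is the
# realm (normally "production"), not an S3 endpoint URL.
from b2sdk.v2 import B2Api, InMemoryAccountInfo

api = B2Api(InMemoryAccountInfo())

# Works: "production" selects the standard B2 API base URL.
api.authorize_account("production", "myKeyId", "myKey")

# Passing an S3 endpoint URL as the realm instead would make the SDK POST
# b2_authorize_account to the S3 host, which responds with a non-JSON body
# and produces the JSONDecodeError seen in the tracebacks above.
```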
This is a little confusing and I think it would be best to clear up the documentation.
Thanks
Are you saying that the endpoint used for realm selection gets mixed up with the S3 endpoint?
Yes - for most people working with B2, 'endpoint' means the S3 endpoint shown in the bucket details. I think renaming endpoint to realm or environment would make it much clearer.
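For illustration, a hypothetical sketch of the suggested rename (no realm attribute exists in the provider today; this only shows what the suggestion might look like):

```hcl
provider "b2" {
  application_key_id = "mykeyid"
  application_key    = "mykey"
  # Hypothetical attribute; the provider currently calls this "endpoint".
  realm              = "production"
}
```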