
Add S3 GetObjectAttributes API Implementation


Explain the changes

  1. Create the file s3_get_object_attributes.js, register the API in src/endpoint/s3/ops/index.js, nb.d.ts, and object_sdk.js, and implement it in every namespace; the implementation reuses read_object_md and removes properties that are not in its schema. In s3_rest.js, add attributes to OBJECT_SUB_RESOURCES (otherwise the request was handled as GetObject).
  2. In s3_bucket_policy_utils.js, add the get_object_attributes permissions. Since this is the first case where a single method maps to 2 permissions, the entry is an array, and the code was changed to iterate over an array (see the function is_statement_fit_of_method_array); _.flatten was added to ensure the added array is flattened as well. A JSDoc comment was also added to _get_method_from_req in s3_rest.js, since its result may be an array and not just a string. An illustrative sketch follows this list.
  3. In s3_bucket_policy_utils.js, add a break in every loop once a match is found (there is no need to keep iterating after a match).
  4. In namespace_fs.js, in the function _get_object_info, add || undefined so that when there is no version-id the value is undefined rather than false.
  5. In src/upgrade/upgrade_scripts/5.15.6/upgrade_bucket_policy.js, do not use the get_object_attributes key in the map, as it breaks the function _create_actions_map.
  6. In object_server.js, add the permission-check case for requests that pass a version-id.
  7. In namespace_s3, using the function _get_s3_object_info required the properties checksum and object_parts; they were added to the ObjectInfo interface in src/sdk/nb.d.ts, with definitions taken from AWS SDK V3 (one type change is defined in the file itself).
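
A minimal sketch of the permissions-array handling described in items 2-4, for illustration only; the map shape, helper name, and the normalization function below are simplified stand-ins and not the exact noobaa-core code:

// Illustration: a single S3 op may now require an array of permissions.
const OP_TO_PERMISSIONS = {
    get_object: ['s3:GetObject'],
    // GetObjectAttributes is the first op that maps to two permissions at once.
    get_object_attributes: ['s3:GetObject', 's3:GetObjectAttributes'],
};

// Simplified stand-in for the idea behind is_statement_fit_of_method_array:
// a statement fits if any of the required permissions matches one of its actions.
function statement_fits_any_method(statement_actions, required_permissions) {
    for (const permission of required_permissions) {
        if (statement_actions.includes(permission)) {
            return true; // stop early once a match is found (item 3)
        }
    }
    return false;
}

// Version-id normalization from item 4: prefer undefined over a falsy value.
function normalize_version_id(raw_version_id) {
    return raw_version_id || undefined;
}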

Issues: Fixed #8209

  1. The fix suggested there is to build on "HeadObject" and add the returned XML body.

List of GAPs:

  • Encryption is not used internally.
  • Condition headers (If-Match and If-Unmodified-Since) were not tested specifically for this API (they were copied from HeadObject), and are not used in all namespaces.
  • In object_server there is a hard-coded permission check; it might be redundant today, as s3_rest already checks permissions against the bucket policy (more details can be found in this comment). Note: it originates from object_api, where we have anonymous: true, and it appears in a few actions.
  • Define all the headers that we work with as const (a sketch follows this list).
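
A minimal sketch of the header-constants gap above; the module and constant names are hypothetical, while the header strings are the standard ones used by GetObjectAttributes:

// Hypothetical constants module for the headers this API works with.
// 'x-amz-object-attributes' is the request header listing the requested attributes.
const X_AMZ_OBJECT_ATTRIBUTES = 'x-amz-object-attributes';
const X_AMZ_VERSION_ID = 'x-amz-version-id';
const X_AMZ_DELETE_MARKER = 'x-amz-delete-marker';
const LAST_MODIFIED = 'last-modified';

module.exports = {
    X_AMZ_OBJECT_ATTRIBUTES,
    X_AMZ_VERSION_ID,
    X_AMZ_DELETE_MARKER,
    LAST_MODIFIED,
};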

Testing Instructions:

Automatic Tests:

  1. sudo NC_CORETEST=true node ./node_modules/mocha/bin/mocha ./src/test/unit_tests/test_s3_bucket_policy.js -g 'bucket policy on get object attributes'
  2. sudo NC_CORETEST=true node ./node_modules/mocha/bin/mocha ./src/test/unit_tests/test_bucketspace_versioning.js
  3. make run-single-test testname=test_s3_bucket_policy.js CONTAINER_PLATFORM=linux/arm64
  4. make run-single-test testname=test_s3_ops.js CONTAINER_PLATFORM=linux/arm64 (currently, a local run shows 2 errors that are not directly related: (1) "before all" hook for "should tag text file": RestError: The API version 2024-08-04 is not supported by Azurite; (2) "after all" hook for "should getObjectAttributes": NoSuchBucket: The specified bucket does not exist (the bucket was copied from the previous test)).

Manual Tests:

NC

1) A basic test on a bucket with versioning disabled:

  1. Create an account with the CLI: sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>. Note: before creating the account, grant permissions on the new_buckets_path: chmod 777 /tmp/nsfs_root1, chmod 777 /tmp/nsfs_root2.
  2. Start the NSFS server: sudo node src/cmd/nsfs --debug 5
  3. Note: I changed config.NSFS_CHECK_BUCKET_BOUNDARIES = false because I'm using /tmp/ and not /private/tmp/.
  4. Create the alias for the S3 service: alias nc-user-1-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'
  5. Check the connection to the endpoint and list the buckets (should be empty): nc-user-1-s3 s3 ls; echo $?
  6. Add a bucket to the account using the AWS CLI: nc-user-1-s3 s3 mb s3://bucket-1 (bucket-1 is the bucket name in this example)
  7. Put an object: nc-user-1-s3 s3api put-object --bucket bucket-1 --key hello.txt
  8. Get object attributes: nc-user-1-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --object-attributes "StorageClass" "ETag" "ObjectSize" (an equivalent AWS SDK v3 call is sketched below)
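
For reference, a minimal sketch of the same call through the AWS SDK v3 JavaScript client, assuming the NSFS endpoint above and the credentials of the account created in step 1 (the region value and placeholder keys are illustrative):

// Same request as step 8, via @aws-sdk/client-s3 (AWS SDK v3).
const { S3Client, GetObjectAttributesCommand } = require('@aws-sdk/client-s3');

// The local NSFS endpoint uses a self-signed certificate in this setup,
// so TLS verification is disabled for the example only.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';

const s3 = new S3Client({
    endpoint: 'https://localhost:6443',
    region: 'us-east-1', // placeholder region for the local endpoint
    forcePathStyle: true,
    credentials: {
        accessKeyId: '<access-key>',
        secretAccessKey: '<secret-key>',
    },
});

async function main() {
    const res = await s3.send(new GetObjectAttributesCommand({
        Bucket: 'bucket-1',
        Key: 'hello.txt',
        ObjectAttributes: ['StorageClass', 'ETag', 'ObjectSize'],
        // VersionId: '<version-id>', // add for the versioned tests below
    }));
    console.log(res);
}

main().catch(console.error);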

2) A basic test on a bucket with versioning enabled:

  9. Enable bucket versioning: nc-user-1-s3 s3api put-bucket-versioning --bucket bucket-1 --versioning-configuration Status=Enabled
  10. Put an object: nc-user-1-s3 s3api put-object --bucket bucket-1 --key hello.txt (save the version-id from the output)
  11. Get object attributes with version-id: nc-user-1-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"

3) A basic test on a bucket with a bucket policy:

  12. Create an additional account: sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>
  13. Create the alias for the S3 service: alias nc-user-2-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'
  14. Add a bucket policy: nc-user-1-s3 s3api put-bucket-policy --bucket bucket-1 --policy file://policy.json, with policy.json:

{
  "Version": "2012-10-17",
  "Statement": [ 
    { 
     "Effect": "Allow", 
     "Principal": { "AWS": [ "<account-name-2>" ] }, 
     "Action": ["s3:GetObject", "s3:GetObjectAttributes"], 
     "Resource": [ "arn:aws:s3:::bucket-1/*", "arn:aws:s3:::bucket-1" ] 
    }
  ]
}
  15. Call get object attributes (from account2): nc-user-2-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --object-attributes "StorageClass" "ETag" "ObjectSize"
  16. Add a bucket policy (versioned): nc-user-1-s3 s3api put-bucket-policy --bucket bucket-1 --policy file://policy2.json, with policy2.json:
{
  "Version": "2012-10-17",
  "Statement": [ 
    { 
     "Effect": "Allow", 
     "Principal": { "AWS": [ "<account-name-2>" ] }, 
     "Action": ["s3:GetObjectVersion", "s3:GetObjectVersionAttributes"], 
     "Resource": [ "arn:aws:s3:::bucket-1/*", "arn:aws:s3:::bucket-1" ] 
    }
  ]
}
  17. Call get object attributes with version-id: nc-user-2-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"

Containerized:

  1. Build the images and install NooBaa system on Rancher Desktop (see guide). Note: nb is an alias that runs the local operator from build/_output/bin (alias created by devenv).
  2. Wait for the default backing store pod to be in state Ready before starting the tests: kubectl wait --for=condition=available backingstore/noobaa-default-backing-store --timeout=6m -n test1
  3. I'm using port-forward (in a different tab): kubectl port-forward -n test1 service/s3 12443:443
  4. Create the alias for the admin. First, get the credentials: nb status --show-secrets -n test1, then: alias s3-nb-user-1='AWS_ACCESS_KEY_ID=JGytelEGz3TzRWyUONZf AWS_SECRET_ACCESS_KEY=Xvu+qIexs2UXwQUN0H2vJ5QJuqXMMnjiuTzgPr0i aws --no-verify-ssl --endpoint-url https://localhost:12443'
  5. Check the connection to the endpoint and try to list the buckets (should have first.bucket): s3-nb-user-1 s3 ls; echo $?
  6. Create the second account and create its alias: nb account create user2 -n test1 and then nb account status user2 -n test1 --show-secrets for the credentials and then alias s3-nb-user-2='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:12443'
  7. Create a new bucket: s3-nb-user-1 s3 mb s3://bucket1.
  8. Enable bucket versioning: s3-nb-user-1 s3api put-bucket-versioning --bucket bucket1 --versioning-configuration Status=Enabled
  9. Put an object: s3-nb-user-1 s3api put-object --bucket bucket1 --key nice_day2 (save the version-id; we will use it later).
  10. Check access to the new bucket by the new account (should be Access Denied): s3-nb-user-2 s3 ls s3://bucket1.
  11. Add a bucket policy: s3-nb-user-1 s3api put-bucket-policy --bucket bucket1 --policy file://policy.json, with policy.json:
{
  "Version": "2012-10-17",
  "Statement": [ 
    { 
     "Effect": "Allow", 
     "Principal": { "AWS": [ "user2" ] }, 
     "Action": ["s3:GetObject", "s3:GetObjectAttributes"], 
     "Resource": [ "arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket1" ] 
    }
  ]
}
  12. Call get object attributes (from account2): s3-nb-user-2 s3api get-object-attributes --bucket bucket1 --key nice_day2 --object-attributes "StorageClass" "ETag" "ObjectSize"
  13. Add a bucket policy (versioned): s3-nb-user-1 s3api put-bucket-policy --bucket bucket1 --policy file://policy2.json, with policy2.json:
{
  "Version": "2012-10-17",
  "Statement": [ 
    { 
     "Effect": "Allow", 
     "Principal": { "AWS": [ "user2" ] }, 
     "Action": ["s3:GetObjectVersion", "s3:GetObjectVersionAttributes"], 
     "Resource": [ "arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket1" ] 
    }
  ]
}
  14. Call get object attributes with version-id: s3-nb-user-2 s3api get-object-attributes --bucket bucket1 --key nice_day2 --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"

Namespace S3:

  15. Create a namespacestore of type aws-s3: nb namespacestore create aws-s3 ns-shira-aws -n test1
  16. Create a bucketclass: nb bucketclass create namespace-bucketclass single bc1 --resource=ns-shira-aws -n test1
  17. Create an OBC: nb obc create obc1 --bucketclass=bc1 -n test1
  18. Create the alias (credentials from the printed output): alias s3-nb-user-3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:12443'
  19. Put an object: s3-nb-user-3 s3api put-object --bucket obc1-0328429f-439b-42e2-8292-cc87c6f187e3 --key mimi
  20. Get object attributes: s3-nb-user-3 s3api get-object-attributes --bucket obc1-0328429f-439b-42e2-8292-cc87c6f187e3 --key mimi --object-attributes "StorageClass" "ETag" "ObjectSize"
  • [X] Doc added/updated
  • [X] Tests added
