noobaa-core
Add S3 GetObjectAttributes API Implementation
Explain the changes
- Create the file `s3_get_object_attributes.js` and add the API in `src/endpoint/s3/ops/index.js`, `nb.d.ts`, and `object_sdk.js`, and add the implementation in every namespace, which reuses `read_object_md` and removes the properties that are not in its schema. In `s3_rest.js`, add `attributes` to `OBJECT_SUB_RESOURCES` (otherwise the request was handled as a plain GetObject). See the routing sketch after this list.
- In `s3_bucket_policy_utils.js`, add the `get_object_attributes` permissions. Since this is the first case where one method maps to 2 permissions, the value is an array, and the code was changed to iterate over an array (see the function `is_statement_fit_of_method_array` and the policy sketch after this list). Therefore, `_.flatten` had to be added to ensure that the added array is also flattened. I also added a JSDoc comment in `s3_rest.js` to the function `_get_method_from_req`, since its return value may be an array and not just a string.
- In `s3_bucket_policy_utils.js`, add `break` in every loop once a match is found (there is no need to keep iterating after we found a match).
- In `namespace_fs.js`, in the function `_get_object_info`, add `|| undefined` so that when there is no version-id the value is `undefined` rather than `false` (a one-line sketch follows the list).
- In `src/upgrade/upgrade_scripts/5.15.6/upgrade_bucket_policy.js`, do not use the `get_object_attributes` key in the map, as it breaks the function `_create_actions_map`.
- In `object_server.js`, add the case for the permission check when a version-id is passed.
- In `namespace_s3`, for the function `_get_s3_object_info`, add the properties `checksum` and `object_parts`. To add them, they also had to be added in the file `src/sdk/nb.d.ts` under the `interface ObjectInfo`, with the definitions taken from AWS SDK V3 (where there was a type change, it is defined in the file itself). See the types sketch after this list.
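A minimal sketch of the routing change, assuming shapes that mirror the existing patterns in `src/endpoint/s3/ops/index.js` and `s3_rest.js` (the exact structures in the repo may differ):

```ts
// s3_rest.js (sketch): recognizing the ?attributes sub-resource makes
// GET /bucket/key?attributes resolve to the get_object_attributes op
// instead of falling through to plain get_object.
const OBJECT_SUB_RESOURCES: Record<string, string> = {
    // ...existing sub-resources (acl, tagging, uploads, ...)
    attributes: 'attributes',
};

// src/endpoint/s3/ops/index.js (sketch): register the new op module so the
// endpoint can dispatch to it by name.
const s3_ops = {
    // ...existing ops
    get_object_attributes: require('./s3_get_object_attributes'),
};
```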
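A hedged sketch of the two-permission mapping and the array-aware statement check; the map and function names follow this PR's description, but the signatures and exact matching semantics are illustrative:

```ts
import _ from 'lodash';

// Sketch: get_object_attributes is the first op mapped to TWO actions,
// so the map value becomes an array (matching the test policies below,
// which grant both s3:GetObject and s3:GetObjectAttributes).
const OP_TO_ACTIONS: Record<string, string | string[]> = {
    // ...existing single-action entries
    get_object_attributes: ['s3:GetObject', 's3:GetObjectAttributes'],
};

// Sketch of the array-aware check: iterate the methods and stop as soon as
// a match is found (the `break`s added in the PR serve the same goal).
function is_statement_fit_of_method_array(statement_actions: string[], methods: string[]): boolean {
    for (const method of methods) {
        for (const action of statement_actions) {
            if (action === method || action === 's3:*') return true;
        }
    }
    return false;
}

// _.flatten keeps the collected action list flat now that one map entry
// is itself an array.
const all_actions = _.flatten(Object.values(OP_TO_ACTIONS));
```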
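The `namespace_fs` fix is essentially a one-liner; a sketch of the intent, with hypothetical surrounding names:

```ts
// namespace_fs.js, _get_object_info() (sketch): when versioning is disabled
// there is no version-id; coercing the falsy value to undefined keeps the
// field out of the serialized object info instead of returning `false`.
// (versioning_enabled and get_version_id_from_stat are illustrative names.)
const version_id = (versioning_enabled && get_version_id_from_stat(stat)) || undefined;
```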
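A sketch of the two `ObjectInfo` additions in `src/sdk/nb.d.ts`; the property names come from the PR, while the inner shapes are approximations of the AWS SDK v3 `Checksum` and `GetObjectAttributesParts` types:

```ts
// src/sdk/nb.d.ts (sketch): fields added for GetObjectAttributes.
interface ObjectInfo {
    // ...existing properties (key, bucket, etag, size, version_id, ...)

    // Shaped after the AWS SDK v3 Checksum type.
    checksum?: {
        ChecksumCRC32?: string;
        ChecksumCRC32C?: string;
        ChecksumSHA1?: string;
        ChecksumSHA256?: string;
    };

    // Shaped after the AWS SDK v3 GetObjectAttributesParts type.
    object_parts?: {
        TotalPartsCount?: number;
        PartNumberMarker?: number;
        NextPartNumberMarker?: number;
        MaxParts?: number;
        IsTruncated?: boolean;
        Parts?: Array<{
            PartNumber?: number;
            Size?: number;
            ChecksumCRC32?: string;
            ChecksumCRC32C?: string;
            ChecksumSHA1?: string;
            ChecksumSHA256?: string;
        }>;
    };
}
```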
Issues: Fixed #8209
- The suggested fix is to implement it like HeadObject, with the returned XML body.
List of GAPs:
- Encryption is not internally used.
- Condition headers (`If-Match` and `If-Unmodified-Since`) were not tested specifically in this API (they were copied from HeadObject), and are not used in all namespaces.
- In `object_server` there is a hard-coded permission check; it might be redundant today, as `s3_rest` already checks permissions via the bucket policy (more details can be found in this comment). Note: it originates from `object_api`, where we have `anonymous: true`, and it appears in a few actions.
- Define all the headers that we work with as consts.
Testing Instructions:
Automatic Tests:
- `sudo NC_CORETEST=true node ./node_modules/mocha/bin/mocha ./src/test/unit_tests/test_s3_bucket_policy.js -g 'bucket policy on get object attributes'`
- `sudo NC_CORETEST=true node ./node_modules/mocha/bin/mocha ./src/test/unit_tests/test_bucketspace_versioning.js`
- `make run-single-test testname=test_s3_bucket_policy.js CONTAINER_PLATFORM=linux/arm64`
- `make run-single-test testname=test_s3_ops.js CONTAINER_PLATFORM=linux/arm64` (currently, a local run shows 2 errors that are not directly related: (1) "before all" hook for "should tag text file": RestError: The API version 2024-08-04 is not supported by Azurite; (2) "after all" hook for "should getObjectAttributes": NoSuchBucket: The specified bucket does not exist (the bucket was copied from the previous test)).
Manual Tests:
NC
1) A basic test on a bucket with versioning disabled:
- Create an account with the CLI: `sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>`
  Note: before creating the account, give permissions to the `new_buckets_path`: `chmod 777 /tmp/nsfs_root1`, `chmod 777 /tmp/nsfs_root2`.
- Start the NSFS server with: `sudo node src/cmd/nsfs --debug 5`
  Note: I changed `config.NSFS_CHECK_BUCKET_BOUNDARIES = false;` because I'm using `/tmp/` and not `/private/tmp/`.
- Create the alias for the S3 service: `alias nc-user-1-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'`
- Check the connection to the endpoint and try to list the buckets (should be empty): `nc-user-1-s3 s3 ls; echo $?`
- Add a bucket to the account using the AWS CLI: `nc-user-1-s3 s3 mb s3://bucket-1` (`bucket-1` is the bucket name in this example)
- Put an object: `nc-user-1-s3 s3api put-object --bucket bucket-1 --key hello.txt`
- Get object attributes: `nc-user-1-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --object-attributes "StorageClass" "ETag" "ObjectSize"`
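For the zero-byte object created above, the CLI output looks roughly like this (illustrative; the ETag shown is the MD5 of an empty body, which GetObjectAttributes returns without surrounding quotes):

```json
{
    "LastModified": "2024-08-04T10:00:00+00:00",
    "StorageClass": "STANDARD",
    "ETag": "d41d8cd98f00b204e9800998ecf8427e",
    "ObjectSize": 0
}
```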
2) A basic test on a bucket with versioning enabled:
- Put bucket versioning Enabled: `nc-user-1-s3 s3api put-bucket-versioning --bucket bucket-1 --versioning-configuration Status=Enabled`
- Put an object: `nc-user-1-s3 s3api put-object --bucket bucket-1 --key hello.txt` (save the version-id in the output)
- Get object attributes with version-id: `nc-user-1-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"`
3) A basic test on a bucket with a bucket policy:
- Create an additional account: `sudo node src/cmd/manage_nsfs account add --name <account-name> --new_buckets_path /tmp/nsfs_root1 --access_key <access-key> --secret_key <secret-key> --uid <uid> --gid <gid>`
- Create the alias for the S3 service: `alias nc-user-2-s3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:6443'`
- Add a bucket policy: `nc-user-1-s3 s3api put-bucket-policy --bucket bucket-1 --policy file://policy.json`
policy.json:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": [ "<account-name-2>" ] },
            "Action": ["s3:GetObject", "s3:GetObjectAttributes"],
            "Resource": [ "arn:aws:s3:::bucket-1/*", "arn:aws:s3:::bucket-1" ]
        }
    ]
}
```
- Call get object attributes (from account 2): `nc-user-2-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --object-attributes "StorageClass" "ETag" "ObjectSize"`
- Add a bucket policy (versioned): `nc-user-1-s3 s3api put-bucket-policy --bucket bucket-1 --policy file://policy2.json`
policy2.json:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": [ "<account-name-2>" ] },
            "Action": ["s3:GetObjectVersion", "s3:GetObjectVersionAttributes"],
            "Resource": [ "arn:aws:s3:::bucket-1/*", "arn:aws:s3:::bucket-1" ]
        }
    ]
}
```
- Call get object attributes with version-id: `nc-user-2-s3 s3api get-object-attributes --bucket bucket-1 --key hello.txt --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"`
Containerized:
- Build the images and install a NooBaa system on Rancher Desktop (see guide).
  Note: `nb` is an alias that runs the local operator from `build/_output/bin` (alias created by `devenv`).
- Wait for the default backing store pod to be in state Ready before starting the tests: `kubectl wait --for=condition=available backingstore/noobaa-default-backing-store --timeout=6m -n test1`
- I'm using port-forward (in a different tab): `kubectl port-forward -n test1 service/s3 12443:443`
- Create the alias for the admin. First, get the credentials: `nb status --show-secrets -n test1` and then `alias s3-nb-user-1='AWS_ACCESS_KEY_ID=JGytelEGz3TzRWyUONZf AWS_SECRET_ACCESS_KEY=Xvu+qIexs2UXwQUN0H2vJ5QJuqXMMnjiuTzgPr0i aws --no-verify-ssl --endpoint-url https://localhost:12443'`
- Check the connection to the endpoint and try to list the buckets (should contain `first.bucket`): `s3-nb-user-1 s3 ls; echo $?`
- Create the second account and create its alias: `nb account create user2 -n test1`, then `nb account status user2 -n test1 --show-secrets` for the credentials, and then `alias s3-nb-user-2='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:12443'`
- Create a new bucket: `s3-nb-user-1 s3 mb s3://bucket1`
- Add bucket versioning: `s3-nb-user-1 s3api put-bucket-versioning --bucket bucket1 --versioning-configuration Status=Enabled`
- Put an object: `s3-nb-user-1 s3api put-object --bucket bucket1 --key nice_day2` (save the version-id, we will use it)
- Check access to the new bucket by the new account (should be Access Denied): `s3-nb-user-2 s3 ls s3://bucket1`
- Add a bucket policy: `s3-nb-user-1 s3api put-bucket-policy --bucket bucket1 --policy file://policy.json`
policy.json:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": [ "user2" ] },
            "Action": ["s3:GetObject", "s3:GetObjectAttributes"],
            "Resource": [ "arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket1" ]
        }
    ]
}
```
- Call get object attributes (from account 2): `s3-nb-user-2 s3api get-object-attributes --bucket bucket1 --key nice_day2 --object-attributes "StorageClass" "ETag" "ObjectSize"`
- Add a bucket policy (versioned): `s3-nb-user-1 s3api put-bucket-policy --bucket bucket1 --policy file://policy2.json`
policy2.json:
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": [ "user2" ] },
            "Action": ["s3:GetObjectVersion", "s3:GetObjectVersionAttributes"],
            "Resource": [ "arn:aws:s3:::bucket1/*", "arn:aws:s3:::bucket1" ]
        }
    ]
}
```
- Call get object attributes with version-id: `s3-nb-user-2 s3api get-object-attributes --bucket bucket1 --key nice_day2 --version-id <version-id-from-previous-call> --object-attributes "StorageClass" "ETag" "ObjectSize"`
Namespace S3:
- Create a namespacestore of type aws-s3: `nb namespacestore create aws-s3 ns-shira-aws -n test1`
- Create a bucketclass: `nb bucketclass create namespace-bucketclass single bc1 --resource=ns-shira-aws -n test1`
- Create an OBC: `nb obc create obc1 --bucketclass=bc1 -n test1`
- Create the alias (credentials from the printed output): `alias s3-nb-user-3='AWS_ACCESS_KEY_ID=<access-key> AWS_SECRET_ACCESS_KEY=<secret-key> aws --no-verify-ssl --endpoint-url https://localhost:12443'`
- Put an object: `s3-nb-user-3 s3api put-object --bucket obc1-0328429f-439b-42e2-8292-cc87c6f187e3 --key mimi`
- Get object attributes: `s3-nb-user-3 s3api get-object-attributes --bucket obc1-0328429f-439b-42e2-8292-cc87c6f187e3 --key mimi --object-attributes "StorageClass" "ETag" "ObjectSize"`
- [X] Doc added/updated
- [X] Tests added