
EFS access_point posix_user: How to add task_definition volume without CannotCreateContainerError: operation not permitted (UID:GID mismatch)

Cameronsplaze opened this issue 5 months ago · 3 comments

I have an EC2-backed ECS cluster that runs a single container/task. The task binds an EFS access point for persistent storage, and I want all files on the EFS to be owned by 1000:1000 by default. The usual way to do this is to create an EFS access_point with a posix_user, but what do you do when the directory inside the container is owned by 0:0? (The core of the problem is in the Volumes class near the end of the reproducible example.)
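For reference, here's that pattern condensed from the Volumes class in the reproducible example below:

# Condensed from the Volumes class below: the access point creates its root
# directory owned by 1000:1000 and enforces that POSIX identity for all
# requests made through it.
acl = efs.Acl(owner_uid="1000", owner_gid="1000", permissions="755")
access_point = efs_file_system.add_access_point(
    "AccessPoint",
    create_acl=acl,
    path=config["Path"],
    posix_user=efs.PosixUser(uid="1000", gid="1000"),
)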

The Error

If the directory you're trying to mount over INSIDE the container already exists and is owned by 1000:1000, everything works great! But if it's owned by 0:0 instead, you get the following error (indented just for readability):

CannotCreateContainerError:
    Error response from daemon:
        failed to copy file info for
            /var/lib/ecs/volumes/ecs-PosixBugStackValheimContainerNestedStackTaskDefinition8ABA8AF7-2-config-b4b9d9998e91d583f801:
        failed to chown
            /var/lib/ecs/volumes/ecs-PosixBugStackValheimContainerNestedStackTaskDefinition8ABA8AF7-2-config-b4b9d9998e91d583f801:
        lchown
            /var/lib/ecs/volumes/ecs-PosixBugStackValheimContainerNestedStackTaskDefinition8ABA8AF7-2-config-b4b9d9998e91d583f801:
        operation not permitted

The few things I could find on this error say to make sure the UID:GID values match, but is there anything you can do when the container is a third-party image you can't change?

  • This GH Issue seems like a similar problem; does ECS/EFS have an equivalent "no_copy" flag I can test with?
  • Is there a way I can remove the pre-existing /data container path before mounting the access_point, so the two don't conflict?
  • Is there a way to add a second access_point for /, move the posix_user onto it instead of the first access point, and have it force all the files to match its POSIX permissions? (A condensed sketch follows this list.)
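A condensed sketch of what that last bullet means, reusing the file system from the reproducible example below (only the path differs, and the construct id is hypothetical):

root_access_point = self.efs_file_system.add_access_point(
    "RootAccessPoint",  # hypothetical construct id
    path="/",
    posix_user=efs.PosixUser(uid="1000", gid="1000"),
)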

What I've Tried

I've been at this for about a month, but here's what I remember trying:

  • A second access point with posix_user, removing posix_user from the original access point: this lets the container start up, but the files are still owned by root when I access them through the second access point. Only files created through the second access point end up with the right permissions.
  • Just chowning the files: the problem is when to do it. If I do it when the EC2 instance starts, any files the container creates afterwards won't have the right permissions until the next restart. (A sketch of this attempt follows the list.)
  • Adding ecs.Capability.ALL to the container's Linux parameters, in case it was a kernel permissions issue instead: no luck.
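Here's roughly what the chown attempt looked like. build_chown_user_data and its file_system_id parameter are hypothetical names; the result would be passed as user_data to the launch template in the example below:

from aws_cdk import aws_ec2 as ec2

def build_chown_user_data(file_system_id: str) -> ec2.UserData:
    # Hypothetical sketch of the "chown at instance start" attempt above.
    # Assumes amazon-efs-utils is installed so "mount -t efs" works.
    user_data = ec2.UserData.for_linux()
    user_data.add_commands(
        "mkdir -p /mnt/efs",
        f"mount -t efs -o tls {file_system_id}:/ /mnt/efs",
        "chown -R 1000:1000 /mnt/efs",
        "umount /mnt/efs",
    )
    return user_data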

Reproducible CDK Code

I tried to make it as small as I could, but since you need an ECS cluster / ASG, and those need a VPC, it snowballed a bit. Sorry about that.

The top of this file has two configs. The one that isn't commented out produces the error above; switch to the commented-out one to verify that everything works when the container already has 1000:1000 permissions. Both containers are third-party images.

./app.py:

#!/usr/bin/env python3
import os

import aws_cdk as cdk

from aws_posix_bug.aws_posix_bug_stack import AwsPosixBugStack, config


app = cdk.App()
AwsPosixBugStack(app,
    f"PosixBugStack-{config["Id"]}",
    env=cdk.Environment(
        account=os.getenv('CDK_DEFAULT_ACCOUNT'),
        region=os.getenv('CDK_DEFAULT_REGION'),
    ),
)

app.synth()

./aws_posix_bug/aws_posix_bug_stack.py:

from aws_cdk import (
    Stack,
    RemovalPolicy,
    NestedStack,
    aws_ecs as ecs,
    aws_ec2 as ec2,
    aws_efs as efs,
    aws_iam as iam,
    aws_autoscaling as autoscaling,
)
from constructs import Construct


# config = {
#     "Id": "Minecraft",
#     "Image": "itzg/minecraft-server",
#     "Environment": {
#         "EULA": "True",
#     },
#     "Path": "/data",
# }

config = {
    "Id": "Valheim",
    "Image": "lloesche/valheim-server",
    "Environment": {},
    "Path": "/config",
}

### Nested Stack info:
# https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.NestedStack.html
class Container(NestedStack):
    def __init__(
        self,
        scope: Construct,
        **kwargs
    ) -> None:
        super().__init__(scope, "ContainerNestedStack", **kwargs)
        container_id_alpha = "".join(e for e in config["Id"].title() if e.isalpha())
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.TaskDefinition.html
        self.task_definition = ecs.Ec2TaskDefinition(self, "TaskDefinition")

        ## Details for add_container:
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.TaskDefinition.html#addwbrcontainerid-props
        ## And what it returns:
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.ContainerDefinition.html
        self.container = self.task_definition.add_container(
            container_id_alpha,
            command=["sleep", "infinity"],
            image=ecs.ContainerImage.from_registry(config["Image"]),
            essential=True,
            memory_reservation_mib=2*1024,
            environment=config["Environment"],
        )

class Volumes(NestedStack):
    def __init__(
        self,
        scope: Construct,
        vpc: ec2.Vpc,
        task_definition: ecs.Ec2TaskDefinition,
        container: ecs.ContainerDefinition,
        sg_efs_traffic: ec2.SecurityGroup,
        **kwargs,
    ) -> None:
        super().__init__(scope, "VolumesNestedStack", **kwargs)

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_efs.AccessPointOptions.html#createacl
        self.efs_ap_acl = efs.Acl(owner_uid="1000", owner_gid="1000", permissions="755")
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_efs.PosixUser.html
        posix_user = efs.PosixUser(uid=self.efs_ap_acl.owner_uid, gid=self.efs_ap_acl.owner_gid)

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_efs.FileSystem.html
        self.efs_file_system = efs.FileSystem(
            self,
            f"Efs-{config["Id"]}",
            vpc=vpc,
            removal_policy=RemovalPolicy.DESTROY,
            security_group=sg_efs_traffic,
            allow_anonymous_access=False,
            enable_automatic_backups=False,
            encrypted=True,
        )
        self.efs_file_system.grant_read_write(task_definition.task_role)
        access_point_name = config["Path"].strip("/").replace("/", "-")
        ## Creating an access point:
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_efs.FileSystem.html#addwbraccesswbrpointid-accesspointoptions
        ## What it returns:
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_efs.AccessPoint.html
        container_access_point = self.efs_file_system.add_access_point(
            access_point_name,
            create_acl=self.efs_ap_acl,
            path=config["Path"],
            posix_user=posix_user,
        )

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.TaskDefinition.html#aws_cdk.aws_ecs.TaskDefinition.add_volume
        task_definition.add_volume(
            name=access_point_name,
            # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.EfsVolumeConfiguration.html
            efs_volume_configuration=ecs.EfsVolumeConfiguration(
                file_system_id=self.efs_file_system.file_system_id,
                # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.AuthorizationConfig.html
                authorization_config=ecs.AuthorizationConfig(
                    access_point_id=container_access_point.access_point_id,
                    iam="ENABLED",
                ),
                transit_encryption="ENABLED",
            ),
        )
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.ContainerDefinition.html#addwbrmountwbrpointsmountpoints
        container.add_mount_points(
            ecs.MountPoint(
                container_path=config["Path"],
                source_volume=access_point_name,
                read_only=False,
            )
        )

class EcsAsg(NestedStack):
    def __init__(
        self,
        scope: Construct,
        leaf_construct_id: str,
        vpc: ec2.Vpc,
        task_definition: ecs.Ec2TaskDefinition,
        sg_container_traffic: ec2.SecurityGroup,
        efs_file_system: efs.FileSystem,
        **kwargs,
    ) -> None:
        super().__init__(scope, "EcsAsgNestedStack", **kwargs)

        self.ecs_cluster = ecs.Cluster(
            self,
            "EcsCluster",
            cluster_name=f"{leaf_construct_id}-ecs-cluster",
            vpc=vpc,
        )

        self.ec2_role = iam.Role(
            self,
            "Ec2ExecutionRole",
            assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),
            description="The instance's permissions (HOST of the container)",
        )
        self.ec2_role.add_managed_policy(iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonEC2ContainerServiceforEC2Role"))
        efs_file_system.grant_root_access(self.ec2_role)

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.LaunchTemplate.html
        asg_launch_template = ec2.LaunchTemplate(
            self,
            "AsgLaunchTemplate",
            instance_type=ec2.InstanceType("m5.large"),
            machine_image=ecs.EcsOptimizedImage.amazon_linux2023(),
            security_group=sg_container_traffic,
            role=self.ec2_role,
            http_tokens=ec2.LaunchTemplateHttpTokens.REQUIRED,
            require_imdsv2=True,
        )

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_autoscaling.AutoScalingGroup.html
        self.auto_scaling_group = autoscaling.AutoScalingGroup(
            self,
            "Asg",
            vpc=vpc,
            launch_template=asg_launch_template,
            min_capacity=0,
            max_capacity=1,
            new_instances_protected_from_scale_in=False,
        )

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.AsgCapacityProvider.html
        self.capacity_provider = ecs.AsgCapacityProvider(
            self,
            "AsgCapacityProvider",
            capacity_provider_name=f"{config["Id"]}-capacity-provider",
            auto_scaling_group=self.auto_scaling_group,
            enable_managed_termination_protection=False,
            enable_managed_draining=False,
            enable_managed_scaling=False,
        )
        self.ecs_cluster.add_asg_capacity_provider(self.capacity_provider)

        ## This creates a service using the EC2 launch type on an ECS cluster
        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ecs.Ec2Service.html
        self.ec2_service = ecs.Ec2Service(
            self,
            "Ec2Service",
            cluster=self.ecs_cluster,
            task_definition=task_definition,
            enable_ecs_managed_tags=True,
            daemon=True,
            min_healthy_percent=0,
            max_healthy_percent=100,
        )

class AwsPosixBugStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.Vpc.html
        self.vpc = ec2.Vpc(
            self,
            "Vpc",
            nat_gateways=0,
            max_azs=1,
            subnet_configuration=[
                ec2.SubnetConfiguration(
                    name=f"public-{construct_id}-sn",
                    subnet_type=ec2.SubnetType.PUBLIC,
                )
            ],
            restrict_default_security_group=True,
        )

        # https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_ec2.SecurityGroup.html
        self.sg_container_traffic = ec2.SecurityGroup(
            self,
            "SgContainerTraffic",
            vpc=self.vpc,
            allow_all_outbound=True,
        )
        self.sg_efs_traffic = ec2.SecurityGroup(
            self,
            "SgEfsTraffic",
            vpc=self.vpc,
            allow_all_outbound=False,
        )
        self.sg_efs_traffic.connections.allow_from(
            self.sg_container_traffic,
            port_range=ec2.Port.tcp(2049),
        )
        ## Tie the Nested Stacks Finally:
        self.container_nested_stack = Container(
            self,
            description=f"Container Logic for {construct_id}",
        )
        self.volumes_nested_stack = Volumes(
            self,
            description=f"Volume Logic for {construct_id}",
            vpc=self.vpc,
            task_definition=self.container_nested_stack.task_definition,
            container=self.container_nested_stack.container,
            sg_efs_traffic=self.sg_efs_traffic,
        )
        self.ecs_asg_nested_stack = EcsAsg(
            self,
            description=f"Ec2Service Logic for {construct_id}",
            leaf_construct_id=construct_id,
            vpc=self.vpc,
            task_definition=self.container_nested_stack.task_definition,
            sg_container_traffic=self.sg_container_traffic,
            efs_file_system=self.volumes_nested_stack.efs_file_system,
        )

Thank you so much!

Cameronsplaze avatar Jul 06 '25 19:07 Cameronsplaze

Hi @Cameronsplaze. Thanks for opening this issue.

Is there a way I can remove the pre-existing /data container path before mounting the access_point, so the two don't conflict?

So it seems like you don't care about the contents of the pre-existing directory in the container image. Is creating your own container image on top of the third-party image an option for you? In your image you could simply delete that directory.
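For example, a hypothetical sketch, where ./wrapper is an assumed local directory containing a two-line Dockerfile:

from aws_cdk import aws_ecs as ecs

# Hypothetical sketch: ./wrapper is an assumed local directory whose
# Dockerfile builds on the third-party image and deletes the directory, e.g.
#   FROM lloesche/valheim-server
#   RUN rm -rf /config
# CDK builds and publishes the image as an asset at deploy time.
image = ecs.ContainerImage.from_asset("./wrapper")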

amogh09 avatar Jul 17 '25 23:07 amogh09

Hi @amogh09!

So it seems like you don't care about the contents of the pre-existing directory in the container image

Correct. The directory is empty anyway. With Docker's mounts/volumes, it normally just mounts over the existing path. It's confusing to me that EFS task mounts don't behave the same way.

Is creating your own container image on top of the third-party image an option for you?

The problem is that I'd have to maintain a Dockerfile for every image I want to support, and it'd be much harder for people to spin up their own stack if I haven't added that specific Dockerfile yet.

If extra context helps, the project this is for is AWS-ContainerManager. It has per-container configs in ./Examples that contain the image name. I want people to be able to easily add configs for other containers that aren't in there yet.

I'd have to somehow generate the Dockerfile automatically, build the container, and then have that be what the task runs. Since I want the task to always run the latest image version, the only way that might work would be to have the EC2 instance generate and build the Dockerfile in its user_data, and somehow have the task start afterwards (see the sketch below). Even if it worked, it feels very hacky, and I suspect it would drastically increase task startup time.
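Something like this hypothetical sketch, with the image name and path hardcoded purely for illustration:

from aws_cdk import aws_ec2 as ec2

# Hypothetical sketch of the user_data idea above: generate a wrapper
# Dockerfile on the instance and build it before the task starts. The image
# name and path are hardcoded here only for illustration.
user_data = ec2.UserData.for_linux()
user_data.add_commands(
    "mkdir -p /opt/wrapper",
    "printf 'FROM lloesche/valheim-server\\nRUN rm -rf /config\\n' > /opt/wrapper/Dockerfile",
    "docker build -t local/wrapper /opt/wrapper",
)

The task definition would then have to point at local/wrapper instead of the upstream image, which is exactly the fragile coupling I'm worried about.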

Cameronsplaze avatar Jul 18 '25 18:07 Cameronsplaze

An easier way of saying all that: the image URI is "user input". My bad.

Thinking this over more, generating the Dockerfile is easy, and I can build it in the EC2's user_data to keep things up to date. I'd have to build over the existing image that the task is about to run, but that might let it "just work (tm)".

Can I ask whether the original error is expected, then? Normally you can mount over existing directories, and this one is even empty. I've only ever seen containers that expect a mount also ship a pre-existing directory there, so they still work if you don't mount a volume in. It just caught me off guard. The above might work, but it still feels more fragile than I'd like.

Cameronsplaze avatar Jul 30 '25 18:07 Cameronsplaze