
Force rebuilding image fails when image already exists and the default policy is set to reject.

tazerdev opened this issue 3 years ago · 19 comments

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Attempting to use containers.podman.podman_image with 'force: true' to force-rebuild an image results in a policy error when the default podman trust policy is set to reject and the image already exists. No error occurs if the image is not present.

Steps to reproduce the issue:

  1. Ensure the latest version of containers.podman is installed:
# /usr/bin/ansible-galaxy collection install containers.podman -p /usr/share/ansible/collections --force
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Downloading https://galaxy.ansible.com/download/containers-podman-1.9.4.tar.gz to /root/.ansible/tmp/ansible-local-1233626kir8_q47/tmpdwc63nme/containers-podman-1.9.4-mg8u1i32
Installing 'containers.podman:1.9.4' to '/usr/share/ansible/collections/ansible_collections/containers/podman'
containers.podman:1.9.4 was installed successfully
  2. As root, in a temporary directory, create a Dockerfile with the following content:
FROM registry.access.redhat.com/ubi7/ubi:latest
CMD echo "Hello World!"
  3. In the same directory, create a simple Ansible playbook:
---
- hosts: localhost
  connection: local

  tasks:
    - name: 'Building podman image: localhost/test'
      containers.podman.podman_image:
        name: localhost/test:latest
        path: '.'
        pull: false
        push: false
        state: build
        force: true
  4. Set podman image trust to accept by default:

podman image trust set -t accept default

  5. Ensure the image doesn't already exist:

podman rmi localhost/test

  6. Run the playbook twice; it completes successfully both times.

  7. Set podman image trust to reject by default:

podman image trust set -t reject default

  8. Ensure the image doesn't already exist:

podman rmi localhost/test

  9. Run the playbook twice; the first run succeeds while the second fails.
# ansible-playbook main.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit
localhost does not match 'all'

PLAY [localhost] ***********************************************************************************

TASK [Gathering Facts] *****************************************************************************
ok: [localhost]

TASK [Building podman image: localhost/test] *******************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to build image localhost/test:latest:  Error: error copying image \"eabfd5423d5ecfd3ae6ea11a73dcc88e7efc4f67d54cb955b7087a368102840b\": Source image rejected: Running image containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@eabfd5423d5ecfd3ae6ea11a73dcc88e7efc4f67d54cb955b7087a368102840b is rejected by policy.\n"}

PLAY RECAP *****************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0

Describe the results you received:

With the default policy set to accept, the initial run of the playbook builds the image and all subsequent runs complete successfully. With the default policy set to reject, the initial run builds the image; however, all subsequent runs fail.

Describe the results you expected:

I expected both runs of the task to behave the same as when podman's image trust policy is set to accept by default.

Additional information you deem important (e.g. issue happens only occasionally):

If this were due to a policy error, I would expect the initial creation of the image to fail as well as all rebuilds. I've not been able to find anything in the podman docs, or the containers.podman docs, that explains how to allow this behavior.

What I want is to restrict image pulls to signed images from Red Hat's registries while allowing images to be (re)built locally, with both pulls and builds driven by Ansible.
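For reference, a policy.json along these lines is roughly what I'm after. This is a sketch, not a tested configuration: the `containers-storage` entry is my guess at what would permit local (re)builds, and the key path is the usual Red Hat release key location:

```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "registry.access.redhat.com": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"}
      ],
      "registry.redhat.io": [
        {"type": "signedBy", "keyType": "GPGKeys",
         "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"}
      ]
    },
    "containers-storage": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}
```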

Output of podman version:

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.17.7
Built:        Mon Jul 11 10:56:53 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.2-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: 8c4f33ac0dcf558874b453d5027028b18d1502db'
  cpuUtilization:
    idlePercent: 99.64
    systemPercent: 0.13
    userPercent: 0.24
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.6"
  eventLogger: file
  hostname: testhost.local
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-372.19.1.el8_6.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 2033487872
  memTotal: 8140427264
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.3-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+15917+093ca6f8.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 8586993664
  swapTotal: 8589930496
  uptime: 337h 50m 42.89s (Approximately 14.04 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 1
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 26830438400
  graphRootUsed: 6634835968
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 148
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1657551413
  BuiltTime: Mon Jul 11 10:56:53 2022
  GitCommit: ""
  GoVersion: go1.17.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Package info (e.g. output of rpm -q podman or apt list podman):

# rpm -q podman
podman-4.1.1-2.module+el8.6.0+15917+093ca6f8.x86_64

Policy info (e.g. output of podman image trust show):

TRANSPORT      NAME                        TYPE        ID                                        STORE
all            default                     reject                                                
repository     registry.access.redhat.com  signed      [email protected], [email protected]  https://access.redhat.com/webassets/docker/content/sigstore
repository     registry.redhat.io          signed      [email protected], [email protected]  https://registry.redhat.io/containers/sigstore
docker-daemon                              accept

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

I've tested with the latest version available to the distribution. I've reviewed the Podman Troubleshooting Guide but found nothing specific to this error or policies in general.

Additional environment details (AWS, VirtualBox, physical, etc.):

Red Hat Enterprise Linux 8.6, fully patched and updated with FIPS enabled. I'm using the container-tools:rhel8 stream which is the rolling release and as updated as Red Hat allows.

tazerdev avatar Aug 26 '22 15:08 tazerdev

@vrothberg PTAL

rhatdan avatar Aug 29 '22 09:08 rhatdan

Can you try rebuilding with podman build --pull=never?

vrothberg avatar Aug 29 '22 10:08 vrothberg

Here you go, with default policy set to accept everything is fine:

# podman image trust set -t accept default

# podman rmi localhost/test
Error: localhost/test: image not known

# podman build --pull=never -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
COMMIT localhost/test
--> 9c648fb796e
Successfully tagged localhost/test:latest
9c648fb796ec07eec96904bcb22b79575bf1b6f45366523e5d036fb8ef457392

# podman build --pull=never -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
--> Using cache 9c648fb796ec07eec96904bcb22b79575bf1b6f45366523e5d036fb8ef457392
COMMIT localhost/test
--> 9c648fb796e
Successfully tagged localhost/test:latest
9c648fb796ec07eec96904bcb22b79575bf1b6f45366523e5d036fb8ef457392

With default policy set to reject we see the same error I was receiving via the ansible module:

# podman image trust set -t reject default

# podman rmi localhost/test
Untagged: localhost/test:latest
Deleted: 9c648fb796ec07eec96904bcb22b79575bf1b6f45366523e5d036fb8ef457392

# podman build --pull=never -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
COMMIT localhost/test
--> 4400bc04bd5
Successfully tagged localhost/test:latest
4400bc04bd5184a7e5060a03a2bdfd571f9b97784488573eb663672efcaf803a

# podman build --pull=never -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
--> Using cache 4400bc04bd5184a7e5060a03a2bdfd571f9b97784488573eb663672efcaf803a
COMMIT localhost/test
Error: error copying image "4400bc04bd5184a7e5060a03a2bdfd571f9b97784488573eb663672efcaf803a": Source image rejected: Running image containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@4400bc04bd5184a7e5060a03a2bdfd571f9b97784488573eb663672efcaf803a is rejected by policy.

The --pull option doesn't seem to matter; without it I still get the same error:

# podman rmi localhost/test
Untagged: localhost/test:latest
Deleted: 4400bc04bd5184a7e5060a03a2bdfd571f9b97784488573eb663672efcaf803a

# podman build -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
COMMIT localhost/test
--> fd051bd97cc
Successfully tagged localhost/test:latest
fd051bd97cc204e73ca3fa01459a5d9e02d11ce525a72ef433fb4bed5d8b9ad5

# podman build -t localhost/test .
STEP 1/2: FROM registry.access.redhat.com/ubi7/ubi:latest
STEP 2/2: CMD echo "Hello World!"
--> Using cache fd051bd97cc204e73ca3fa01459a5d9e02d11ce525a72ef433fb4bed5d8b9ad5
COMMIT localhost/test
Error: error copying image "fd051bd97cc204e73ca3fa01459a5d9e02d11ce525a72ef433fb4bed5d8b9ad5": Source image rejected: Running image containers-storage:[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@fd051bd97cc204e73ca3fa01459a5d9e02d11ce525a72ef433fb4bed5d8b9ad5 is rejected by policy.

So, based on this test, it seems this isn't specific to the Ansible collection. Does this issue need to be moved to the podman project?

tazerdev avatar Aug 30 '22 10:08 tazerdev

It has been transferred to Podman.

rhatdan avatar Aug 30 '22 10:08 rhatdan

Thanks! I am pulling in @mtrmac, the SME.

vrothberg avatar Aug 30 '22 10:08 vrothberg

Buildah’s commit logic explicitly overrides policy.json and allows reading from c/storage: https://github.com/containers/buildah/blob/5de65a079bff410700f55c25c613597e9caad7a7/commit.go#L282

The second build triggers Buildah’s cache-reuse and tagging logic, which does not have that explicit override: https://github.com/containers/buildah/blob/5de65a079bff410700f55c25c613597e9caad7a7/imagebuildah/stage_executor.go#L1634 .

So, this seems to be a bug in Buildah’s stageExecutor.tagExistingImage. (And aesthetically I’d prefer the two to share a bit more code, e.g. this is the only caller of util.GetPolicyContext in the whole Buildah codebase).

So, transferring to Buildah.
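In policy.json terms, the commit path effectively behaves as if a scope like the following were present (illustrative only; the override is done in code, not via policy.json):

```json
{
  "transports": {
    "containers-storage": {
      "": [{"type": "insecureAcceptAnything"}]
    }
  }
}
```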

mtrmac avatar Aug 30 '22 19:08 mtrmac

This is a buildah issue; I'll try opening a patch for this.

flouthoc avatar Aug 31 '22 12:08 flouthoc

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Oct 01 '22 00:10 github-actions[bot]

@flouthoc did you ever get a chance to look at this?

rhatdan avatar Oct 01 '22 09:10 rhatdan

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Nov 01 '22 00:11 github-actions[bot]

I'll check this

flouthoc avatar Nov 01 '22 12:11 flouthoc

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Dec 03 '22 00:12 github-actions[bot]

@flouthoc did you check?

rhatdan avatar Dec 05 '22 19:12 rhatdan

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Jan 06 '23 00:01 github-actions[bot]

I missed creating a PR for this; I'll take it.

flouthoc avatar Jan 07 '23 07:01 flouthoc

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Feb 07 '23 00:02 github-actions[bot]

@flouthoc any update?

rhatdan avatar Feb 08 '23 01:02 rhatdan

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Mar 12 '23 00:03 github-actions[bot]

A friendly reminder that this issue had no activity for 30 days.

github-actions[bot] avatar Apr 13 '23 00:04 github-actions[bot]