
[BUG] archive.extracted doesn't use pillar data for s3.key and s3.keyid

Open lmf-mx opened this issue 3 years ago • 2 comments

Description

This is the same behavior as #13850, except that here the caching is run from archive.extracted. If the s3.key and s3.keyid values are not set in the minion config files, an attempt to grab IAM roles is made, followed by an exception.

Could not fetch from s3://<path>/file.tar. Exception: Failed to get file. InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

If the same values that are assigned to the pillars from the master are directly added as local config on the minion, the caching succeeds. #28630 contained a fix for some other modules in https://github.com/saltstack/salt/pull/28630/commits/4d38687ed6ccdd999e543c96993cabcdd3207920

Running modules on the minion that use the pillar data works, e.g. s3.head <bucket> <file.tar>.

Setup

unpack_cb_installer:
  archive.extracted:
    - name: /tmp/cbinstaller
    - source: s3://<path>/file.tar
    - source_hash: <hash>
    - unless:
      - fun: pkg.version
        args:
          - <pkgname>

Steps to Reproduce the behavior

Run salt 'minion' state.sls_id unpack_cb_installer cb

----------
          ID: unpack_cb_installer
    Function: archive.extracted
        Name: /tmp/cbinstaller
      Result: False
     Comment: Could not fetch from s3://<path>/file.tar. Exception: Failed to get file. InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
     Started: 12:42:18.517717
    Duration: 8701.662 ms
     Changes:
----------

Expected behavior

Pillar data provided by the master should be used for archive.extracted when using s3:// as a source.

Versions Report

minion# salt-call --versions-report
Salt Version:
          Salt: 3003

Dependency Versions:
          cffi: Not Installed
      cherrypy: Not Installed
      dateutil: Not Installed
     docker-py: Not Installed
         gitdb: Not Installed
     gitpython: Not Installed
        Jinja2: 2.11.1
       libgit2: Not Installed
      M2Crypto: 0.35.2
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: Not Installed
      pycrypto: Not Installed
  pycryptodome: Not Installed
        pygit2: Not Installed
        Python: 3.6.8 (default, Nov 16 2020, 16:55:22)
  python-gnupg: Not Installed
        PyYAML: 3.13
         PyZMQ: 17.0.0
         smmap: Not Installed
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.1.4

System Versions:
          dist: centos 7 Core
        locale: UTF-8
       machine: x86_64
       release: 3.10.0-1160.25.1.el7.x86_64
        system: Linux
       version: CentOS Linux 7 Core


master# salt-call --versions-report
Salt Version:
          Salt: 3003

Dependency Versions:
          cffi: 1.14.5
      cherrypy: 8.9.1
      dateutil: 2.7.3
     docker-py: Not Installed
         gitdb: 2.0.6
     gitpython: 3.0.7
        Jinja2: 2.10.1
       libgit2: 0.28.4
      M2Crypto: 0.31.0
          Mako: Not Installed
       msgpack: 0.6.2
  msgpack-pure: Not Installed
  mysql-python: Not Installed
     pycparser: 2.20
      pycrypto: Not Installed
  pycryptodome: 3.6.1
        pygit2: 0.28.2
        Python: 3.8.10 (default, Jun  2 2021, 10:49:15)
  python-gnupg: 0.4.5
        PyYAML: 5.3.1
         PyZMQ: 18.1.1
         smmap: 2.0.5
       timelib: Not Installed
       Tornado: 4.5.3
           ZMQ: 4.3.2

System Versions:
          dist: ubuntu 20.04 focal
        locale: utf-8
       machine: x86_64
       release: 5.4.0-67-generic
        system: Linux
       version: Ubuntu 20.04 focal

lmf-mx commented Jul 07 '21 18:07

I tried using file.managed as a workaround. After expanding past my initial test minion, I found that I must have had cached data; pulling the key/keyid from pillar for file.managed has the same issue.

lmf-mx commented Jul 12 '21 20:07

There seems to be some difference in how s3 credentials in the pillar are read when using the s3 module vs using an s3:// source in a file.managed or similar state.

The documentation for file.managed suggests reading the s3 module docs for configuration, and those docs say the pillar key should be the literal s3.keyid.

But fileclient.py actually behaves differently and uses the pillar key s3:keyid.

So while setting s3.keyid in the pillar will work when you try to use s3.get or whatever, it will fail for the file.managed states.

I'm not sure if this should be a documentation change or if fileclient.py itself is wrong.

The discrepancy is here https://github.com/keslerm/salt/blob/master/salt/fileclient.py#L559-L564
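The practical difference between the two key styles is that a colon-delimited key like s3:keyid is traversed through nested dictionaries, while the dotted s3.keyid is just a literal top-level key name. A minimal Python sketch of that distinction (this is not Salt's actual code; the traverse helper only mimics the spirit of salt.utils.data.traverse_dict_and_list, and the pillar values are placeholders):

```python
def traverse(data, key, delimiter=":"):
    """Colon-delimited nested lookup: walk one dict level per key part."""
    for part in key.split(delimiter):
        if isinstance(data, dict) and part in data:
            data = data[part]
        else:
            return None
    return data

# Pillar written with the literal top-level key the s3 module docs describe:
flat_pillar = {"s3.keyid": "<keyid>", "s3.key": "<key>"}

# Nested pillar shape that an "s3:keyid" traversal expects:
nested_pillar = {"s3": {"keyid": "<keyid>", "key": "<key>"}}

print(traverse(flat_pillar, "s3:keyid"))    # None: nested lookup finds nothing
print(traverse(nested_pillar, "s3:keyid"))  # "<keyid>": nested lookup works
print(flat_pillar.get("s3.keyid"))          # "<keyid>": flat lookup works
```

So a pillar using the flat s3.keyid key satisfies the s3 execution module but not a colon-delimited lookup in the fileclient, and vice versa, which would explain why s3.get works while file.managed and archive.extracted fail with the same pillar.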

keslerm commented Jul 28 '22 14:07