yum-s3-iam
Using the s3 plugin with DNF
I am able to use the plugin successfully on a CentOS 7 based EC2 instance, but I am struggling to use it on my Fedora 28 laptop.
The current RPM installs the plugin and config into yum-related directories.
Am I correct in assuming that the config should go into /etc/dnf/plugins/ and the plugin itself into /usr/lib/python2.7/site-packages/dnf-plugins?
On my Fedora 28 laptop, the current plugins are all Python 3. Would the s3 plugin work with Python 3, or only 2.7? Thanks in advance.
The gap between the underlying plugin interfaces of Yum and DNF is rather large, more than just "migrate to Python 3 syntax", so I don't expect the code in its current state to work.
I would like to see this feature though (I may even pick up implementing it if I find the time). Debian/Ubuntu have apt-transport-s3 in all of their supported major versions, so closing this gap on DNF systems would bring EL8 (and the Fedoras) up to parity in terms of using private S3 buckets as a repo.
I had a look at writing a DNF plugin. The API has a lot of similarities with yum's. You can set request headers for a repository, but unfortunately that's not enough.
https://dnf.readthedocs.io/en/latest/api_repos.html?highlight=repo#dnf.repo.Repo.set_http_headers
To make a GET request to S3, you have to do some complicated stuff to calculate a signature and one of the inputs is the request path. There's no way of knowing what path dnf needs to grab unless you can intercept each request and calculate the headers each time.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/RESTAuthentication.html
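To make the "complicated stuff" concrete, here is a rough, untested sketch of what SigV4 signing of a single S3 GET involves (plain stdlib Python; `sign_s3_get` is a name I made up, and it skips session tokens, URI encoding, and error handling). The key point is that the object path is one of the inputs to the signature, so the signer has to see every path DNF is about to request:

```python
import datetime
import hashlib
import hmac


def sign_s3_get(access_key, secret_key, region, bucket, key):
    host = "{0}.s3.{1}.amazonaws.com".format(bucket, region)
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    payload_hash = hashlib.sha256(b"").hexdigest()  # GET has an empty body

    # 1. Canonical request: note the request path is baked into the signature.
    canonical_request = "\n".join([
        "GET",
        "/" + key,
        "",                                         # no query string
        "host:" + host,
        "x-amz-content-sha256:" + payload_hash,
        "x-amz-date:" + amz_date,
        "",
        "host;x-amz-content-sha256;x-amz-date",
        payload_hash,
    ])

    # 2. String to sign.
    scope = "{0}/{1}/s3/aws4_request".format(datestamp, region)
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # 3. Derive the signing key from the secret key and sign.
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    k = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return {
        "x-amz-date": amz_date,
        "x-amz-content-sha256": payload_hash,
        "Authorization": (
            "AWS4-HMAC-SHA256 Credential={0}/{1}, "
            "SignedHeaders=host;x-amz-content-sha256;x-amz-date, "
            "Signature={2}".format(access_key, scope, signature)
        ),
    }
```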
Yum's `YumRepository` class exposes a method that returns a `URLGrabber` interface. So if you write your own class with `YumRepository` as its base and override the `grab` method, you can use your own `URLGrabber` and intercept requests. That's what this plugin does:
https://github.com/seporaitis/yum-s3-iam/blob/master/s3iam.py#L181-L196
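In rough terms, the idea looks like this (a simplified sketch, not the plugin's code verbatim; the class names echo the plugin but the bodies are cut down): a repository class whose `grab` property returns our own grabber, so every fetch passes through code that sees the exact path and can sign it before downloading.

```python
import urllib2  # yum plugins run under Python 2

from yum.yumRepo import YumRepository


class S3Grabber(object):
    def __init__(self, baseurl):
        self.baseurl = baseurl.rstrip("/")

    def urlgrab(self, path, filename, **kwargs):
        url = self.baseurl + "/" + path.lstrip("/")
        request = urllib2.Request(url)
        # ...here the real plugin computes and attaches the AWS auth headers
        # for this specific path, using the instance's IAM role credentials...
        response = urllib2.urlopen(request)
        with open(filename, "wb") as out:
            out.write(response.read())
        return filename


class S3Repository(YumRepository):
    @property
    def grabfunc(self):
        raise NotImplementedError("grabfunc should not be used")

    @property
    def grab(self):
        # yum calls repo.grab.urlgrab(relative_path, local_file, ...) for
        # every download, which is the interception point.
        if not hasattr(self, "_grab"):
            self._grab = S3Grabber(self.urls[0])
        return self._grab
```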
DNF's API has nothing like this as far as I can see. In the changelogs I spotted that they've explicitly dropped `URLGrabber`.
https://dnf.readthedocs.io/en/latest/api_repos.html?highlight=repo#dnf.repo.Repo
I think the only way to do this with DNF would be to write an HTTP proxy that either goes to the metadata API on the local machine, or gets passed the IAM credentials via headers by the DNF client injected by a plugin.
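For reference, the "goes to the metadata API on the local machine" half is the easy part. A minimal sketch (Python 3 stdlib, IMDSv1 for brevity, no retries or error handling; `instance_credentials` is just an illustrative name, and newer instances with IMDSv2 would need a session token first):

```python
import json
import urllib.request

IMDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"


def instance_credentials():
    # First request returns the attached role name, second returns temporary keys.
    role = urllib.request.urlopen(IMDS, timeout=2).read().decode()
    creds = json.loads(urllib.request.urlopen(IMDS + role, timeout=2).read().decode())
    return creds["AccessKeyId"], creds["SecretAccessKey"], creds["Token"]
```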
Here's some scaffolding for a DNF plugin if anybody wants to give it a go:
#!/usr/bin/env python
import dnf


class BucketIAMPlugin(dnf.Plugin):
    name = "bucketiam"

    def __init__(self, base, cli):
        super(BucketIAMPlugin, self).__init__(base, cli)

    def config(self):
        # reads /etc/dnf/plugins/bucketiam.conf
        conf = self.read_config(self.base.conf)
        for repo in self.base.repos.all():
            # set_http_headers() takes "Name: value" strings; the value would
            # have to be a precomputed signature, which is exactly the
            # limitation described above
            repo.set_http_headers(["Authorization: "])
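If anyone does pick this up: on current Fedora/EL8 a DNF plugin like this would be dropped into /usr/lib/python3.X/site-packages/dnf-plugins/, with its configuration read from /etc/dnf/plugins/bucketiam.conf, which also answers the directory question at the top of this thread.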
One more thing. There's a `urlopen` method in the `dnf.Base` class. I tried monkey patching it but couldn't get it to be called, and even if it worked, it would intercept all requests, which is less than ideal. It would also conflict with other plugins that might do the same thing.
Possibly related: I was poking around the DNF repos and noticed this issue, looking to replace librepo with powerloader:
- https://github.com/rpm-software-management/libdnf/issues/1452
And powerloader claims native S3 support...
I've found an existing proxy seemingly built by Amazon themselves that can create SigV4 headers. I haven't tried it but see no reason this shouldn't work.
https://github.com/awslabs/aws-sigv4-proxy
Proxy works. Leaving notes here for anybody who might want them.
- Start it up
$ docker run --rm -d -p 8080:8080 public.ecr.aws/aws-observability/aws-sigv4-proxy
- Create a new `.repo` file in /etc/yum.repos.d
[<REPO NAME>]
name = <REPO NAME>
baseurl = http://s3.<BUCKET REGION>.amazonaws.com/<BUCKET NAME>/<REPO NAME>
proxy = http://localhost:8080
where `<REPO NAME>` would be the name of a DNF repository, like `baseos` or `appstream`. It should be a directory in your bucket. Note that although the `baseurl` is an HTTP URL, the proxy makes an HTTPS connection to S3.
- Enjoy
- Consider enabling GPG checking, for example
[<REPO NAME>]
...
+ gpgcheck = 1
+ gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-Rocky-9
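One way to sanity-check the setup before a real install is to rebuild just that repo's metadata cache, e.g. `dnf --disablerepo='*' --enablerepo='<REPO NAME>' makecache`, which exercises the proxy path end to end.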
Hi, I tried to use this approach but I'm receiving a 502 error like:
Errors during downloading metadata for repository 'test':
  - Status code: 502 for http://xxxxxxxxx.s3.eu-west-1.amazonaws.com/releases/repodata/repomd.xml (IP: 127.0.0.1)
Error: Failed to download metadata for repo 'test': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
I am also getting a 502 like grzegorz-gn. Any resolutions?
For anybody who stumbles into the 502 problem, we were able to solve it using the following steps:
- Change the docker command as follows
docker run --rm -d -e 'AWS_ACCESS_KEY_ID=<AWS KEYS>' \
-e 'AWS_SECRET_ACCESS_KEY=<AWS ACCESS KEY>' \
-e 'AWS_SESSION_TOKEN=<AWS SESSION TOKEN>' \
-p 8921:8080 public.ecr.aws/aws-observability/aws-sigv4-proxy \
--verbose \
--log-failed-requests \
--log-signing-process \
--no-verify-ssl \
--name s3 \
--host s3.amazonaws.com \
--region us-east-1 \
--sign-host s3.amazonaws.com
As per the aws-sigv4-proxy documentation, the 'Host' header must be passed along with the request. The 502 error is due to the default behavior of aws-sigv4-proxy not passing the 'Host' header, so we had to add the host and sign-host options to the docker parameters when starting the container to get it to work.
- We had to change the conf file as follows
[s3-noarch]
name=S3 DNF repo
baseurl=http://localhost:8921/<BUCKET_NAME>/<PATH TO YUM REPO>
enabled=1
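If the repo still returns a 502, a quick way to test the proxy on its own (using the same port and placeholders as above) is to fetch the metadata file directly through it and check what comes back, e.g. `curl -v http://localhost:8921/<BUCKET_NAME>/<PATH TO YUM REPO>/repodata/repomd.xml`.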
Hope it helps.