Rocky Linux 8 - dnf update fails with error: Status code: 403 for repomd.xml
Problem description
My setup is as follows:
- Uyuni release [2023.10] serving the common channel rockylinux8-x86_64
- Freshly setup Rocky Linux 8.8 Client
- Bootstrapping via the Uyuni GUI worked without any problems.
- "dnf repolist" on the client shows all channels correctly
When I run "dnf update", I get the following error:
[root@client ~]# dnf update
Rocky Linux 8 (x86_64) 44 kB/s | 3.9 kB 00:00
Errors during downloading metadata for repository 'susemanager:rockylinux8-x86_64':
- Status code: 403 for https://uyuni-server:443/rhn/manager/download/rockylinux8-x86_64/repodata/repomd.xml (IP: 10.1.1.1)
Error: Failed to download metadata for repo 'susemanager:rockylinux8-x86_64': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
I had this problem a few months ago, and this article helped me solve it: https://www.suse.com/de-de/support/kb/doc/?id=000019499 Basically, you have to regenerate the cache by performing these steps on the Uyuni server:
- remove all files in /var/cache/rhn/repodata/rockylinux8-x86_64
- run the command "spacecmd softwarechannel_regenerateyumcache rockylinux8-x86_64"
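For reference, the two steps above can be sketched as a small shell snippet (the channel label is the one from this report; adjust paths for your environment):

```shell
# Clear the cached repodata for the channel, then regenerate it.
# The channel label rockylinux8-x86_64 is taken from this report.
rm -f /var/cache/rhn/repodata/rockylinux8-x86_64/*

# Guarded so the snippet is a no-op on hosts without spacecmd installed.
if command -v spacecmd >/dev/null 2>&1; then
    spacecmd softwarechannel_regenerateyumcache rockylinux8-x86_64
fi
```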
However, this time that approach didn't work. I verified that the susemanager_token works by taking the token from /etc/yum.repos.d/susemanager:channels.repo and using it with curl:
curl -H "X-Mgr-Auth: [eyJhbGciOiJI...snip...XgMV-vXDEkMo4tDoGY]" -k https://uyuni-server:443/rhn/manager/download/rockylinux8-x86_64/repodata/repomd.xml
This gives me the content of repomd.xml:
<?xml version="1.0" encoding="UTF-8"?>
<repomd xmlns="http://linux.duke.edu/metadata/repo"><data type="primary"><location href="repodata/71f62d6dadfbf3238ce701da43cb69958ce4c546cc370f92e70ba933f3193c23-comps.xml"/><checksum type="sha256">71f62d6dadfbf3238ce701da43cb69958ce4c546cc370f92e70ba933f3193c23</checksum>
... snip ...
<timestamp>1700709915</timestamp></data></repomd>
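The manual check can be automated with a small sketch that pulls the token straight out of the repo file instead of copying it by hand (file path and channel label are the ones from this report):

```shell
# Extract the susemanager_token value from the client repo file and
# replay the repomd.xml request with curl, as in the manual test above.
repo=/etc/yum.repos.d/susemanager:channels.repo
token=$(grep -m1 '^susemanager_token' "$repo" 2>/dev/null | cut -d= -f2- | tr -d ' ')

# -k skips certificate verification, matching the original curl call.
curl -sk -H "X-Mgr-Auth: ${token}" \
    "https://uyuni-server/rhn/manager/download/rockylinux8-x86_64/repodata/repomd.xml" \
    | head -n 3
```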
These entries show up in the Apache access_log. Request made with curl (code 200):
10.1.1.1 - - [23/Nov/2023:17:08:40 +0100] "GET /rhn/manager/download/rockylinux8-x86_64/repodata/repomd.xml HTTP/1.1" 200 2248
Request made with dnf (code 403):
10.1.1.1 - - [23/Nov/2023:17:09:08 +0100] "GET /rhn/manager/download/rockylinux8-x86_64/repodata/repomd.xml HTTP/1.1" 403 3971
Any ideas why dnf/yum is failing? Any advice appreciated.
Best regards, Rene
Steps to reproduce
- Create Rocky Linux 8 Common repository (spacewalk-common-channels -a x86_64 rockylinux8)
- Bootstrap a client (set up from a USB stick)
- run dnf update
Uyuni version
Loading repository data...
Reading installed packages...
Information for package Uyuni-Server-release:
---------------------------------------------
Repository : Uyuni Server Stable
Name : Uyuni-Server-release
Version : 2023.10-230900.209.1.uyuni3
Arch : x86_64
Vendor : obs://build.opensuse.org/systemsmanagement:Uyuni
Support Level : Level 3
Installed Size : 1.4 KiB
Installed : Yes
Status : up-to-date
Source package : Uyuni-Server-release-2023.10-230900.209.1.uyuni3.src
Summary : Uyuni Server
Description :
Uyuni lets you efficiently manage physical, virtual,
and cloud-based Linux systems. It provides automated and cost-effective
configuration and software management, asset management, and system
provisioning.
Uyuni proxy version (if used)
No response
Useful logs
/var/log/rhn/rhn_web_ui reports these lines:
2023-11-23 17:25:18,148 [ajp-nio-0:0:0:0:0:0:0:1-8009-exec-4] INFO com.suse.manager.webui.controllers.DownloadController - Forbidden: invalid token eyJhbGciOiJI...snip...XgMV-vXDEkMo4tDoGY to access repomd.xml
But that's exactly the same token that works with curl.
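One way to dig further is to decode the token itself: the susemanager_token is a JWT, and its payload carries the expiry ("exp") and the channels it is valid for ("onlyChannels"). A sketch follows; the sample token below is a made-up stand-in, so paste the real value from /etc/yum.repos.d/susemanager:channels.repo instead.

```shell
# Decode the payload (second dot-separated field) of a JWT.
# The token below is a dummy whose payload is {"exp":1747506139}.
token='eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3NDc1MDYxMzl9.c2ln'
payload=$(echo "$token" | cut -d. -f2 | tr '_-' '/+')

# base64 -d needs the input padded to a multiple of 4 characters.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done

echo "$payload" | base64 -d
echo
```

An expired "exp", or a channel missing from "onlyChannels", would explain a 403 even though the token string itself looks unchanged.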
Additional information
No response
Hey @babelr thanks for the report. Are you using the traditional stack instead of salt?
Hi @avshiliaev I registered the client using the bootstrap feature on the uyuni gui, which installs the salt client. I did not use the legacy rhn tools to register the client. However, I still use some legacy config channels to deploy config files to the clients.
@babelr can you perform those actions from the server, though (install and upgrade packages)? If not, do you have any errors in the logs under /var/log/rhn/reposync?
Please also take a look at the discussion here
- https://github.com/uyuni-project/uyuni/issues/6820
I ran into this after upgrading a couple of hosts from Rocky Linux 8 to 9, with server 2024.05. I get the same 403 Forbidden in the failed event on the server, so @mcalmer's last update in #6820 is incorrect. What's strange is that dnf commands worked for a little while after the upgrade. Crypto policies are set to default as well. I used Uyuni to change software channels and re-disabled all the Rocky repos after the upgrade, leaving only the susemanager repo file with enabled repos.
I'll also confirm that the repo hitting the failure seems to be the first one tried; if you disable that one, the next repo in line gets the same error, and none of the logs in /var/log/rhn/reposync contain any errors.
This seems to be not reposync, but a client trying to download a package from Uyuni. Check /var/log/rhn/rhn_web_ui.log:
2024-05-17 11:28:49,660 [ajp-nio-127.0.0.1-8009-exec-8] INFO com.suse.manager.webui.controllers.DownloadController - Forbidden: You need a token to access /manager/download/rockylinux9-x86_64-extras/repodata/repomd.xml
I was able to disable auth tokens as described in #6820 and then the clients started working. What would cause the auth tokens to start failing after an upgrade?
Did you set this in /etc/rhn/rhn.conf or somewhere under /usr? Check if the setting is still there.
I set it in /etc/rhn/rhn.conf; it's still there, as I only set it on Friday so I could continue working with updates.
java.salt_check_download_tokens = 0
The rest of the Salt stack (config files and states) seemed to be working, though; it was just repo access that broke.
/etc/rhn/rhn.conf is marked as config(noreplace), so an RPM update should not touch it. No idea how it got lost.
The error "you need a token ..." sounds like no token was sent by dnf. We had a situation where one of the plugins provided by third parties replaced the HTTP headers instead of just adding the header fields it needed, and in doing so removed the token from the header.
Maybe you can check which plugins you are using and whether one of them messes with the header fields.
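A quick way to enumerate the plugin files dnf can load (the directory names below are the usual EL8/EL9 locations; adjust for the Python versions on your client):

```shell
# List every dnf plugin file present, across all installed Python
# versions, to spot third-party plugins that might rewrite headers.
for d in /usr/lib/python3.*/site-packages/dnf-plugins /usr/share/dnf-plugins; do
    if [ -d "$d" ]; then
        echo "== $d =="
        ls "$d"
    fi
done
```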
I don't have any DNF plugins that I'm aware of. Also, SUSE and RHEL systems still work. Maybe a bug with the EL9 packages? I can open a separate bug if needed. I'm also still running into issues with forcing GPG checks on my repos (#8563), but I don't think that's related.
rpm -qa|grep yum
yum-utils-4.3.0-13.el9.noarch
yum-4.14.0-9.el9.noarch
rpm -qa|grep dnf
libdnf-0.69.0-8.el9.i686
python3-libdnf-0.69.0-8.el9.x86_64
python3-dnf-4.14.0-9.el9.noarch
libdnf-plugin-subscription-manager-1.29.40-1.el9.rocky.0.1.x86_64
dnf-4.14.0-9.el9.noarch
dnf-plugins-core-4.3.0-13.el9.noarch
dnf-data-4.14.0-9.el9.noarch
kpatch-dnf-0.9.7_0.4-2.el9.noarch
libdnf-0.69.0-8.el9.x86_64
python3-dnf-plugins-core-4.3.0-13.el9.noarch
less /etc/yum/pluginconf.d/
copr.conf debuginfo-install.conf langpacks.conf.rpmsave subscription-manager.conf venv-dnfnotify.conf
copr.d/ kpatch.conf product-id.conf susemanagerplugin.conf
Just found this in a dnf log on a client:
Unknown configuration option: susemanager_token = eyJhbGciOiJIUzI1NiJ9.eyJleHAiOjE3NDc1MDYxMzksImlhdCI6MTcxNTk3MDEzOSwibmJmIjoxNzE1OTcwMDE5LCJqdGkiOiJZd0tlMkhtVzlvOVVDYTY2YnR6UlF3Iiwib3JnIjoxLCJvbmx5Q2hhbm5lbHMiOlsicmw5LXN1c2UtbWFuYWdlci10b29scy14ODZfNjQiXX0.AbppEhATFzKHv_IvHUjXh4dDf7G9OklfPFhz1PX_2cQ in /etc/yum.repos.d/susemanager:channels.repo
Ah ha, looks like I hit what was reported in #6820: susemanagerplugin.py is missing for Python 3.9, and I am using venv-salt.
[root@foobar ~]# systemctl status venv-salt-minion.service
● venv-salt-minion.service - The venvjailed Salt Minion
Loaded: loaded (/usr/lib/systemd/system/venv-salt-minion.service; enabled; preset: disabled)
Drop-In: /etc/systemd/system/venv-salt-minion.service.d
└─TMPDIR.conf
Active: active (running) since Tue 2024-05-21 14:19:20 CDT; 18h ago
Main PID: 797 (python.original)
Tasks: 6 (limit: 100036)
Memory: 325.4M
CPU: 1h 35min 19.867s
CGroup: /system.slice/venv-salt-minion.service
├─ 797 /usr/lib/venv-salt-minion/bin/python.original /usr/lib/venv-salt-minion/bin/salt-minion
├─ 1414 /usr/lib/venv-salt-minion/bin/python.original /usr/lib/venv-salt-minion/bin/salt-minion
└─17691 /usr/bin/python3.9 /usr/bin/dnf -q needs-restarting -r
[root@foobar ~]# ll /usr/lib/python3.6/site-packages/dnf-plugins/
total 4
drwxr-xr-x. 2 root root 46 May 17 13:34 __pycache__
-rw-r--r--. 1 root root 1186 Feb 2 08:10 susemanagerplugin.py
[root@foobar ~]# ll /usr/lib/python3.9/site-packages/dnf-plugins/
total 264
-rw-r--r--. 1 root root 9346 Sep 9 2022 builddep.py
-rw-r--r--. 1 root root 4967 Sep 9 2022 changelog.py
-rw-r--r--. 1 root root 10885 Sep 9 2022 config_manager.py
-rw-r--r--. 1 root root 30298 Sep 9 2022 copr.py
-rw-r--r--. 1 root root 11084 Sep 9 2022 debuginfo-install.py
-rw-r--r--. 1 root root 12558 Sep 9 2022 debug.py
-rw-r--r--. 1 root root 12330 Sep 9 2022 download.py
-rw-r--r--. 1 root root 3948 Sep 9 2022 generate_completion_cache.py
-rw-r--r--. 1 root root 13532 Sep 9 2022 groups_manager.py
-rw-r--r--. 1 root root 8318 Apr 20 2023 kpatch.py
-rw-r--r--. 1 root root 11251 Apr 18 00:51 needs_restarting.py
-rw-r--r--. 1 root root 9087 Apr 20 03:28 product-id.py
drwxr-xr-x. 2 root root 4096 May 17 13:32 __pycache__
-rw-r--r--. 1 root root 7052 Sep 9 2022 repoclosure.py
-rw-r--r--. 1 root root 11475 Sep 9 2022 repodiff.py
-rw-r--r--. 1 root root 4092 Sep 9 2022 repograph.py
-rw-r--r--. 1 root root 10570 Sep 9 2022 repomanage.py
-rw-r--r--. 1 root root 14648 Sep 9 2022 reposync.py
-rw-r--r--. 1 root root 7697 Apr 20 03:28 subscription-manager.py
-rw-r--r--. 1 root root 27521 Apr 18 00:51 system_upgrade.py
-rw-r--r--. 1 root root 2096 Apr 20 03:28 upload-profile.py
-rw-r--r--. 1 root root 1781 May 11 06:13 venv-dnfnotify.py
#4545 might be related to this. I find 2 instances of susemanagerplugin.py on my RL9 systems; RHEL8 systems only have the one in the python3.6 directory.
[root@foobar ~]# find /usr -name susemanagerplugin.py
/usr/lib/python3.6/site-packages/dnf-plugins/susemanagerplugin.py
/usr/share/yum-plugins/susemanagerplugin.py
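A hedged workaround sketch, assuming the root cause here is that the plugin only ships for Python 3.6 while dnf on EL9 runs under Python 3.9: copying the file across should make dnf load it again and send the token header. This is not an official fix; verify the paths on your own system first.

```shell
# Copy susemanagerplugin.py into the Python 3.9 dnf-plugins directory
# if it exists for 3.6 but is missing for 3.9. -p preserves mode/times.
src=/usr/lib/python3.6/site-packages/dnf-plugins/susemanagerplugin.py
dst=/usr/lib/python3.9/site-packages/dnf-plugins/

if [ -f "$src" ] && [ -d "$dst" ] && [ ! -f "${dst}susemanagerplugin.py" ]; then
    cp -p "$src" "$dst"
fi
```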