Debian 13 Freeze Dates Announced; add Trixie to approved OSes
Hello folks! The Debian 13 freeze dates were recently announced: see the announcement and the policy. In short, after the toolchain freeze on March 15th, no significant library changes will occur; by May 15th, the trixie release will be impending and all changes will be limited to specific bugfixes. In light of that, support for Trixie will be needed in the near future, and problems that occur on today's installs of Trixie will likely still occur on installs made after the release.
I'm going to be setting up a Supervised install on a Debian 13 box (which will need some unusual configuration, such as hosting a Wi-Fi network only for devices, without bridging it to a larger secured LAN). I'd encourage the team to add Trixie to the list of supported OSes in the near future, perhaps with a note that bugs are likely. If I find any issues that I can confirm relate to Debian 13, I will report and patch them; I hope these issues and patches are accepted, even if I am acting a bit early.
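For context, here is roughly what I mean by that Wi-Fi setup; a minimal sketch assuming NetworkManager (which Supervised requires anyway), with the interface name, connection name, SSID and passphrase all being placeholders:

```
# Stand up a device-only access point that NetworkManager NATs itself,
# instead of bridging it into the existing LAN.
nmcli connection add type wifi ifname wlan0 con-name iot-ap autoconnect yes ssid iot-devices
nmcli connection modify iot-ap 802-11-wireless.mode ap 802-11-wireless.band bg ipv4.method shared ipv6.method disabled
nmcli connection modify iot-ap wifi-sec.key-mgmt wpa-psk wifi-sec.psk "change-me"
nmcli connection up iot-ap
```

The `ipv4.method shared` part is what makes NetworkManager hand out addresses and NAT the AP's traffic itself, keeping the devices off the secured LAN segment.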
Thanks for stepping up and looking into this!
> I hope these issues and patches are accepted, even if I am acting a bit early.
If they don't break new Debian 12 installs and/or existing installations, I don't think it's a problem to accept them now.
Would you prefer separate pull requests for different problems, or one big pull request?
Personally, I would say separate pull requests would be best.
Update: Docker isn't yet distributing packages built for Trixie (a problem in itself), and there also seems to be an issue affecting the Debian-packaged version of Docker (docker.io). I'm going to continue investigating, but hopefully these bugs will be fixed in the various upstreams soon.
Looks like the Debian issue has been fixed since docker.io 26.1.5+dfsg1-8 (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1092165#56); however, the fix is not yet in trixie/testing: https://tracker.debian.org/pkg/docker.io
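For anyone following along, a quick way to see which docker.io version a Trixie box currently has versus what apt would install (nothing here is Supervised-specific; the version strings above are the ones to look for):

```
# Installed version, if any
dpkg-query -W -f='${Version}\n' docker.io

# Candidate version offered by the configured suites
apt-cache policy docker.io
```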
26.1.5+dfsg1-9 has landed in testing
> 26.1.5+dfsg1-9 has landed in testing
Yup, and Docker have also started building Bookworm images. Things have been running fine.
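If anyone wants to double-check what Docker's own apt repository publishes, the dists index should list the available suites; this assumes the repository index is browsable over HTTPS (it follows the standard Debian repo layout):

```
# Show the directory index of Docker's Debian apt repository and look
# for the suite names (e.g. bookworm, trixie) in the listing
curl -fsSL https://download.docker.com/linux/debian/dists/
```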
I should note that I recently saw a NEWS entry about systemd-resolved dropping support for mDNS. While my install doesn't use mDNS, someone might want to look into that for general installations.
Indeed, it seems mDNS got disabled:
```
root@debian13-supervised-test:~# resolvectl
Global
       Protocols: +LLMNR -mDNS -DNSOverTLS DNSSEC=no/unsupported
resolv.conf mode: uplink
```
The Debian bug #1098914 mainly points to the fact that it conflicts with avahi-daemon. The decision was to make avahi-daemon the default, hence mDNS is disabled in systemd-resolved by default. There were concerns about avahi-daemon's dated codebase and poorly maintained state, so the decision has only been made for trixie. However, Supervisor does rely on systemd-resolved (more specifically, the DNS plug-in uses it to resolve local mDNS-announced hostnames).
So we definitely should opt out of this new default and continue to use systemd-resolved for mDNS. It looks rather straightforward, going by https://sources.debian.org/src/systemd/257.6-1/debian/systemd-resolved.NEWS/; I've tested it here, and indeed that brings mDNS back as before 🎉 The relevant NEWS entry:
```
systemd (257.4-9) unstable; urgency=medium

  * Starting with systemd/257.4-9 mDNS is disabled in systemd-resolved.
    Users relying on mDNS for reachability of a machine MUST mask the
    associated drop-in before upgrading, or the machine will become
    unreachable upon package upgrade. To mask the drop-in, as root:

      mkdir -p /etc/systemd/resolved.conf.d/
      touch /etc/systemd/resolved.conf.d/00-disable-mdns.conf

 -- Luca Boccassi <[email protected]>  Wed, 02 Apr 2025 11:19:09 +0100
```
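For anyone hitting this on a box that has already upgraded, this is the masking approach from the NEWS entry as I tested it; the restart and the verification line are my additions (run as root):

```
# Mask the Debian drop-in that turns mDNS off in systemd-resolved
mkdir -p /etc/systemd/resolved.conf.d/
touch /etc/systemd/resolved.conf.d/00-disable-mdns.conf

# Apply and verify: the Protocols line should show +mDNS again
systemctl restart systemd-resolved
resolvectl | grep -i mdns
```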