Problem with qubes-setup-dnat-to-ns
Qubes OS release
4.3
Brief summary
I think this is the offending commit: https://github.com/QubesOS/qubes-core-agent-linux/commit/2256411aeea56acc3a60c43beb28f3f1fc41e879
The script runs normally on "stock" sys-* qubes, but I have a debian-minimal based setup and it is completely broken there (both Debian 12 and 13). The dnat-dns entries are not created, and if I run the script manually I get weird "add table ip qubes -- No such file or directory" errors.
Steps to reproduce
Apply the qubes-core-agent-networking upgrade (in my case 4.3.34 -> 4.3.37) on a debian-minimal based networking qube, then restart the network qube.
Expected behavior
DNS works as expected
Actual behavior
DNS is available on the networking qube, but not on client qubes. nft list table ip qubes shows that the chain dnat-dns does not exist. Running the script from the command line produces:
Error: No such file or directory
list chain ip qubes dnat-dns
^^^^^^^^^^^^
Error: No such file or directory
add table ip qubes
^^^^^^^^^^^
Error: No such file or directory
add table ip qubes
^^^^^^^^^^^
If I create the dnat-dns chain manually, the first error goes away, but the chain remains empty.
Additional information
I have two versions of the minimal network qube, one with network-manager and one without, for wired connection only, and both are broken. Happy to help with further diagnosis; I ran the script with trace, but without arguments and variables shown it is not very informative.
Well, neither of the two has systemd-resolved, so perhaps it is the resolv.conf fallback malfunctioning. I tried manually removing the ipv6 entry, but that did not help either.
Or it could be someplace else! I checked the suspicious commit and now I am unsure it can affect the script's behavior this way, but it is the only one that touches the script.
Interesting, interesting.
I confirm that exactly this patch breaks things.
Before the patch: preconditions: there is no dnat-dns chain in the qubes table. The first run of the script emits this error:
Error: No such file or directory
list chain ip qubes dnat-dns
^^^^^^^
but the chain and all the entries are created correctly. On subsequent runs there is no error.
After the patch: all the errors mentioned above appear, the chain is not created, and the entries are not populated. If I create the chain manually, it does not help; the entries are still not populated.
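One possible mechanism (my assumption, not verified against the actual script): if the patched script submits all of its statements through a single `nft -f` run, nftables applies the file as one atomic transaction, so a single failing reference (for example to a missing table) makes every statement in the batch report an error and nothing at all gets created. That would match both the multiple error lines and the chain never appearing. An illustrative sketch (needs root; the statements are made up for the illustration):

```shell
# Illustrative only (needs root): nft -f applies the whole file as one
# atomic transaction. If any statement fails -- e.g. adding an element
# to a set in a table that does not exist -- the entire batch is
# rejected, so even the otherwise-valid "add table"/"add chain" lines
# take no effect.
nft -f - <<'EOF'
add element ip qubes-firewall dns-addr { 10.139.1.1 }
add table ip qubes
add chain ip qubes dnat-dns { type nat hook prerouting priority dstnat; }
EOF
```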
I've also been seeing this on my sys-net, sys-firewall, and sys-vpn qubes (seen in journalctl -b), all based on Debian 12 or 13 minimal:
<timestamp> <vm> network-proxy-setup.sh[586]: Error: No such file or directory
<timestamp> <vm> network-proxy-setup.sh[586]: list chain ip qubes dnat-dns
<timestamp> <vm> network-proxy-setup.sh[586]: ^^^^^^^^
But ultimately post-boot dnat-dns does get created so I hadn't yet investigated the cause.
But ultimately post-boot dnat-dns does get created
Does your minimal qube have systemd-resolved? Mine does not and I think it triggers the issue.
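For anyone else checking: a quick heuristic (a sketch of my own, assuming the usual Debian layout) is that systemd-resolved normally symlinks /etc/resolv.conf into /run/systemd/resolve/:

```shell
# Heuristic check for systemd-resolved (sketch; assumes the usual
# layout where resolved symlinks resolv.conf into /run/systemd/resolve/).
resolved_managed() {
  case "$(readlink -f "${1:-/etc/resolv.conf}" 2>/dev/null)" in
    /run/systemd/resolve/*) return 0 ;;  # resolved-managed
    *) return 1 ;;                       # plain file, or missing
  esac
}

if resolved_managed; then
  echo "resolv.conf is managed by systemd-resolved"
else
  echo "plain resolv.conf"
fi
```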
None of these qubes have systemd-resolved installed. But neither do debian-12-xfce or debian-13-xfce, so probably that is not the issue?
Hm hm, interesting. However, something is still not right, and it is triggered exactly by this tiny patch (I imported the old script into the new system, and it works). The only other obvious thing missing from my system is dbus-x11, and I doubt that really matters.
If it really is caused by that commit then maybe @HW42 has some thoughts.
I tried to reproduce the reported problem, but unfortunately failed (so far).
I installed a fresh debian-13-minimal version 0:4.3.0-202510232142. Enabled the testing repo. Installed updates and qubes-core-agent-networking. No further customization.
Based on that template I created a VM with provides_network set and an assigned netvm (like sys-firewall). (Separately also tried no netvm and PCIe passthrough, like sys-net).
As expected, I get the mentioned error once, from the check for whether it needs to update the rules (that should probably be silenced). But then the chain as well as the dns-addr set is created as expected.
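The "should probably be silenced" part could look something like this (a sketch of my own, not the actual script code):

```shell
# Sketch only (not the actual script): probe for the chain quietly so
# the expected first-run miss does not spam the journal. On a real
# system this needs root; without nft it simply takes the else branch.
if nft list chain ip qubes dnat-dns >/dev/null 2>&1; then
  echo "dnat-dns chain already exists"
else
  echo "dnat-dns chain absent; it will be created"
fi
```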
$ dpkg -l qubes-core-agent\*
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-================================-================-============-=================================
ii qubes-core-agent 4.3.37-1+deb13u1 amd64 Qubes core agent
un qubes-core-agent-linux <none> <none> (no description available)
un qubes-core-agent-nautilus <none> <none> (no description available)
un qubes-core-agent-network-manager <none> <none> (no description available)
ii qubes-core-agent-networking 4.3.37-1+deb13u1 amd64 Networking support for Qubes VM
un qubes-core-agent-qrexec <none> <none> (no description available)
$ nft list ruleset
table ip qubes {
set downstream {
type ipv4_addr
}
set allowed {
type ifname . ipv4_addr
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip saddr @downstream counter packets 0 bytes 0 drop
}
chain antispoof {
iifname . ip saddr @allowed accept
counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
iifgroup 2 udp dport 68 counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 meta l4proto icmp accept
iif "lo" accept
iifgroup 2 counter packets 0 bytes 0 reject with icmp host-prohibited
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
}
chain custom-forward {
}
chain dnat-dns {
type nat hook prerouting priority dstnat; policy accept;
ip daddr 10.139.1.1 udp dport 53 dnat to 10.139.1.1
ip daddr 10.139.1.1 tcp dport 53 dnat to 10.139.1.1
ip daddr 10.139.1.2 udp dport 53 dnat to 10.139.1.2
ip daddr 10.139.1.2 tcp dport 53 dnat to 10.139.1.2
}
}
table ip6 qubes {
set downstream {
type ipv6_addr
}
set allowed {
type ifname . ipv6_addr
}
chain antispoof {
iifname . ip6 saddr @allowed accept
counter packets 0 bytes 0 drop
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
iifgroup 2 goto antispoof
ip6 saddr @downstream counter packets 0 bytes 0 drop
}
chain postrouting {
type nat hook postrouting priority srcnat; policy accept;
oifgroup 2 accept
oif "lo" accept
masquerade
}
chain _icmpv6 {
meta l4proto != ipv6-icmp counter packets 0 bytes 0 reject with icmpv6 admin-prohibited
icmpv6 type { nd-router-advert, nd-redirect } counter packets 0 bytes 0 drop
accept
}
chain input {
type filter hook input priority filter; policy drop;
jump custom-input
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
iifgroup 2 goto _icmpv6
iif "lo" accept
ip6 saddr fe80::/64 ip6 daddr fe80::/64 udp dport 546 accept
meta l4proto ipv6-icmp accept
counter packets 0 bytes 0
}
chain forward {
type filter hook forward priority filter; policy accept;
jump custom-forward
ct state invalid counter packets 0 bytes 0 drop
ct state established,related accept
oifgroup 2 counter packets 0 bytes 0 drop
}
chain custom-input {
}
chain custom-forward {
}
}
table ip qubes-firewall {
set dns-addr {
type ipv4_addr
elements = { 10.139.1.1, 10.139.1.2 }
}
chain qubes-forward {
}
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iifname != "vif*" accept
jump qubes-forward
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
}
chain postrouting {
type filter hook postrouting priority raw; policy accept;
}
}
table ip6 qubes-firewall {
set dns-addr {
type ipv6_addr
}
chain qubes-forward {
}
chain forward {
type filter hook forward priority filter; policy drop;
ct state established,related accept
iifname != "vif*" accept
jump qubes-forward
}
chain prerouting {
type filter hook prerouting priority raw; policy accept;
}
chain postrouting {
type filter hook postrouting priority raw; policy accept;
}
}
$
Can you post the output from nft list ruleset and from cat /etc/resolv.conf after it fails?
I tried to check a few more things and found nothing. I have a read-only root; with a rw root there is no difference. Are you sure you do not have systemd-resolved? My resolv.conf is very simple:
# Generated by NetworkManager
search v6.vivacom.bg
nameserver 212.39.90.52
nameserver 212.39.90.53
nameserver fe80::1%wls6
The ruleset looks normal otherwise: https://pastebin.com/A5Dpcfkc
Here is my package list just in case: https://pastebin.com/gjm7Z8Tr
Are you sure you do not have systemd-resolved?
Yes. (See description above)
The ruleset looks normal otherwise: [...]
No, the qubes-firewall tables are missing. But is the service enabled in qvm-service? (The script checks for /run/qubes-service/qubes-firewall.)
@marmarek: What do you think?
- Not supported
- Create empty nft "table".
- Check for table existence?
Indeed, when I created all the regular qubes-firewall tables, the system went back to normal. The service was disabled (no idea why; it is some legacy code from abarinov@ )
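For anyone hitting this before a proper fix lands, a minimal sketch of that workaround, mirroring the table/set structure in the ruleset dump earlier in the thread (my reconstruction, run as root in the netvm):

```shell
# Sketch of the manual workaround (run as root in the netvm): recreate
# empty qubes-firewall tables with the dns-addr sets, mirroring the
# structure shown in the working ruleset dump above.
nft -f - <<'EOF'
table ip qubes-firewall {
	set dns-addr {
		type ipv4_addr
	}
}
table ip6 qubes-firewall {
	set dns-addr {
		type ipv6_addr
	}
}
EOF
```

If the qubes-firewall service was merely disabled, re-enabling it from dom0 (e.g. `qvm-service <netvm> qubes-firewall on`) and restarting the qube should be the cleaner fix, since the service then maintains these tables itself.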
It's weird to end up in this situation. But thinking about it, maybe it should be handled by notifying the qubes-firewall service (a signal? reload? restart?) instead of adjusting the firewall directly. That way, DNS names would be re-resolved, which IMO makes sense if you change DNS servers.