dnf-plugins-extras
snapper: Set userdata to important=yes
With snapper you can set --userdata important=yes when creating a snapshot, which makes automatic cleanups treat it more conservatively, on the assumption that important snapshots are fewer and are counted separately. By default:
NUMBER_LIMIT="50"
NUMBER_LIMIT_IMPORTANT="10"
meaning that, for snapshots assigned the "number" cleanup algorithm, snapper keeps the last 50 normal snapshots and the last 10 important snapshots. So even if the 10th newest important snapshot is older than the 50th newest normal snapshot, it is preserved.
On openSUSE, zypper transactions are marked important, but snapper-boot.timer snapshots and YaST commands (which create pre-post snapshots) are not. It seems prudent to mimic this and mark DNF snapshots important. Even though we don't have YaST, we do have snapper-boot.timer, and we do have snapper create --command, which can wrap any command in a pre-post snapshot. It would be useful for DNF upgrades to be preserved more conservatively than snapshots from those other sources.
I'm not clear on what happens to "important" snapshots that are older than NUMBER_LIMIT_IMPORTANT but younger than NUMBER_LIMIT; do they get deleted before unimportant snapshots or do they get demoted to unimportant and kept until that limit is reached? Either way the idea seems to be that important snapshots are much rarer, and so take longer to reach the limits. That's understandable on openSUSE which wraps every YaST command in a pre-post snapshot, but it's less likely to be the case on a Fedora system unless you use snapper create --cleanup-algorithm number --command {cmd} liberally. But only by marking DNF snapshots important do we enable this use case, so it could be worth it. You can also adjust NUMBER_LIMIT_IMPORTANT if you find that only keeping 10 as opposed to 50 DNF snapshots isn't enough for you. The main point is that they're kept up to that limit regardless of how many other "number" snapshots you create for other purposes.
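To make the limit interaction concrete, here is a toy model of the "number" cleanup under one possible interpretation: important and normal snapshots are pruned against separate limits, and important snapshots past NUMBER_LIMIT_IMPORTANT are deleted rather than demoted (the snapper docs would need to confirm which reading is right):

```python
def number_cleanup(snapshots, limit=50, limit_important=10):
    """Toy model of snapper's "number" cleanup, NOT snapper code.
    snapshots: list of (id, important) tuples, ordered oldest-first.
    Keeps only the newest `limit` normal and `limit_important` important ones."""
    normal = [s for s in snapshots if not s[1]]
    important = [s for s in snapshots if s[1]]
    # Trim each list independently, keeping the newest entries of each kind.
    keep = set(normal[-limit:]) | set(important[-limit_important:])
    return [s for s in snapshots if s in keep]
```

Under this reading an important snapshot can outlive many younger normal snapshots, since the two lists are trimmed independently.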
Oh and apparently openSUSE changes the default limits:
NUMBER_LIMIT="2-10"
NUMBER_LIMIT_IMPORTANT="4-10"
So they actually do keep more important snapshots than normal ones when constrained by quotas. The min-max range syntax means that at least min snapshots are kept, and up to max if there is enough space according to SPACE_LIMIT.
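For illustration, a hypothetical helper (not snapper code) that parses both forms of the value would look like this:

```python
def parse_number_limit(value):
    """Parse a snapper NUMBER_LIMIT-style value.
    "50" -> (50, 50); "2-10" -> (2, 10).
    The first number is the guaranteed minimum kept; the second is the
    maximum kept while SPACE_LIMIT still has room."""
    if "-" in value:
        lo, hi = value.split("-", 1)
        return int(lo), int(hi)
    n = int(value)
    return n, n
```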
This of course is the wrong place to discuss possible changes to the snapper default config template, but I thought it might be relevant to the consideration of whether to mark DNF snapshots important=yes.
Actually looking at the sources for the zypper snapper plugin, it seems they define importance in a configuration file based on pattern matching against affected packages:
<?xml version="1.0" encoding="utf-8"?>
<snapper-zypp-plugin-conf>
  <!-- List of solvables. If the zypp commit transaction includes any solvable
       listed below, snapper will create pre and post snapshots. The match
       attribute defines whether the pattern is a Unix shell-style wildcard
       ("w") or a Python regular expression ("re"). If any of the matching
       solvables is marked as important, the snapshots are also marked as
       important. -->
  <solvables>
    <solvable match="w" important="true">kernel-*</solvable>
    <solvable match="w" important="true">dracut</solvable>
    <solvable match="w" important="true">glibc</solvable>
    <solvable match="w" important="true">systemd*</solvable>
    <solvable match="w" important="true">udev</solvable>
    <solvable match="w">*</solvable>
  </solvables>
</snapper-zypp-plugin-conf>
When I tried a zypper install epiphany for testing purposes, it resulted in a pre-post snapshot marked important=no.
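My reading of that matching logic, as a standalone sketch (this is not the actual plugin code; it only mirrors the rules described in the XML comment above, and the regex semantics for match="re" are a guess):

```python
import fnmatch
import re
import xml.etree.ElementTree as ET

# Trimmed-down example config, same shape as the snapper-zypp-plugin one above.
CONF = """<snapper-zypp-plugin-conf>
  <solvables>
    <solvable match="w" important="true">kernel-*</solvable>
    <solvable match="w" important="true">glibc</solvable>
    <solvable match="w">*</solvable>
  </solvables>
</snapper-zypp-plugin-conf>"""

def snapshot_decision(xml_text, pkg_names):
    """Return (make_snapshots, important) for a transaction touching pkg_names."""
    make = important = False
    for sol in ET.fromstring(xml_text).find("solvables"):
        pattern = sol.text
        if sol.get("match") == "w":
            hit = any(fnmatch.fnmatch(n, pattern) for n in pkg_names)
        else:  # "re": Python regular expression; assuming full-name matching
            hit = any(re.fullmatch(pattern, n) for n in pkg_names)
        if hit:
            make = True
            important = important or sol.get("important") == "true"
    return make, important
```

With this reading, snapshot_decision(CONF, ["epiphany"]) hits only the catch-all "*" solvable, which matches the important=no snapshot I observed.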
Maybe we don't need all of this feature copied, but it might be nice to have some distinction of importance between snapshots.
How about an /etc/dnf/plugins/snapper.conf where sections are provides wildcards and options are passed directly as the userdata dict to snapper? Then we translate the above thus:
[kernel-*]
important=yes
[dracut]
important=yes
[glibc]
important=yes
[systemd*]
important=yes
[udev]
important=yes
[*]
important=no
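A minimal sketch of how the plugin could consume such a file, assuming first-matching-section-wins (so [*] must come last). The helper name and inline config are illustrative, not real plugin code:

```python
import configparser
import fnmatch

# Inline stand-in for the proposed /etc/dnf/plugins/snapper.conf.
CONF_TEXT = """
[kernel-*]
important=yes
[*]
important=no
"""

def userdata_for(pkg_names, conf_text=CONF_TEXT):
    """Return the userdata dict for the first section whose wildcard
    matches any package name, or {} if nothing matches."""
    conf = configparser.ConfigParser()
    conf.read_string(conf_text)
    for section in conf.sections():  # sections() preserves file order
        if fnmatch.filter(pkg_names, section):
            return dict(conf.items(section, raw=True))
    return {}
```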
I could probably code this up myself, but I don't really know how you're supposed to actually replace the system dnf plugin code and test it... Deleting the __pycache__ and editing the installed snapper.py doesn't seem right. Any pointers?
Something like this should do it, but I haven't tested it:
import fnmatch

def set_userdata(self):
    self.userdata = {}
    conf = Snapper.read_config(self.base.conf)
    pkgs = self.base.transaction.install_set | self.base.transaction.remove_set
    for section in conf.sections():
        for pkg in pkgs:
            # str() because pkg.provides holds hawkey.Reldep objects,
            # while fnmatch.filter expects strings
            if fnmatch.filter((str(p) for p in pkg.provides), section):
                self.userdata = dict(conf.items(section, raw=True))
                return
You'd then call this in pre_transaction and pass self.userdata to CreatePreSnapshot and later CreatePostSnapshot instead of the empty dict.
Alternatively fnmatch.fnmatch(pkg.name, section) to only match against the package name. I don't know how hawkey.Reldep stringifies or how to test any of this, or if it's slow to pattern match against all provides in a transaction. Better still, skip the second for loop and do fnmatch.filter((pkg.name for pkg in pkgs), section) if you're matching against package names.
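For the difference between the two matching approaches, a quick standalone check (the package names are made up):

```python
import fnmatch

names = ["kernel-core", "systemd-udev", "glibc"]

# fnmatch.filter returns every match; truthiness answers "did anything match?"
assert fnmatch.filter(names, "kernel-*") == ["kernel-core"]

# fnmatch.fnmatch tests a single name against a pattern.
assert fnmatch.fnmatch("systemd-udev", "systemd*")

# filter accepts a generator, so no intermediate list is needed, but it
# still consumes the whole iterable even after the first hit.
assert fnmatch.filter((n for n in names), "glibc") == ["glibc"]
```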
To mimic the libzypp behavior you also need to make the snapshots conditional on the configuration matching.
import fnmatch

def __init__(self, base, cli):
    self.base = base
    self.description = " ".join(sys.argv)
    self.userdata = {}
    [...]

def pre_transaction(self):
    if not len(self.base.transaction):
        return
    conf = Snapper.read_config(self.base.conf)
    pkgs = self.base.transaction.install_set | self.base.transaction.remove_set
    for section in conf.sections():
        if fnmatch.filter((pkg.name for pkg in pkgs), section):
            self.userdata = dict(conf.items(section, raw=True))
            break
    else:
        return
    [...]
[...]
...Actually filter will test all packages even if the first one matches, so back to nested loops:
for section in conf.sections():
    for pkg in pkgs:
        if fnmatch.fnmatch(pkg.name, section):
            self.userdata = dict(conf.items(section, raw=True))
            break
    else:
        continue
    break
else:
    return
break-else-continue-break is the break-out-of-nested-loop pattern. Optionally move it to a method for readability:
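The pattern in isolation, so it can be sanity-checked outside the plugin (a generic example, not plugin code):

```python
import fnmatch

def first_matching_section(sections, names):
    """Return the first section whose wildcard matches any name, or None,
    using the break-else-continue-break nested-loop escape."""
    for section in sections:
        for name in names:
            if fnmatch.fnmatch(name, section):
                break       # inner hit: stop scanning names...
        else:
            continue        # ...no hit for this section: try the next one
        break               # ...and propagate the inner break outward
    else:
        return None         # no section matched at all
    return section
```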
def set_userdata(self):
    conf = Snapper.read_config(self.base.conf)
    pkgs = self.base.transaction.install_set | self.base.transaction.remove_set
    for section in conf.sections():
        for pkg in pkgs:
            if fnmatch.fnmatch(pkg.name, section):
                self.userdata = dict(conf.items(section, raw=True))
                return True
    return False

def pre_transaction(self):
    if not self.set_userdata():
        return
    [...]
We can skip the len(self.base.transaction) check because if the transaction is empty, so is pkgs. However, that means loading the config even for empty transactions; I don't know if that matters.
Hello, the dnf team is doing a review of old and forgotten issues. Apologies for the enormous delay.
We would like to take a look at your contribution. Are you still interested in this fix? If so, note that:
- this fix is not on the dnf team's priority list, so a maintainer outside the dnf team will be needed
- this fix will be difficult to test, as it is mainly aimed at SUSE
With that said, we are willing to provide help to accept contributions for this plugin.