
Deduplicate Check and Prunes for the same backup repository

Open ccremer opened this issue 3 years ago • 12 comments

Summary

As K8up user
I want to deduplicate Jobs that target the same repository
So that exclusive Jobs are not run excessively

Context

Check and Prunes are Restic Jobs that need exclusive access to the backend repository: Only one job can effectively run at the same time. However, multiple backups can target the same Restic repository.

The operator should deduplicate prune jobs that are managed by a smart schedule. So, for example, if there are multiple Schedules with @daily-random prunes targeting the same S3 endpoint, the scheduler should only register one of them.

But if the prunes have explicit cron patterns like 5 4 * * * and 5 5 * * * they should NOT be deduplicated. This will ensure maximum flexibility if for some reason a user explicitly wants multiple prune runs.

Out of Scope

Further links

  • #116

Acceptance criteria

  • Given 2 Schedules with either a Check or Prune job, when both specify the same randomized predefined cron syntax and target the same backup repository, then ignore the duplicate job of the same type that also has the same schedule and backend.
  • Given 2 Schedules with jobs that are already deduplicated, when changing the cron schedule of one of the jobs, then remove the deduplication and schedule both jobs separately.
  • Given 2 Schedules with jobs that are already deduplicated, when changing the backend of a Schedule, then remove the deduplication and schedule jobs separately.

Implementation Ideas

  • Schedules are added to the cron library at the time when Schedules get reconciled. At this point we could apply the deduplication logic so that the duplicate job does not get added (see the sketch below).
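
A minimal sketch of that check, assuming the internal scheduler wraps a cron library with an AddFunc-style API such as robfig/cron/v3; the dedup key construction and the Scheduler type are hypothetical, not K8up's actual code:

```go
// Sketch only: skip registering a cron entry if an equivalent one already exists.
package scheduler

import "github.com/robfig/cron/v3"

type Scheduler struct {
	cron       *cron.Cron
	registered map[string]cron.EntryID // key: jobType + "|" + backend + "|" + schedule spec
}

// AddDeduplicated only adds the job if no equivalent entry is registered yet.
func (s *Scheduler) AddDeduplicated(key, spec string, run func()) error {
	if _, ok := s.registered[key]; ok {
		return nil // another Schedule already covers this repository and job type
	}
	id, err := s.cron.AddFunc(spec, run)
	if err != nil {
		return err
	}
	s.registered[key] = id
	return nil
}
```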

ccremer avatar Dec 18 '20 17:12 ccremer

Some ideas to spin:

  • Deduplicate when scheduling, i.e. before adding a new entry to the internal cron
  • Deduplicate when creating job objects,
    • by checking an internal reference
    • by querying the k8s controller whether a similar object was already scheduled
    • or a combination of these.

cimnine avatar Jan 21 '21 13:01 cimnine

This ain't exactly easy. I started implementing, but soon discovered that

  • we need to check for deduplication first in all other Schedule CRs, as only those contain the @... schedule definitions.
  • with those other schedules, we also need to check whether the backend is the "same" -> equality comparison implementations are necessary (see the sketch below)
  • then, we don't know whether the Checks or Prunes of the other schedules have already been deduplicated, so we also need to verify that they're not already in the cron scheduler.
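
A rough idea of such an equality comparison, with hypothetical types; the real K8up backend spec has more fields and more backend kinds:

```go
// Hypothetical sketch of a backend comparison; not K8up's actual backend types.
package backend

// S3Backend holds the fields that identify a Restic repository on S3.
type S3Backend struct {
	Endpoint string
	Bucket   string
}

// String builds a canonical representation, usable both for comparison and as a dedup key.
func (b S3Backend) String() string {
	return "s3:" + b.Endpoint + "/" + b.Bucket
}

// IsSameRepository reports whether two backends point to the same Restic repository.
func (b S3Backend) IsSameRepository(other S3Backend) bool {
	return b.String() == other.String()
}
```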

So, back to the drawing board: [diagram]

The graphic is in the PR for editing, though it's not intended to stay there in that form.

ccremer avatar Jan 22 '21 15:01 ccremer

One thought I just had:

If we do this, should we provide a stable deduplication for this? Example:

A new scheduleB is not registered because scheduleA is already registered. Both contain a prune with @weekly-random. ScheduleA's random prune was just triggered for this week. We restart the operator and now suddenly scheduleB is registered but scheduleA is not, because scheduleB happened to be handled first. ScheduleB's @weekly-random would already trigger the next day. Now we have a schedule that didn't run in the specified interval.

To guarantee the same interval regardless of operator restarts, it would need a way to know which schedule it should prefer.

Kidswiss avatar Jan 22 '21 15:01 Kidswiss

To guarantee the same interval regardless of operator restarts, it would need a way to know which schedule it should prefer.

Could we sort them by date of creation?

It could work something like this:

  1. Read 1st Schedule CR that arrives (i.e. that is reconciled)
    • Just schedule it in cron.
    • Add it to a new internal data structure; remember the fields on which we like to deduplicate, plus creation date for sorting.
  2. Read 2nd Schedule CR that arrives
    • Check the internal data structure whether we've seen an equal schedule before
      • If yes: Check if it's older than the other(s)
        • If yes: Remove the previously oldest schedule from cron, then add this one.
        • If no: Ignore.
      • If no: Schedule it.
    • In any case: Add it to the internal data structure as well.
  3. Repeat for every next Schedule CR.

Now, if a Schedule is deleted:

  • Check the internal data structure to see whether it is the oldest of the available Schedules.
    • If yes:
      • Remove from cron.
      • Remove it from the internal data structure.
      • Schedule the now oldest Schedule CR.
    • If no:
      • Just remove from internal data structure.
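
A rough sketch of that bookkeeping in Go, covering both lists above. All types and names are hypothetical; the important part is keeping the creation timestamp so that the oldest Schedule deterministically owns the cron entry:

```go
// Illustrative sketch of the deduplication bookkeeping described above;
// none of these types exist in K8up, they only demonstrate the idea.
package scheduler

import "time"

// dedupKey identifies jobs that should be deduplicated:
// same job type, same backend, same @...-random spec.
type dedupKey struct{ JobType, Backend, Spec string }

// scheduleRef remembers where a Schedule CR came from and when it was created,
// so that the oldest CR deterministically owns the cron entry.
type scheduleRef struct {
	Name, Namespace string
	Created         time.Time
}

type dedupEntry struct {
	active scheduleRef   // the Schedule currently registered in cron
	seen   []scheduleRef // every Schedule that maps to this key
}

type registry struct {
	entries  map[dedupKey]*dedupEntry
	schedule func(dedupKey, scheduleRef) // adds the job to the internal cron
	remove   func(dedupKey, scheduleRef) // removes the job from the internal cron
}

// onAdd handles a reconciled Schedule CR containing a @...-random Check or Prune.
func (r *registry) onAdd(key dedupKey, ref scheduleRef) {
	e, ok := r.entries[key]
	if !ok {
		r.schedule(key, ref)
		r.entries[key] = &dedupEntry{active: ref, seen: []scheduleRef{ref}}
		return
	}
	e.seen = append(e.seen, ref)
	if ref.Created.Before(e.active.Created) { // an older CR wins for stability
		r.remove(key, e.active)
		r.schedule(key, ref)
		e.active = ref
	}
}

// onDelete handles the removal of a Schedule CR.
func (r *registry) onDelete(key dedupKey, ref scheduleRef) {
	e, ok := r.entries[key]
	if !ok {
		return
	}
	kept := e.seen[:0] // drop ref from the list of seen Schedules
	for _, s := range e.seen {
		if s.Name != ref.Name || s.Namespace != ref.Namespace {
			kept = append(kept, s)
		}
	}
	e.seen = kept
	if len(e.seen) == 0 {
		r.remove(key, ref)
		delete(r.entries, key)
		return
	}
	if e.active.Name == ref.Name && e.active.Namespace == ref.Namespace {
		// the owning Schedule was deleted: elect the now-oldest remaining one
		oldest := e.seen[0]
		for _, s := range e.seen[1:] {
			if s.Created.Before(oldest.Created) {
				oldest = s
			}
		}
		r.remove(key, ref)
		r.schedule(key, oldest)
		e.active = oldest
	}
}
```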

cimnine avatar Jan 22 '21 15:01 cimnine

I had another idea over the weekend:

We could hash the repository string and the type and use that as the randomness seed (https://golang.org/pkg/math/rand/#Seed). So each type and repo combination will generate the same "random" time. This way we only have to track if at least one of the jobs is registered for a given type/repo combo.

By hashing the values before using them as the seed, it should generate enough spread between the schedules.
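
Roughly like this, as a sketch only; the helper name and key format are made up:

```go
// Sketch: derive a deterministic "random" daily time from the repository string
// and job type. The same inputs always yield the same minute and hour.
package schedule

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

func randomDailyTime(jobType, repository string) string {
	h := fnv.New64a()
	h.Write([]byte(jobType + "|" + repository))
	r := rand.New(rand.NewSource(int64(h.Sum64())))
	return fmt.Sprintf("%d %d * * *", r.Intn(60), r.Intn(24))
}
```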

Kidswiss avatar Jan 25 '21 07:01 Kidswiss

We could hash the repository string and the type and use that as the randomness seed (https://golang.org/pkg/math/rand/#Seed). So each type and repo combination will generate the same "random" time. This way we only have to track if at least one of the jobs is registered for a given type/repo combo.

It sounds like rand should not be used then, but rather a number should be derived directly from the hash. For one, because this relies on the underlying assumption that the implementation of rand does not change and stays stable. It's also not obvious to rely on rand to produce a predictable number ;)
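
For illustration, the same sketch without math/rand, deriving minute and hour arithmetically from the hash (again with made-up names):

```go
// Sketch: no math/rand involved, so the result does not depend on rand's
// implementation staying stable across Go versions.
package schedule

import (
	"fmt"
	"hash/fnv"
)

func stableDailyTime(jobType, repository string) string {
	h := fnv.New64a()
	h.Write([]byte(jobType + "|" + repository))
	sum := h.Sum64()
	return fmt.Sprintf("%d %d * * *", sum%60, (sum/60)%24)
}
```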

My main concern with this solution: it's anything but obvious to understand. I.e., it's a very implicit solution, and in my experience implicit solutions are hard for the next developer to understand right away.

cimnine avatar Jan 25 '21 08:01 cimnine

Sure, we can use something else to generate the times; rand was just a suggestion.

But I feel like we'd have to get the randomness down for the same types and repos. Your suggestion could still lead to scattered execution times if there are a lot of namespace changes on a cluster.

Kidswiss avatar Jan 25 '21 08:01 Kidswiss

Unpopular opinion: the more we try to solve this "stable across restarts" problem, the more I'm convinced we should get rid of any internal state altogether, e.g. replace the cron library with K8s CronJobs etc. This is already the 2nd or 3rd time we try to solve this problem with special mechanisms :see_no_evil: Such an attempt of course would not simplify the deduplication logic much, but if we can find a "stateless" algorithm, we can leave state information to the Kubernetes API/etcd and not need to worry about restarts.

In a private project/operator I'm facing exactly the same problem: handling scheduling and restarts. I have found a working solution; we can discuss it if you're interested.

At the moment I'm a bit hesitant to come up with complicated "solutions" that solve deduplication across restarts when using internal state. Maybe we should limit the deduplication feature to @daily-random only, so that a missed schedule due to K8up restart isn't the end of the world.

ccremer avatar Jan 25 '21 10:01 ccremer

If we implement the deduplication logic for @daily-random, it can also be used for the others, or not? I mean, the effort to implement it for one would probably be the same as implementing it for all.

I agree that switching to k8s native cron-jobs could help with things, but they may make other things more complicated.

I also agree that off-loading as much state as possible to k8s should be desired, but there are cases where I think having a small in-memory state could make sense. For example to reduce the amount of API queries.

I'm interested in hearing your solution for that issue.

Kidswiss avatar Jan 25 '21 11:01 Kidswiss

For example to reduce the amount of API queries.

With the switch to Operator SDK resp. controller-runtime, the client has a built-in read cache by default. Each GETted object lands in the cache and is automatically watched for changes. Repeated GETs for already retrieved objects don't even hit the API server anymore. It's actually harder to ignore the cache for certain object Kinds, should that be needed for whatever reason.
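
For illustration, this is roughly what a cached read looks like with controller-runtime; the Schedule type and its import path are only assumptions here:

```go
// Sketch: the client obtained from the controller-runtime manager serves reads
// from a watch-backed cache; repeated Gets for the same object don't hit the API server.
package controllers

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"

	k8upv1 "github.com/vshn/k8up/api/v1" // hypothetical import path for the Schedule type
)

type ScheduleReconciler struct {
	client.Client // injected from the manager, cache-backed for reads
}

func (r *ScheduleReconciler) fetchSchedule(ctx context.Context, name, namespace string) (*k8upv1.Schedule, error) {
	schedule := &k8upv1.Schedule{}
	// Served from the cache once the object has been retrieved (and is watched) before.
	err := r.Get(ctx, types.NamespacedName{Name: name, Namespace: namespace}, schedule)
	return schedule, err
}
```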

So, as far as performance goes, I think it's worse when we try to maintain our own barely tested cache ;)

If we implement the deduplication logic for @daily-random, it can also be used for the others, or not? I mean, the effort to implement it for one would probably be the same as implementing it for all.

It depends on whether we also implement stable deduplication across restarts. If we decide to make it stable, we accept added complexity and reduced maintainability, whereas with ephemeral deduplication we can keep things simple at the cost of missed schedules, as you described.

ccremer avatar Jan 25 '21 11:01 ccremer

My personal opinion is that missed schedules are something the K8up operator should avoid as much as possible. Nobody wants a backup solution that may or may not trigger a job.

Kidswiss avatar Jan 25 '21 13:01 Kidswiss

Thanks for the good internal discussions :+1:

Here is the new proposal for how it could work: [diagram]

  • We add a new CRD, EffectiveSchedule (better name welcome), that replaces the effectiveSchedule status fields on the Schedule CR. This new CR, stored in the same namespace the operator runs in, acts as a persistent link holding the information needed to deduplicate Check and Prune.
  • This CR is created when a Schedule with a @...-random spec is reconciled. If the Schedule finds an EffectiveSchedule object that already has a back-reference to it via OwnerReference, it does nothing. Otherwise, it creates a new EffectiveSchedule with a randomized schedule and adds it to the internal cron scheduler.
  • When another Schedule gets reconciled that has the same schedule and same backend(s), the EffectiveSchedule gets another OwnerReference to the new Schedule, but it won't be added to the internal cron scheduler. That way, the duplicate schedule is deduplicated. If the Schedule that has got the Prune and Check jobs assigned is deleted, the next Schedule is elected to "master" the Check and Prune. If no Schedule is left in the list, Kubernetes automatically GCs the EffectiveSchedule.
  • The idea of this new CRD is to have an intermediary step before we can go to plain Kubernetes CronJobs in K8up v2. It is not meant to be a resource maintained by K8up end users, but purely by the operator. Thus it's regarded as an implementation change. It may be removed in K8up v2 or later if the relationships can be computed at runtime.
  • The new CRD should go into K8up v1.0, but the deduplication feature goes into v1.1.
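
To make this a bit more concrete, a hypothetical kubebuilder-style sketch of what the new type could look like; names and fields are placeholders, not the final API:

```go
// Hypothetical sketch of the proposed EffectiveSchedule CRD; not the final API.
package v1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// EffectiveScheduleSpec stores the resolved schedule for a deduplicated Check or Prune.
type EffectiveScheduleSpec struct {
	// JobType is the deduplicated job type, e.g. "check" or "prune".
	JobType string `json:"jobType,omitempty"`
	// GeneratedSchedule is the randomized cron spec resolved from the @...-random definition.
	GeneratedSchedule string `json:"generatedSchedule,omitempty"`
	// BackendString is a canonical representation of the backend (e.g. S3 endpoint + bucket),
	// used to decide whether two Schedules target the same repository.
	BackendString string `json:"backendString,omitempty"`
}

// +kubebuilder:object:root=true

// EffectiveSchedule lives in the operator's namespace and carries an OwnerReference
// (in ObjectMeta) to every Schedule that maps to it, as described in the proposal above.
type EffectiveSchedule struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec EffectiveScheduleSpec `json:"spec,omitempty"`
}
```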

ccremer avatar Jan 25 '21 16:01 ccremer