
Dir clean up on delete

Callisto13 opened this issue 3 years ago • 9 comments

I don't know whether this was intentional, but where deletes previously cleared the dir structure back up to the namespace, we now leave the name dir behind as well.

Close if this is meant to be (it does look weird with dead capmvm names lying around)

Before delete:

/var/lib/flintlock/vm/<namespace>/<name>/<uid>

After delete:

/var/lib/flintlock/vm/<namespace>/<name>

Ideally we would remove back to

/var/lib/flintlock/vm/<namespace>

if all identically named mvms in that namespace are removed, and

/var/lib/flintlock/vm

if all mvms in that namespace are removed

Callisto13 avatar Jan 26 '22 16:01 Callisto13

Closing, this will be part of #90

yitsushi avatar Feb 03 '22 13:02 yitsushi

I do have a lot of old directories lying around on my Liquid Metal host. I need to clean these up manually like this: $ find /var/lib/flintlock/vm/default/ -type d -empty -delete

It would be nice if that were not the case.

LutzLange avatar Jul 20 '22 14:07 LutzLange

At large scale this can be problematic without a locking mechanism. For example: we find /var/lib/flintlock/vm/<namespace>/ is empty and try to delete it, but at the same time another VM is being reconciled and wants to use that directory, and its ShouldDo function already said we don't have to create that directory.

Or we can live with that. I think the simplest solution, if we care about FS garbage (I assume we would be happier if we cleaned up containerd as well), is to add a simple check in deleteDirectory -> Do(ctx) to walk back up until it hits flintlock's vm root directory. This approach is ugly, and right now steps have no concept of the application or application configuration: they do their one job without extra context / knowledge of other steps. And that's a good thing.
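For illustration, a minimal Go sketch of that walk-back (pruneEmptyParents and its call site are hypothetical, not flintlock's actual step code, and it deliberately ignores the locking problem described above):

```go
package main

import (
	"os"
	"path/filepath"
)

// pruneEmptyParents removes empty directories from dir upwards,
// stopping (exclusive) at root. os.Remove fails on non-empty
// directories, so unlike RemoveAll it can never take live VMs with it.
func pruneEmptyParents(dir, root string) error {
	for dir != root && dir != string(filepath.Separator) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		if len(entries) > 0 {
			return nil // directory still in use, stop walking up
		}
		if err := os.Remove(dir); err != nil {
			return err
		}
		dir = filepath.Dir(dir)
	}
	return nil
}

func main() {
	// e.g. after deleting /var/lib/flintlock/vm/ns/name/uid,
	// prune "name" and "ns" if they are now empty.
	_ = pruneEmptyParents("/var/lib/flintlock/vm/ns/name", "/var/lib/flintlock/vm")
}
```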

We can't just add every parent directory to the delete step: it won't delete a directory that's not empty, because the pre-check ShouldDo will tell the system "hey, there are VMs in that directory", and if we mess up one check it can delete other VMs accidentally (as it uses RemoveAll).

And that leads back to my original proposal to write a background goroutine or cron job that does a clean-up periodically.

yitsushi avatar Jul 20 '22 15:07 yitsushi

👍 thanks for recapping @yitsushi, i knew there was "stuff" we had to consider for this 😁

Callisto13 avatar Jul 20 '22 15:07 Callisto13

I think a background routine which periodically locks the filesystem against creates, recursively removes anything empty, then unlocks would be fine for now. The remove operations won't be massively slow, so any parallel incoming calls to create should be fine waiting for that second.
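Something like this rough sketch, assuming a single process-wide lock that the directory-creating steps would also take before MkdirAll (createMu, sweepEmptyDirs, and the interval are all illustrative names, not flintlock code):

```go
package main

import (
	"os"
	"path/filepath"
	"sync"
	"time"
)

// createMu is the lock the directory-creating steps would also hold.
var createMu sync.Mutex

// sweepEmptyDirs deletes every empty directory under root, deepest
// first, so a name dir emptied in this pass frees its namespace dir
// for the same pass. The root itself is left alone.
func sweepEmptyDirs(root string) {
	createMu.Lock()
	defer createMu.Unlock()

	var dirs []string
	filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
		if err == nil && d.IsDir() && path != root {
			dirs = append(dirs, path)
		}
		return nil
	})
	// WalkDir yields parents before children; reverse to go deepest first.
	for i := len(dirs) - 1; i >= 0; i-- {
		// os.Remove only succeeds on empty directories, so non-empty
		// ones (live VMs) are skipped.
		_ = os.Remove(dirs[i])
	}
}

func main() {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()
	for range ticker.C {
		sweepEmptyDirs("/var/lib/flintlock/vm")
	}
}
```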

> their ShouldDo function said we don't have to create that directory.

that check is weird tbh. doesn't cost anything to create over an existing one

Callisto13 avatar Jul 20 '22 15:07 Callisto13

I'm creating and deleting clusters over and over again in my demo environment, and I need to clean up from time to time:

$ tree -d 1 /var/lib/flintlock/vm/default/
1 [error opening dir]
/var/lib/flintlock/vm/default/
├── dev15-control-plane-b7fzn
├── dev16-control-plane-m8zhd
├── dev20-control-plane-m76bn
├── dev21-control-plane-plsdk
├── dev25-md-0-l4k8d
├── dev25-md-0-vgzrp
├── dev26-md-0-855rp
│   └── 01G7EMFEMWWXGZTWJ6WJ7WVN46
└── dev26-md-0-k8mwl
    └── 01G7EMFEGAXAAY4E03K7TJDSH6

LutzLange avatar Jul 21 '22 07:07 LutzLange

This issue is stale because it has been open 60 days with no activity.

github-actions[bot] avatar May 19 '23 07:05 github-actions[bot]

Still valid.

yitsushi avatar May 19 '23 11:05 yitsushi

This issue was closed because it has been stalled for 365 days with no activity.

github-actions[bot] avatar May 19 '24 07:05 github-actions[bot]