flintlock
Dir clean up on delete
I don't know whether this was intentional, but where deletes previously cleared the dir structure back up to the namespace, we now leave the name dir behind as well.
Close this if it's meant to be this way (it does look weird with dead capmvm names lying around).
Before delete:
/var/lib/flintlock/vm/<namespace>/<name>/<uid>
After delete:
/var/lib/flintlock/vm/<namespace>/<name>
Ideally we would remove back to
/var/lib/flintlock/vm/<namespace>
if all identically named mvms in that namespace are removed, and
/var/lib/flintlock/vm
if all mvms in that namespace are removed
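Something like this hypothetical helper could do that walk-back (removeEmptyParents and the hard-coded root are made up for illustration, not actual flintlock code). Because os.Remove, unlike os.RemoveAll, refuses to delete a non-empty directory, it naturally stops at the first level that still holds microVMs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// removeEmptyParents walks up from dir, removing each directory until it
// reaches root or hits a directory that still has content. os.Remove (unlike
// os.RemoveAll) refuses to delete a non-empty directory, so namespaces and
// names that still hold microVMs are left untouched.
func removeEmptyParents(dir, root string) {
	for {
		rel, err := filepath.Rel(root, dir)
		if err != nil || rel == "." || strings.HasPrefix(rel, "..") {
			return // reached (or stepped outside) the vm root, stop
		}
		if err := os.Remove(dir); err != nil && !os.IsNotExist(err) {
			return // most likely "directory not empty", so stop here
		}
		dir = filepath.Dir(dir)
	}
}

func main() {
	// After deleting /var/lib/flintlock/vm/<namespace>/<name>/<uid>, collapse
	// <name> and then <namespace> if they are now empty.
	const root = "/var/lib/flintlock/vm"
	removeEmptyParents(filepath.Join(root, "default", "mvm1"), root)
	fmt.Println("cleanup done")
}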
Closing, this will be part of #90
I do have a lot of old directories lying around on my Liquid Metal host. I need to clean these up manually like this:
$ find /var/lib/flintlock/vm/default/ -type d -empty -delete
It would be nice if that were not the case.
At a large scale it can be problematic without a locking mechanism. For example: we find /var/lib/flintlock/vm/<namespace>/ is empty and try to delete it, but at the same time another VM is being reconciled and would like to use that directory, and its ShouldDo function already said it doesn't have to create that directory.
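To make that race concrete, here is a stripped-down sketch of the ShouldDo/Do shape (the Procedure interface and createDirStep below are simplified assumptions, not flintlock's real types): ShouldDo returns false because the directory already exists, a concurrent cleanup then removes the "empty" directory, and the reconcile carries on believing the directory is still there.

package main

import (
	"context"
	"fmt"
	"os"
)

// Procedure is an assumed, simplified shape of a plan step; flintlock's real
// interface may differ.
type Procedure interface {
	ShouldDo(ctx context.Context) (bool, error)
	Do(ctx context.Context) error
}

// createDirStep ensures the microVM's directory exists.
type createDirStep struct {
	dir string
}

func (s *createDirStep) ShouldDo(_ context.Context) (bool, error) {
	_, err := os.Stat(s.dir)
	if os.IsNotExist(err) {
		return true, nil // directory missing, the step should run
	}
	return false, err // directory already there (or the stat failed)
}

func (s *createDirStep) Do(_ context.Context) error {
	return os.MkdirAll(s.dir, 0o755)
}

func main() {
	ctx := context.Background()
	step := &createDirStep{dir: "/tmp/flintlock-demo/ns1/mvm1"}

	shouldDo, err := step.ShouldDo(ctx)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}

	// The race: if the directory existed, shouldDo is false and the create is
	// skipped. A cleanup goroutine deleting "empty" directories right here
	// would leave the rest of the reconcile without its directory.
	if shouldDo {
		if err := step.Do(ctx); err != nil {
			fmt.Println("create failed:", err)
		}
	}
}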
Or we can live with that. I think the simplest solution, if we care about FS garbage (I assume we would be happier if we cleaned up containerd as well), is to add a simple check in deleteDirectory -> Do(ctx) to walk back up until it hits flintlock's vm root directory. This approach is ugly: right now steps have no concept of the application or its configuration, they only do their own job without extra context or knowledge of other steps, and that's a good thing.
We also can't just add all parent directories to the delete step. In theory it would not delete a directory that isn't empty, because the pre-check ShouldDo would tell the system "hey, there are VMs in that directory", but if we mess up one check it can delete other VMs accidentally (as it uses RemoveAll).
And that leads back to my original proposal to write a background goroutine or cron job that does a clean-up periodically.
👍 thanks for recapping @yitsushi, i knew there was "stuff" we had to consider for this 😁
I think a background routine which periodically locks the filesystem against creates, recursively removes anything empty, then unlocks would be fine for now. The remove operations won't be massively slow, so any parallel incoming calls to create should be fine waiting for that second.
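A rough sketch of what that periodic routine could look like, assuming a lock shared with whatever creates directories under the vm root (all names here are hypothetical, not flintlock's):

package main

import (
	"os"
	"path/filepath"
	"sync"
	"time"
)

const vmRoot = "/var/lib/flintlock/vm"

// dirLock stands in for whatever lock the create path would also take;
// it is an assumption for this sketch, flintlock has no such lock today.
var dirLock sync.Mutex

// cleanEmptyDirs removes empty directories under vmRoot. os.Remove fails on
// non-empty directories, so a few passes are enough to collapse an empty
// <namespace>/<name> chain while leaving anything with content alone.
func cleanEmptyDirs() {
	dirLock.Lock()
	defer dirLock.Unlock()

	for pass := 0; pass < 3; pass++ {
		_ = filepath.WalkDir(vmRoot, func(path string, d os.DirEntry, err error) error {
			if err != nil || path == vmRoot || !d.IsDir() {
				return nil
			}
			_ = os.Remove(path) // error ignored: non-empty dirs simply stay
			return nil
		})
	}
}

func main() {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()

	// In flintlock this loop would live in its own goroutine next to the
	// gRPC server; here it simply runs forever for the sake of the sketch.
	for range ticker.C {
		cleanEmptyDirs()
	}
}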
its ShouldDo function said it doesn't have to create that directory.
that check is weird tbh. doesn't cost anything to create over an existing one
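For what it's worth, os.MkdirAll already does nothing and returns nil when the path is an existing directory, so unconditionally creating would be safe:

package main

import (
	"fmt"
	"os"
)

func main() {
	dir := "/tmp/flintlock-demo/ns1/mvm1"

	// First call creates the whole chain.
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println("create failed:", err)
		return
	}

	// Second call over the existing directory does nothing and returns nil,
	// so always creating instead of checking first costs essentially nothing.
	if err := os.MkdirAll(dir, 0o755); err != nil {
		fmt.Println("unexpected error on existing dir:", err)
		return
	}
	fmt.Println("both calls succeeded")
}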
I'm creating and deleting clusters over and over again in my demo environment, and I need to clean up from time to time:
$ tree -d 1 /var/lib/flintlock/vm/default/
1 [error opening dir]
/var/lib/flintlock/vm/default/
├── dev15-control-plane-b7fzn
├── dev16-control-plane-m8zhd
├── dev20-control-plane-m76bn
├── dev21-control-plane-plsdk
├── dev25-md-0-l4k8d
├── dev25-md-0-vgzrp
├── dev26-md-0-855rp
│   └── 01G7EMFEMWWXGZTWJ6WJ7WVN46
└── dev26-md-0-k8mwl
    └── 01G7EMFEGAXAAY4E03K7TJDSH6
This issue is stale because it has been open 60 days with no activity.
Still valid.
This issue was closed because it has been stalled for 365 days with no activity.