spin
OCI loader-related problems with SpinKube
During OCI loading (oci_loader), we copy the files into an assets directory.
This works fine with spin up, but with SpinKube this behavior has triggered a few issues, such as https://github.com/spinkube/containerd-shim-spin/issues/40, https://github.com/spinkube/containerd-shim-spin/issues/108, and https://github.com/spinkube/containerd-shim-spin/issues/123.
I talked briefly to @lann and it seems this was done to support the --allow-transient flag in spin up.
Is there any workaround we can apply for this? (Or maybe copy files over only if the --allow-transient flag is set.)
cc @vdice @radu-matei
it seems this was done to support the --allow-transient flag in spin up
Not exactly. The files are copied to create the directory structure to be mounted into the guest app. OCI layers start out as just content hashes with no paths; the loader correlates the hashes and paths to reconstruct the correct file tree.
There are alternatives to copying files such as using hard links. Hard links come with their own complexity and caveats, one of which is that they wouldn't work with --allow-transient.
The copying is needed because we use preopened_dir to map a host directory to the guest root directory, so that host directory needs to be constructed. I vaguely recall hearing that we could virtualise the file system, which would allow us (during load) to map guest file names individually to content-addressed files in the cache. (At least in cases where transient writes are not in play.) But I don't understand the implications or costs of that well enough to have a feeling for the practicalities: there's no simple preopened_file API like there is for preopened_dir, so I imagine it would involve implementing a large part of the WASI filesystem interface. Do wiser heads have any sense for the cost and pain?
any sense for the cost and pain?
Relatively high. We'd essentially need to fork/recreate a significant chunk of wasmtime-wasi.
@rajatjindal - reviewing this during issue triage, are you still experiencing this issue or have we found a workaround?
Hi @macolso, I have not spent a lot of time in this area lately, and given Lann's evaluation of the implementation cost, I think we can close this. Thank you.