reuse mounted AppImage?
It seems wasteful, in both time and memory, to mount the same read-only squashfs over and over again when one AppImage is launched from several separate processes. What would be the best way to reuse existing mountpoints?
I've thought about having AppRun log the mountpoint and the AppImage checksum somewhere in /tmp and reusing them when possible, but... it's a hack.
Having a fixed mountpoint (based on some hash) would also work, but IIRC there was another issue (due to non-relocatable apps) and you "didn't like it".
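For illustration, a minimal sketch of that hack in AppRun, assuming the runtime exports $APPIMAGE and that $OFFSET (the byte offset of the embedded squashfs) is known; the registry path and file layout are made up:

```sh
#!/bin/sh
# Hypothetical /tmp registry: map AppImage checksum -> existing mountpoint.
SUM="$(sha256sum "$APPIMAGE" | cut -d' ' -f1)"
REG="/tmp/appimage-mounts/$SUM"
mkdir -p /tmp/appimage-mounts

if [ -r "$REG" ] && mountpoint -q "$(cat "$REG")"; then
    MNT="$(cat "$REG")"                       # reuse the existing mount
else
    MNT="$(mktemp -d /tmp/.mount_XXXXXX)"
    squashfuse -o offset="$OFFSET" "$APPIMAGE" "$MNT"
    echo "$MNT" > "$REG"                      # record it for later callers
fi
```

Besides being a hack, this checksums the whole AppImage on every launch, and it still has to decide when it is safe to unmount, which is exactly where it gets ugly.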
Current use case: I'm building a Calibre multi-binary AppImage (i.e., AppRun figures out which binary to call using $ARGV0), so multiple Calibre tools (calibre, ebook-viewer, etc.) can run at the same time.
What would be the best way to reuse existing mountpoints?
That's a very complex task. Right now, the processes are independent of each other, which greatly benefits stability: each mount process keeps a file descriptor on the AppImage, and that keeps the AppImage "alive" on the filesystem. There's little to no risk of this system breaking in any way (the runtime runs very stably, thankfully).
What you suggest would require introducing a sort of "server process" that accepts connections from clients to manage the mount point centrally. It's likely possible in some way, but it's counterintuitive to the user: they expect each execution to be independent of the others.
I've experimented with such a system in a related area (where I even had a system-wide component), and managing the state of an AppImage across more than one execution, with users potentially modifying the file in the process (deleting is much less of an issue, and so is moving), requires a lot of work and care.
I've thought about having AppRun log the mountpoint and the AppImage checksum somewhere in /tmp and reusing them when possible, but... it's a hack. Having a fixed mountpoint (based on some hash) would also work, but IIRC there was another issue (due to non-relocatable apps) and you "didn't like it".
Generating a "predictable" mountpoint isn't that easy, yeah. As long as the AppImage isn't moved out of its place or renamed, though, it's reasonably simple: you can just hash the path. If it is moved, worst case, you have more than one mount process running (i.e., no worse than what we have now, resource-wise).
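A minimal sketch of that approach; the /run/user location and the squashfuse call are assumptions for illustration, not what the runtime actually does:

```sh
#!/bin/sh
# Derive a deterministic mountpoint from the AppImage's absolute path:
# as long as the file isn't moved or renamed, every invocation agrees on it.
SELF="$(readlink -f "$APPIMAGE")"
HASH="$(printf '%s' "$SELF" | sha256sum | cut -d' ' -f1)"
MNT="/run/user/$(id -u)/appimage-$HASH"

mkdir -p "$MNT"
if ! mountpoint -q "$MNT"; then
    squashfuse -o offset="$OFFSET" "$SELF" "$MNT"
fi
```

Two instances racing through the mountpoint check could still stack a second mount on top of the first; serializing the check with a file lock (as suggested further down) would close that gap.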
Current use case: I'm building a Calibre multi-binary AppImage (i.e., AppRun figures out which binary to call using $ARGV0), so multiple Calibre tools (calibre, ebook-viewer, etc.) can run at the same time.
Your AppImage doesn't have to call itself. Why can't you just call the binaries directly in your mounted AppDir/usr/bin? This will only be problematic if the "main app" (i.e., Calibre) is closed, as that would cause all the other tools to be unmounted.
You could even solve the issue at the AppRun level. Implementing a "keep-alive entry point" that monitors subprocesses and only exits once no child processes are left shouldn't be much harder than implementing some "FUSE mount server".
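A sketch of such an entry point, assuming a multi-binary layout like the Calibre one described above (binary names and paths are illustrative):

```sh
#!/bin/sh
# Keep-alive AppRun: dispatch on $ARGV0, then stay alive (and thereby keep
# the FUSE mount alive) until every child we spawned has exited.
HERE="$(dirname "$(readlink -f "$0")")"
TOOL="$(basename "${ARGV0:-calibre}")"

"$HERE/usr/bin/$TOOL" "$@" &

# 'wait' with no arguments blocks until all background children are gone,
# so closing the "main app" no longer pulls the mount out from under the
# other tools started through this same process.
wait
```

One caveat: `wait` only covers direct children; a grandchild that gets re-parented after its parent exits would need PID-namespace tricks or polling to track, which is where the "lot of work and care" mentioned above starts.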
I wonder if simply adding
/opt/someapp.squashfs /opt/someapp squashfs loop,ro 0 0
to fstab, as I used to do a long time ago, or reviving autofs, isn't actually easier.
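For reference, the autofs variant of the same idea would be something along these lines (paths, map file name, and timeout are illustrative):

```
# /etc/auto.master: delegate /opt/apps to an autofs map,
# unmounting anything idle for 60 seconds
/opt/apps /etc/auto.appimages --timeout=60

# /etc/auto.appimages: loop-mount the squashfs on first access
someapp -fstype=squashfs,loop,ro :/opt/someapp.squashfs
```

That gets you sharing and automatic cleanup for free, but only for squashfs files at fixed, root-configured paths, which is rather at odds with AppImage's download-anywhere-and-run model.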
I see this has been discussed in #419 (though you wouldn't necessarily be able to tell from the title).
What you suggest would require introducing a sort of "server process" that accepts connections from clients to manage the mount point centrally.
Couldn't something like a file lock be used to coordinate between processes? E.g., each process updates a counter under the lock when it starts and stops, and if it sees that it's the last process while holding the lock, it also cleans up the mount.
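Roughly, as a sketch (the lock and counter paths are made up; $HASH, $MNT, and $TOOL are the shared values from the sketches above):

```sh
#!/bin/sh
# Reference-count the mount under an exclusive file lock (flock(1) from
# util-linux). The last process out unmounts.
LOCK="/tmp/appimage-$HASH.lock"
COUNT="/tmp/appimage-$HASH.count"

exec 9>"$LOCK"               # keep fd 9 open for the process's lifetime

register() {
    flock 9
    n="$(cat "$COUNT" 2>/dev/null || echo 0)"
    echo $((n + 1)) > "$COUNT"
    flock -u 9
}

unregister() {
    flock 9
    n="$(cat "$COUNT")"
    echo $((n - 1)) > "$COUNT"
    if [ "$((n - 1))" -le 0 ]; then
        fusermount -u "$MNT"  # we were the last user: clean up
        rm -f "$COUNT"
    fi
    flock -u 9
}

register
trap unregister EXIT
"$MNT/usr/bin/$TOOL" "$@"
```

The obvious fragility: a client that is SIGKILLed never runs its exit trap, so the count leaks and the mount lingers; that's exactly the kind of state-across-executions care mentioned earlier in the thread.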
Generating a "predictable" mountpoint isn't that easy, yeah. As long as the AppImage isn't moved out of its place or renamed, though, it's reasonably simple: you can just hash the path. If it is moved, worst case, you have more than one mount process running (i.e., no worse than what we have now, resource-wise).
Wouldn't it be more appropriate to bake a unique identifier (e.g., a hash of the embedded filesystem) into the AppImage at build time? The spec already supports other metadata like .upd_info and .sha256_sig.
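A sketch of what that could look like with standard binutils; the .mount_id section name is hypothetical (the spec defines sections like .upd_info and .sha256_sig, but none for mountpoint naming):

```sh
# Build time, before the squashfs is appended to the runtime:
sha256sum AppDir.squashfs | cut -d' ' -f1 > mount_id
objcopy --add-section .mount_id=mount_id runtime

# Run time: dump the section back out and derive a stable mountpoint from it.
objcopy -O binary --only-section=.mount_id MyApp.AppImage /tmp/id
MNT="/run/user/$(id -u)/appimage-$(cat /tmp/id)"
```

Unlike hashing the path, such an identifier would survive moves and renames; unlike checksumming the whole file at run time, it costs almost nothing to read back.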