'goodbye' to complement 'startup' sequence
While refactoring and playing with https://github.com/numtide/devshell/pull/28 according to the new module structure, I was feeling a slight need for tearing down the custom certificate upon leaving the devshell (in order to safely "clean up after myself").
Hence I thought maybe a `goodbye` sequence similar to https://github.com/numtide/devshell/pull/62 could be helpful.
On the other hand, this might be a dangerous feature (in the sense of a powerful door that can never be shut again).
Will check if I can make it work with an exit trap, then close, eventually.
Looks like a simple trap doesn't work:
```nix
{
  # Execute this script to install the project's development certificate authority
  install-mkcert-ca = pkgs.writeShellScriptBin "install-mkcert-ca" ''
    set -euo pipefail
    shopt -s nullglob

    log() {
      IFS=$'\n' loglines=($*)
      for line in $loglines; do echo -e "[mkcert] $line" >&2; done
    }

    # Set the CA root files directory for mkcert via env variable
    export CAROOT=${rootCADir}

    # Install local CA into system, java and nss (includes Firefox) trust stores
    log "To install the development CA into the system stores, you'll be asked root password:"
    log $(sudo -K; ${pkgs.mkcert}/bin/mkcert -install 2>&1)
    log "root CA directory: $(${pkgs.mkcert}/bin/mkcert -CAROOT 2>&1)"

    uninstall() {
      log $(${pkgs.mkcert}/bin/mkcert -uninstall 2>&1)
    }

    # Uninstall when leaving the devshell
    trap uninstall EXIT
  '';
}
```
```
➜ devshell git:(da-add-hostname-management) ✗ nix develop
warning: Git tree '/home/blaggacao/ghq/github.com/numtide/devshell' is dirty
[mkcert] To install the development CA into the system stores, you'll be asked root password:
[mkcert] The local CA is already installed in the system trust store! 👍
[mkcert] root CA directory: /nix/store/yd0kgpx9s2s1isprcm0xsvg2ym73j8g0-rootCA
Sudo password:
[mkcert] The local CA is now uninstalled from the system trust store(s)! 👋
🔨 Welcome to devshell
```
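Looking at the output, the trap fires immediately because it is registered inside the `install-mkcert-ca` script, which exits as soon as the CA is installed, so the EXIT trap runs at script exit rather than when the devshell is left. For the teardown to run on leaving, the trap would have to live in the interactive shell itself. A rough sketch only (not actual devshell wiring; `uninstall-mkcert-ca` is a hypothetical helper wrapping `mkcert -uninstall`):

```sh
# Sketch: register the teardown in the interactive devshell shell itself instead of
# inside the install script, so the EXIT trap only fires when the user leaves the shell.
# `uninstall-mkcert-ca` is a hypothetical helper command.
if [[ $- == *i* ]]; then
  trap 'uninstall-mkcert-ca' EXIT
fi
```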
The main problem is that a user might open multiple shells on the same project.
This is an idea that I keep running into. A number of users have proposed something similar for direnv as well.
If there is a teardown script, it presupposes that some state has been changed on the machine that requires cleanup. The other use-case is that processes have automatically been started and need to be shut down on exit. Those are the use-cases that I have encountered.
When multiple shells are involved, it's not clear who is responsible for teardown. Shells can be exited and leave the machine in an unknown state. Or two projects or branches could change the same things on the system. There is also a non-negligible risk of leaving the system in a broken state.
For processes, I think the best approach is to start a process manager in one shell and then open one or more shells for development. That way the responsibility is clear.
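To illustrate that split (just a sketch; the Procfile and `honcho` as the process manager are assumptions here, devshell doesn't prescribe either):

```sh
# Terminal 1: the shell that owns the long-running processes
nix develop
honcho start          # runs whatever the (assumed) Procfile defines, e.g. "web: ...", "db: ..."

# Terminal 2..n: plain development shells, no services attached
nix develop
```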
For system changes, I think the best is to not do them at all. In the context of a company it's probably OK, because you can assume that people check out a single monorepo (though it makes contractors' lives harder). In the context of open source projects, it's a really bad idea.
As I commented in https://github.com/numtide/devshell/issues/170#issuecomment-1613900188, I've created a background process to track my multiple startups.
My `.startup` starts a background process that symlinks `/proc/$PPID/comm` into the `.data/procs/$PPID` directory, and the background process removes entries whose links are broken. The background process then cleans up and kills itself when `.data/procs/` is empty. It could also be `ps -q $(cat .data/pids)` or whatever the system has...
Maybe we could implement that with ServiceGroup (#253).
The only missing piece of this 'garbage collector' is direnv telling us that it unloaded the directory, so we could decrement the reference count instead of waiting for the terminal to be closed (the pid being invalidated): https://github.com/direnv/direnv/issues/129
And the buggy part is getting the correct $PID/$PPID.
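If direnv ever exposed such an unload notification (it doesn't today; that's what the linked issue asks for), dereferencing could be as simple as removing the caller's link. A purely hypothetical sketch:

```sh
# Hypothetical: a hook direnv would call when it unloads the directory
# (no such hook exists today, see https://github.com/direnv/direnv/issues/129).
on_direnv_unload() {
  # drop this session's reference; the watchdog stops the services once procs/ is empty
  rm -f "$PRJ_DATA_DIR/procs/$SESSION_PID"
}
```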
The best would be to start with the workflows, and then see how to change devshell/direnv so that all of the use cases are fulfilled. Some that I have in mind:
- The user wants to quickly enter the project to run a command (eg: with `direnv exec`). Would the auto-starting of services slow this down?
- The user kills the shell with `exit`/Ctrl-D. In that case direnv is never invoked.
- The user runs `direnv reload`. Does this shutdown/restart the services?
- The user switches git branches in the same repository, which requires a different set of services to run.
- Would the auto-starting of services slow this down?
No, I run it in the background (thank you for teaching me how to do it in a direnv issue):
```sh
# Start services in background
background() {
  exec 0>&-
  exec 1>&-
  exec 2>&-
  exec 3>&-
  initSvcsd &
  disown $!
}
background
```
- The user kills the shell with `exit`/Ctrl-D.
The background service tracks the caller pid; in my case the pid is a link, created before the background service starts. We have two cases here:
- You stop direnv/the shell after the link is created and before the background service starts: the next time the service starts it will remove the invalid pid.
- You stop direnv/the shell after the background service starts: it will kill itself once the pid of the shell doesn't exist anymore.
```sh
# Stop services when all registered procs died
while true
do
  sleep 1
  # delete dead procs link
  find $PRJ_DATA_DIR/procs/ -xtype l -delete
  # stop services if folder is empty
  find $PRJ_DATA_DIR/procs/ -type d -empty -exec stopSvcs \;
done
```
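The mechanism this relies on: a symlink into `/proc/<pid>` dangles as soon as the process disappears, which is exactly what `find -xtype l` picks up. A quick standalone demo (the paths are made up for the demo):

```sh
# Demo: a symlink into /proc dangles once the process dies, which is what -xtype l matches
sleep 300 & pid=$!
ln -s /proc/$pid/comm /tmp/procs-demo-$pid
kill $pid; wait $pid 2>/dev/null
find /tmp -maxdepth 1 -name 'procs-demo-*' -xtype l -print   # the dangling link shows up
rm -f /tmp/procs-demo-$pid
```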
Again, the buggy part is getting the correct pid.
```sh
# Register pids on startup
local SESSION_PID=$$
local PARENT_PID=$PPID
while grep -q direnv /proc/$PARENT_PID/comm
do
  SESSION_PID=$(ps -o ppid= $PARENT_PID | tr -d '[:space:]')
  PARENT_PID=$SESSION_PID
done
ln -s /proc/$SESSION_PID/comm $PRJ_SVCS_DIR/stopSvcsd/procs/$SESSION_PID &>/dev/null
```
- The user runs `direnv reload`. Does this shutdown/restart the services?
Not today, but if I had a notification from direnv it would restart, unless the reload is fast enough to remove the pid and add it again before the check.
To be honest, I'm not sure if that is a good or a bad thing, given point 4. Maybe an interval before stopping should be configurable.
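Something along these lines could make that interval configurable in the watchdog loop above (`STOP_GRACE_SECONDS` and the empty-check are just a sketch, not what I actually run):

```sh
# Only stop the services after procs/ has stayed empty for a configurable grace period.
# STOP_GRACE_SECONDS is a made-up knob for this sketch.
STOP_GRACE_SECONDS=${STOP_GRACE_SECONDS:-5}
empty_for=0
while true
do
  sleep 1
  # delete dead procs links
  find $PRJ_DATA_DIR/procs/ -xtype l -delete
  if [ -z "$(ls -A "$PRJ_DATA_DIR/procs/")" ]; then
    empty_for=$((empty_for + 1))
    # a fast `direnv reload` re-registers its pid before the grace period runs out
    if [ "$empty_for" -ge "$STOP_GRACE_SECONDS" ]; then
      stopSvcs
      break
    fi
  else
    empty_for=0
  fi
done
```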
- requires a different set of services to run.
See 3.
It depends on how honcho tracks the services.
I.e. I'm not using honcho but s6; it requires a 'rescan' of the services, or a start/stop.
That is why yesterday I created an inotify module that configures a service to run X command when Y files change.
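Roughly the shape of it, as a standalone sketch (`inotifywait` from inotify-tools and `s6-svscanctl -a` as the rescan command are assumptions here; the actual module may wire it differently):

```sh
# Watch the service definitions and ask the running s6-svscan to rescan when they change
watch_dir=$PRJ_DATA_DIR/services
scan_dir=$PRJ_DATA_DIR/s6-scan

while inotifywait -r -e modify,create,delete,move "$watch_dir"; do
  # -a asks the running s6-svscan to immediately rescan its service directory
  s6-svscanctl -a "$scan_dir"
done
```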