SSH remote builds assume an sh-based shell is the default
Remote builds currently execute as if connecting to a session over SSH will always give a roughly sh-style shell. This is not an assumption that should be made; it should likely be solved by allowing the path to such a shell to be specified, while assuming one exists at /bin/sh by default (this latter point is up for debate).
The current behavior most likely to trigger this is that the executor attempts to run if [ -f ~/.profile ]; then . ~/.profile; fi && cd {} && sh {}.sh over the SSH session; a login shell such as fish, for example, does not understand the POSIX if ...; then ...; fi syntax at all. This could also likely be addressed by giving users control over the command used to launch the copied build script, but since the build scripts themselves basically assume an sh-compatible shell exists anyway, this is probably not that useful.
When I wrote this, I needed a way to make the amaranth env variables available to the ssh session. The ~/.profile dance was my attempt at a portable way to get those env vars into the session.
I'm not particularly attached to my solution. There might be a better way that doesn't force a user to e.g. specify a shell or other parameters to the remote build just to get the env vars.
I agree there needs to be a way to do this, though the best way to do it is still unclear. While SSH can forward environment variables directly (SendEnv on the client, AcceptEnv on the server), this needs to be specifically enabled on the server, and I believe it is not by default.
The likely best solution is to allow specifying an environment setup script of some sort directly, which can be transferred and executed to set up the environment and provide a standardized run environment for Amaranth's executor.
That could likely be used to mostly solve this problem by itself, regardless of potential API changes. The SSH executor could generate a script on the fly to do this, with the appropriate shebang line and then directly execute that file. I might draft a PR to do so if this is unclear.
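To make this concrete, here is a minimal sketch of the idea, not the actual Amaranth implementation; make_runner_script, env_script, and build_script are hypothetical names:

# Minimal sketch: build the text of a runner script that would be generated
# on the fly and executed directly on the remote host. Names are hypothetical.
def make_runner_script(root, build_script, env_script=None):
    # The shebang pins the interpreter, so the user's login shell never has
    # to parse any of these commands.
    lines = ["#!/bin/sh", "set -e"]
    if env_script is not None:
        # Optionally source a user-specified environment setup script.
        lines.append(f". {env_script}")
    lines.append(f"cd {root}")
    lines.append(f"exec sh {build_script}.sh")
    return "\n".join(lines) + "\n"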
The SSH executor could generate a script on the fly to do this, with the appropriate shebang line and then directly execute that file.
The .profile dance allows us to defer setting the env variables until we start running code on the remote host. Maybe a remote-build_*.{sh, fish, zsh} script could be unconditionally generated as part of the input artifacts if the remote-build feature is enabled? This would be analogous to how build_*.{sh, bat} are both generated but only one is (typically) used per host.
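As a rough sketch of that proposal (the dialect table, variable values, and file names below are illustrative assumptions; only BuildPlan.add_file is the existing API):

# Hypothetical sketch: unconditionally emit one environment-setup script per
# shell dialect into the build plan, mirroring how build_*.sh and build_*.bat
# are both generated today.
def add_env_scripts(plan, env_vars):
    # plan is an amaranth.build.run.BuildPlan; env_vars maps names to values,
    # e.g. {"YOSYS": "/opt/yosys/bin/yosys"} (placeholder value).
    dialects = {
        "sh":   "export {k}={v}",   # POSIX sh syntax, also valid in zsh
        "zsh":  "export {k}={v}",
        "fish": "set -x {k} {v}",   # fish uses its own syntax
    }
    for ext, fmt in dialects.items():
        body = "\n".join(fmt.format(k=k, v=v) for k, v in env_vars.items())
        plan.add_file(f"remote-build_top.{ext}", body + "\n")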
if the remote-build feature is enabled
You can't detect that, as far as I know.
The SSH executor could generate a script on the fly to do this, with the appropriate shebang line and then directly execute that file.

The .profile dance allows us to defer setting the env variables until we start running code on the remote host. Maybe a remote-build_*.{sh, fish, zsh} script could be unconditionally generated as part of the input artifacts if the remote-build feature is enabled? This would be analogous to how build_*.{sh, bat} are both generated but only one is (typically) used per host.
I might be missing something, but I do not see how generating a setup script on the fly would not still provide this deferral. It would be generated when executing a remote build (or even just when transferring the files), and it could include a line to source a user-specified file.
The difference is that it would have its own shebang line at the top to invoke the correct interpreter automatically, rather than just assuming that SSH sessions are based around a specific shell. Executing a single command directly via SSH is possible, so we would just use that instead.
This change should be implementable with no meaningful change in behavior over the current implementation. The script could even be uploaded only as part of executing a remote build (although it likely brings little benefit). What we should discuss here is whether this approach is viable and whether we also want to provide a new/changed API surface to better serve this functionality.
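A minimal sketch of that flow, assuming a connected paramiko SSHClient (which the current executor already uses); the runner contents, file name, and helper name are illustrative assumptions:

import stat

def run_remote_build(client, root, script):
    # Generate the runner on the fly; the shebang pins the interpreter so the
    # user's login shell never parses the build commands.
    runner = f"#!/bin/sh\nset -e\ncd {root}\nexec sh {script}.sh\n"
    runner_path = f"{root}/{script}.run"

    # Upload the generated runner and mark it executable.
    sftp = client.open_sftp()
    with sftp.open(runner_path, "w") as f:
        f.write(runner)
    sftp.chmod(runner_path, stat.S_IRWXU)

    # A bare path is a plain command in any login shell, so executing it
    # directly sidesteps shell-syntax differences entirely.
    channel = client.get_transport().open_session()
    channel.set_combine_stderr(True)
    channel.exec_command(runner_path)
    return channel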
I might be missing something, but I do not see how generating a setup script on the fly would not still provide this deferral.
I misunderstood your previous comments in multiple ways (the "how I misunderstood" isn't that important). After your newest comment, I realized your solution works fine and is a better version of what I proposed.
The script could even be uploaded only as part of executing a remote build (although it likely brings little benefit).
I was under the impression that builds needed to be self-contained. I still think the setup script should be part of the input artifacts that are sent, even when the build is not being executed.
I'm not sure why we need any of the above solutions. I propose the following, which ensures that the build script is always run with /bin/sh:
diff --git a/amaranth/build/run.py b/amaranth/build/run.py
index 1e42417..72eb635 100644
--- a/amaranth/build/run.py
+++ b/amaranth/build/run.py
@@ -169,8 +169,9 @@ class BuildPlan:
             channel = transport.open_session()
             channel.set_combine_stderr(True)
 
-            cmd = "if [ -f ~/.profile ]; then . ~/.profile; fi && cd {} && sh {}.sh".format(root, self.script)
-            channel.exec_command(cmd)
+            cmd = (f"if [ -f ~/.profile ]; then . ~/.profile; fi && "
+                   f"cd {root} && exec $0 {self.script}.sh")
+            channel.exec_command(f"sh -c '{cmd}'")
 
             # Show the output from the server while products are built.
             buf = channel.recv(1024)
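To spell out what this sends, assuming for illustration that root is /tmp/build and self.script is build_top, the executor would send:

sh -c 'if [ -f ~/.profile ]; then . ~/.profile; fi && cd /tmp/build && exec $0 build_top.sh'

Inside sh -c, $0 defaults to the name of the shell itself, so exec $0 build_top.sh re-invokes the same sh on the build script. The user's login shell only needs to pass the single-quoted string through to sh, which shells like fish, zsh, and csh all do.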