Neovim TUI will be broken when used inside DevPod container
What happened?
When using Neovim as described in the Quickstart Neovim page, Neovim behaves strangely, with broken scrolling in split buffers. I have created an issue with the neovim team: Issue. The problem seems to be exclusive to DevPod, due to the way it uses `ProxyCommand` when ssh-ing into the DevPod container.
Things that I and the people in the neovim issue have tried:
- Set the `TERM` variable inside the container to `xterm`, `alacritty`, or `xterm-256color` and install the appropriate terminfo -> this does not solve the issue.
- Try using `docker container exec -it containername bash` directly -> this worked.
- Try running `tmux` inside the container before using neovim -> weirdly, this worked?!
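For reference, a sketch of those checks as commands (the container name is a placeholder, and `tput`/`infocmp` availability depends on the base image):

```shell
# 1) Check whether a terminfo entry is actually installed for a given TERM
#    (prints 256 when the xterm-256color entry resolves correctly)
TERM=xterm-256color tput colors
infocmp xterm-256color > /dev/null && echo "terminfo entry present"

# 2) Bypass devpod's ssh path entirely (this worked per the thread):
#    docker container exec -it containername bash

# 3) Or start tmux inside the container before launching neovim:
#    tmux new-session nvim
```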
What did you expect to happen instead?
Neovim should behave normally inside `devpod ssh`, just like when used directly inside the container's shell.
How can we reproduce the bug? (as minimally and precisely as possible)
- Create a minimal `devcontainer.json` that contains an ubuntu container.
- Install neovim inside the container, either directly or by using devcontainer features.
- Use neovim and open multiple buffers. Scrolling in one buffer will break the other buffers.
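As an illustration, a minimal `devcontainer.json` along those lines could look like this (the image tag is an assumption; the neovim feature coordinates are taken from a later comment in this thread):

```shell
# Write a minimal devcontainer.json for the repro (image tag is an assumption)
cat > devcontainer.json <<'EOF'
{
  "name": "nvim-scroll-repro",
  "image": "ubuntu:22.04",
  "features": {
    "ghcr.io/alanfzf/features/neovim:1.0.1": {}
  }
}
EOF
# Sanity-check that the file is valid JSON
python3 -m json.tool devcontainer.json > /dev/null && echo "valid json"
```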
Local Environment:
- DevPod Version: 0.5.18
- Operating System: Mac and Linux
- ARCH of the OS: AMD64 and ARM64
DevPod Provider:
- Local/remote provider: docker
Anything else we need to know?
Hey @AnhQuanTrl, thanks for reporting the issue! I can reproduce it on my side as well and will take a look at it 👍
Wow, this issue was driving me mad; glad I found your report @AnhQuanTrl. I would like to share some workarounds I found these past few weeks.
As of now we know this bug occurs when we try to use a `$TERM` that has true color support, but there are some exceptions.
- Foot and Wayland

If by any chance you are running Wayland and use the foot terminal, the neovim TUI is not SO broken: you will get some rendering issues here and there, but it's usable. You can enjoy true color support in neovim through ssh, BUT you still need to launch a tmux session with the following config:

```
set-option -ga terminal-overrides ",foot:RGB"
```

[!NOTE] In my experience you don't actually need to run tmux inside the container; you can run it on your host machine and the effect will be the same.
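As a sketch, the host-side setup could look like this (the config path assumes tmux's default; `foot:RGB` is the override quoted above):

```shell
# Append the RGB override for foot to the host's tmux config (idempotent)
line='set-option -ga terminal-overrides ",foot:RGB"'
conf="$HOME/.tmux.conf"
grep -qxF "$line" "$conf" 2>/dev/null || echo "$line" >> "$conf"
# Then start tmux on the host and run `devpod ssh` / neovim inside that session
```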
- Headless neovim instance

Another workaround, which I don't really like, is running a headless neovim instance in the container. You can achieve this by adding a bash script to your container and configuring your `devcontainer.json` like this:
```shell
#!/bin/bash
# This sleep is required if you want nvim to take your dotfiles into
# account, as they get cloned after the postStartCommand
sleep 40
while true; do
  nvim --headless --listen 0.0.0.0:6666
  exit_code=$?
  # If the exit code is non-zero, break the loop
  if [ $exit_code -ne 0 ]; then
    echo "nvim exited with a non-zero status: $exit_code"
    break
  fi
  sleep 1
done
```
```json
{
  "name": "DevPods",
  "dockerComposeFile": ["../../docker-compose.yml"],
  "service": "laravel",
  "postStartCommand": "nohup bash -c \"/opt/scripts/entry-nvim.sh > /opt/scripts/entry-nvim.log 2>&1 &\""
}
```
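As an aside, the fixed `sleep 40` in the script above could be replaced by polling for the dotfiles checkout. This is only a sketch: the `~/dotfiles` path is an assumption about where the clone lands, and the 15-iteration cap is arbitrary.

```shell
# Sketch: poll for the dotfiles checkout instead of a fixed sleep.
# The ~/dotfiles path is an assumption; adjust to wherever they get cloned.
i=0
while [ $i -lt 15 ] && [ ! -d "$HOME/dotfiles" ]; do
  sleep 1
  i=$((i + 1))
done
echo "waited ${i}s"
```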
Finally, you connect with the following command:

```
nvim --server localhost:6666 --remote-ui
```

[!NOTE] For this to work you also need to forward port `6666`.
In conclusion: too complicated and not really worth it.
- Devcontainer CLI

The most "reliable" solution, sigh. We can mimic some of the devpod behavior with the devcontainer CLI, but you still need to manage the installation of your dotfiles, and for ssh you need a user with a password to access the container. But hey! At least no TUI rendering issues.

A simple example of the `devcontainer.json` file:
```json
{
  "name": "DevPods",
  "dockerComposeFile": ["../../docker-compose.yml"],
  "service": "laravel",
  "workspaceFolder": "/var/www/marco-regulatorio",
  "features": {
    "ghcr.io/alanfzf/features/bat:1.0.2": {},
    "ghcr.io/alanfzf/features/neovim:1.0.1": {},
    "ghcr.io/alanfzf/features/zoxide:latest": {},
    "ghcr.io/devcontainers-contrib/features/fd:latest": {},
    "ghcr.io/devcontainers-contrib/features/fzf:latest": {},
    "ghcr.io/devcontainers-contrib/features/ripgrep:latest": {},
    "ghcr.io/devcontainers-contrib/features/starship:latest": {},
    "ghcr.io/georgofenbeck/features/lazygit-linuxbinary:latest": {},
    "ghcr.io/devcontainers/features/github-cli:latest": {},
    "ghcr.io/devcontainers/features/sshd:1": {},
    "ghcr.io/devcontainers/features/common-utils:2": {
      "installZsh": true,
      "configureZshAsDefaultShell": true
    },
    "ghcr.io/social-anthrax/eza-devcontainer/eza:latest": {}
  }
}
```
As you can see, we use the feature `"ghcr.io/devcontainers/features/sshd:1": {}` to mimic the devpod behavior.
Then you need to expose port 2222 and use the following ssh config to connect:
```
Host remote
  AddKeysToAgent yes
  ForwardAgent yes
  HostName localhost
  User YourUserHere
  Port 2222
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  GlobalKnownHostsFile /dev/null
```
As you can see, this also has a lot of overhead, but at least it's a little more workable. If anyone finds another solution, please post it :)!
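One way to sanity-check such a `Host` block without actually connecting is ssh's `-G` flag, which prints the resolved options. A sketch (the config file path here is hypothetical, and only a subset of the options is reproduced):

```shell
# Write the Host block from above to a standalone file and resolve it
cat > /tmp/devcontainer_ssh_config <<'EOF'
Host remote
  HostName localhost
  User YourUserHere
  Port 2222
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
EOF
# -G prints the options ssh would use for this host, without connecting
ssh -G -F /tmp/devcontainer_ssh_config remote | grep -E '^(hostname|user|port) '
```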
On a final note, it would be really cool if this issue could be pinned to alert other neovim users, so they don't end up in the same rabbit hole I fell into hahaha.
Hi @bkneis, I see you mentioned that #1275 fixes this. Is it possible to give an overview of how it has been fixed? I just want to see if I can get it working locally while waiting for the pull request to be merged.
Hi, just kindly wondering if there is any resolution to this? I'm coming up against the same issue and am curious as to how #1275 fixes this, and if there is a method available to reproduce the configuration / setup. It also would be very handy to make this more prominent in user documentation as mentioned by @alanfzf to prevent people falling down this rabbit hole :pray:
@mfw78, I don't think it does. If I read through the changes in that pull request, I see this update to the documentation:

```
### NeoVim
Some issues such as setting $TERM have been noticed on NeoVim, a solution has been documented [here](https://github.com/loft-sh/devpod/issues/1187)
```
IMHO then, it seems that this issue should be re-opened? :pray:
Agreed.. Hi @pascalbreuninger, would you mind taking a look?
Hi, commenting because it seems this issue is still closed at the moment, but this does not appear to be a one-off issue. Is there any traction on this topic yet? Right now our team is running what we can locally and using docker exec directly instead of ssh to work with the containers. But this is obviously a limited workaround and we're attempting to avoid some of the more involved suggestions as they require a decent amount of manual intervention when working with a distributed team.
Also happening for me with alacritty on macOS. My workaround so far is docker exec, but the ergonomics are not great. As mentioned in https://github.com/LazyVim/LazyVim/issues/3896 this might be an issue with faulty termcaps. When using OrbStack the issue does not appear. OrbStack is similar in that it is a CLI that abstracts SSH-ing into a container.
Orbstack uses an ssh config like this:
```
# AUTO-GENERATED BY ORBSTACK. DO NOT EDIT.
# To make changes, add or override hosts at the top of ~/.ssh/config
Host orb
  Hostname 127.0.0.1
  Port 32222
  # SSH user syntax:
  #   <container>@orb to connect to <container> as the default user (matching your macOS user)
  #   <user>@<container>@orb to connect to <container> as <user>
  # Examples:
  #   ubuntu@orb: container "ubuntu", user matching your macOS user
  #   root@fedora@orb: container "fedora", user "root"
  User default
  # replace or symlink ~/.orbstack/ssh/id_ed25519 file to change the key
  IdentityFile ~/.orbstack/ssh/id_ed25519
  # only use this key
  IdentitiesOnly yes
  ProxyCommand '/Applications/OrbStack.app/Contents/Frameworks/OrbStack Helper.app/Contents/MacOS/OrbStack Helper' ssh-proxy-fdpass 501
  ProxyUseFdpass yes
```
While this is what devpod generates afaik:
```
Host bento.devpod
  ForwardAgent yes
  LogLevel error
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  HostKeyAlgorithms rsa-sha2-256,rsa-sha2-512,ssh-rsa
  ProxyCommand "/opt/homebrew/bin/devpod" ssh --stdio --context default --user vscode bento
  User vscode
```
Maybe this can hint someone with a deeper understanding of ssh and termcap to the right direction
Please help re-open this issue, as I think it is still not resolved. While docker exec is a workaround, it kind of defeats the purpose of devpod as an abstraction layer.
If I may, I'd like to ask the people here who are currently experiencing the same limitations how they're overcoming certain things with using the "docker exec" approach as I am while the team is potentially on hiatus for the holiday season etc.
The biggest issue we are experiencing with the workarounds, as my entire team uses neovim via ssh, is that credential forwarding and port forwarding become a bit more of a headache without the devpod helper processes starting up in the background once an ssh connection is established. Has anyone found a reproducible lightweight way of automating the startup of those processes without the devpod ssh entry point beyond trying to run background processes with additional bind mounts manually?
I'd like to try to plan a feasible workaround, as some of our team are already suggesting a development migration to Coder in light of some of the ongoing challenges (I have nothing against that platform; I simply like how lightweight the Devpod approach is by comparison).
I'd also like to point out that the best people for the job are the few already familiar with this area of the codebase, and the rest of us having this issue may not be (I could be projecting here, but my Go skills are embarrassingly out of date, circa 2013-ish). Still, if bandwidth is a problem and some quick-turnaround guidance is available as needed, I'd be willing to open a merge request to contribute.
Hey everyone, reopened the issue.
Last time I checked this, both VI and VIM worked just fine under devpod ssh, which is why I thought it might be a combination of neovim and the TERM env var we had hardcoded prior to #1276.
I've just retested the issue and didn't experience any fragmentation in rendering when using multiple buffers in a split view.
For reference, I tested it on
- alacritty 0.11.0
- built-in terminal on macOS 2.14
with
- devpod v0.6.6
- neovim v0.10.3
Could you guys provide your versions to validate against them, please?
Can confirm broken scrolling in `:vsplit`.
nvim 0.10.2, devpod 0.6.7, built-in terminal on macOS 15.2
Reproducer:
- `devpod ssh ...`
- `nvim -u /dev/null -O <two files long enough to scroll>` (launch two files in vsplit w/o custom config)
- scroll down in the right vsplit beyond the end of the visible area (scrolling in the left split does not trigger the bug for me)
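Spelled out as commands, with the file generation made concrete (file names and lengths are arbitrary):

```shell
# Generate two files long enough to scroll
seq 1 500 > /tmp/left.txt
seq 1 500 > /tmp/right.txt
# Inside `devpod ssh`, open both in a vertical split with no user config:
#   nvim -u /dev/null -O /tmp/left.txt /tmp/right.txt
# then scroll the right split past the visible area.
echo "generated $(wc -l < /tmp/left.txt) lines per file"
```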
An example of broken scrolling in a floating window in lazyvim
nvim 0.10.1 devpod 0.6.8 gnome-terminal
I'm seeing the following when I run :checkhealth in nvim. Don't know if it is at all relevant?
```
Configuration
- ERROR Locale does not support UTF-8. Unicode characters may not display correctly.
  $LANG=en_ZA.UTF-8 $LC_ALL=nil $LC_CTYPE=nil
  - ADVICE:
    - If using tmux, try the -u option.
    - Ensure that your terminal/shell/tmux/etc inherits the environment, or set $LANG explicitly.
    - Configure your system locale.
```
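The locale error itself can usually be cleared independently of this bug. A sketch for a Debian/Ubuntu-based container (`en_ZA.UTF-8` matches the report above; any installed UTF-8 locale works):

```shell
# Export the locale explicitly for the current session
export LANG=en_ZA.UTF-8
export LC_ALL=en_ZA.UTF-8
locale 2>/dev/null | grep -E '^(LANG|LC_ALL)='
# If the locale is not generated inside the container yet:
#   sudo apt-get install -y locales && sudo locale-gen en_ZA.UTF-8
```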
This issue is stale because it has been open for 60 days with no activity.
Any work on this issue? I started using devpod recently and I always get this issue when opening neo-tree, mason or doing Lazy sync
Any updates? I'm getting issues while using avante ai (retrieving an answer), and sometimes with Neo-Tree as well; it visibly shatters the left panel.
Nvim version: 0.10.4
To those of you who have a workaround using docker exec: can you please explain how this solution works? I have tried it with the appropriate user from devpod, but I cannot access any of the installed features, and the config from my dotfiles does not persist.
This issue is stale because it has been open for 60 days with no activity.
Bump
I'm also experiencing these issues, here's my setup:
- Alacritty 0.14.0-dev (c899208e) on host, with the terminfo manually installed.
- Neovim v0.11.2 in the container
- Devpod v0.6.15
As for the rest, using docker exec renders things normally.
Also experiencing this issue. I feel a little better knowing that I am not the only one.
Same as others: skipping ssh and using `docker exec -it server bash`, I get a silky smooth experience similar to running on my host.
My PoA is to see if I can reproduce this on a linux machine without using devpod, to determine whether this is an ssh issue or a devpod proxy command issue. Or has anyone already figured this out?
It's really not just neovim; neovim just makes it more noticeable. Try scrolling inside tmux. Some other TUI programs behave the same way.
There's something in the way the agent works that just breaks the terminal.
Also, bump.
I might have a fix!!!!
Check these settings in Docker Desktop (for me on my mac anyway) and it's reduced down to almost nothing 🚀🚀🚀🚀🚀🚀🚀
@Rich107 I can only find references to Docker Desktop with that option. What does that option do and how do I enable that when I run Docker on the CLI?
@Jacob-Flasheye @Rich107 I really don't think it has anything to do with the underlying virtualization framework.
The behavior is consistent with AWS provisioned machines, local docker, remote docker, podman, etc.
I firmly believe it's the way the devpod agent handles traffic. There's something in there that TUI apps don't like. Case in point, using docker exec, either with a local or remote docker profile to get a shell on the devpod env, doesn't artifact and everything is 100% usable.
docker exec is the only workaround that I found that is 100% reliable and 100% artifact free, otherwise you end up in crazy rabbit holes about terminals, neovim, channel compression etc. and in the end, nothing makes a significant difference.
Maybe this issue is related to SSH's ProxyCommand? It pipes multiple SSH connections and it's the only main difference I can see between the working solutions of just using docker exec or building your own devcontainer and directly SSH'ing into that without the intermediary *.devpod host.
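A hedged way to isolate `ProxyCommand` as the variable: point ssh at the same sshd twice, once over plain TCP and once through a proxy pipe. The host names below are placeholders, and `-G` only prints the resolved config without connecting, so the actual hypothesis check still needs a real container to ssh into.

```shell
# Show how ssh resolves an explicit ProxyCommand (no connection is made with -G)
ssh -G -o ProxyCommand='nc localhost 2222' someuser@dummyhost | grep -i '^proxycommand'
# With a real sshd on port 2222 (e.g. the sshd devcontainer feature), compare:
#   ssh -p 2222 user@localhost                           # direct TCP, no proxy
#   ssh -o ProxyCommand='nc localhost 2222' user@dummy   # same sshd via a proxy pipe
# If neovim only artifacts in the proxied case, ProxyCommand handling is implicated.
```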
Small update! It seems the issue no longer occurs on my end. I'm using NixOS + an alpine image though, so it may be some package that causes this issue on debian-based images.
My Setup is:
- NixOS WSL + home-manager
Let me share the repo and dotfiles that I used. Basically, what you do is:
- clone the repo
- execute `devpod up .`
- ssh into the container
- `git clone https://github.com/alanfzf/.dotfiles`
- execute `nix shell nixpkgs#home-manager`
- cd into `~/.dotfiles/config/nixos/`
- execute `home-manager switch --flake .` (it should pick up the dev user specified in the container)
- log out of the ssh session and log back in
Here are some videos of it running with no problems.
If anyone needs further assistance, let me know.
https://github.com/user-attachments/assets/57190adb-1db0-4b10-9d52-1a5e9af3a908
@alanfzf It would be good to know which version of devpod you're using to compare.