
[FEAT] Native LXC compatibility

Open wouldntyouknow opened this issue 1 month ago • 5 comments

Is this a new feature request?

  • [x] I have searched the existing issues

Wanted change

Possibility to run the image in an LXC directly, instead of Docker.

Reason for change

Proxmox VE 9.1 now supports OCI registry images as templates for LXC containers. This is a great feature that could quickly gain popularity. With Webtop being one of my favorites, I just had to test it out. And it works great, out of the box, by creating a new CT from the pulled template! I just had to create /dev/shm so Chromium could work. I have passed through the GPU, which is visible; the output below is what you would expect for a GPU-enabled LXC container:

abc@Test:/$ ls -la /dev/dri
total 0
drwxr-xr-x 2 root root         80 Nov 19 20:36 .
drwxr-xr-x 7 root root        440 Nov 19 20:49 ..
crw-rw---- 1 root video  226,   0 Nov 19 20:36 card0
crw-rw---- 1 root render 226, 128 Nov 19 20:36 renderD128
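
For anyone wanting to reproduce this, the CT config additions were roughly along these lines (a sketch, not verbatim; the gid values need to match your host's video/render group IDs):

# /etc/pve/lxc/<CTID>.conf
# tmpfs for /dev/shm so Chromium can start
lxc.mount.entry: tmpfs dev/shm tmpfs rw,nosuid,nodev,create=dir 0 0
# GPU device passthrough (PVE dev[n] entries); adjust gid to your host's groups
dev0: /dev/dri/card0,gid=44
dev1: /dev/dri/renderD128,gid=104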

However, it does not seem to be utilized, vainfo output:

abc@Test:/$ vainfo
Trying display: wayland
Trying display: x11
error: can't connect to X server!
Trying display: drm
error: failed to initialize display
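
For completeness, these are the checks I am running from inside the CT (forcing the drm backend against the render node shown above; vainfo's --device flag may vary by libva-utils version):

abc@Test:/$ id
abc@Test:/$ vainfo --display drm --device /dev/dri/renderD128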

Any recommendations on what needs to be done? Intuitively it seems like the solution should be rather easy, but of course I could be wrong. Also, in such a case, where would the env vars be passed inside the container, and where are the logs? I wonder if manually editing the files where the Docker env vars end up would actually solve this.

Proposed code change

Don't think any code change is necessary, rather a question of documentation.

wouldntyouknow avatar Nov 19 '25 21:11 wouldntyouknow

Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.

github-actions[bot] avatar Nov 19 '25 21:11 github-actions[bot]

Any recommendations on what needs to be done? Intuitively it seems like the solution should be rather easy, but of course I could be wrong. Also, in such a case, where would the env vars be passed inside the container, and where are the logs? I wonder if manually editing the files where the Docker env vars end up would actually solve this.

These are all questions for proxmox or whatever upstream project provides that functionality (incus?).

aptalca avatar Nov 19 '25 23:11 aptalca

I feel like Proxmox needs to admit at some point that host-level Docker containers are needed and to add them as an option in addition to LXC. In the age of AI it is a one-week sprint for a single competent dev. The API is well documented.

In the end I feel like they picked the wrong horse; people making containers do not test in LXC environments, so no matter what they do it will always be half baked.

thelamer avatar Nov 19 '25 23:11 thelamer

Any recommendations on what needs to be done? Intuitively it seems like the solution should be rather easy, but of course I could be wrong. Also, in such a case, where would the env vars be passed inside the container, and where are the logs? I wonder if manually editing the files where the Docker env vars end up would actually solve this.

These are all questions for proxmox or whatever upstream project provides that functionality (incus?).

So, I realized I was operating under the wrong assumptions and now it is clear (I was thinking I needed to manually edit files within the container, hence the original question) - thank you. It turns out the LXC conf file holds the env vars, and some are even auto-filled from the OCI image information during CT creation. Example:

lxc.environment.runtime: DISABLE_ZINK=false
lxc.environment.runtime: DISABLE_DRI3=false
lxc.environment.runtime: TITLE=Webtop
lxc.environment.runtime: PASSWORD=whatever
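
A quick way to confirm what PID 1 inside the CT actually received (the CT ID is a placeholder here):

# on the Proxmox host, <CTID> being the container ID
pct exec <CTID> -- sh -c 'tr "\0" "\n" < /proc/1/environ'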

GPU access was also sorted by adding the necessary users to the video/render groups - VAAPI is now being utilized when selected. I was surprised how much better Webtop ran like this, compared to having Docker as a middle-man (though part of the reason might of course have been my configuration).
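
In practice that boiled down to something like this (run inside the CT; abc is the default Webtop user seen in the output above, and the group names are the ones owning the /dev/dri nodes):

# add the desktop user to the GPU-related groups, then restart the session
usermod -aG video,render abc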

I feel like Proxmox needs to admit at some point that host-level Docker containers are needed and to add them as an option in addition to LXC.

Yeah, this will probably never happen with Proxmox, not on the hypervisor level - officially discouraged...

All in all, I am aware nobody wants to deal with this (it being out of scope) and I respect that. Still, I am pretty sure some people will flock here with the same questions. Feel free to rename this thread and leave it open (if deemed useful), or just close it (if not).

wouldntyouknow avatar Nov 20 '25 05:11 wouldntyouknow

I feel like Proxmox needs to admit at some point that host-level Docker containers are needed and to add them as an option in addition to LXC.

Yeah, this will probably never happen with Proxmox, not on the hypervisor level - officially discouraged...

Sorry for not being specific; the Proxmox devs will probably laugh at how far off this is, but I just fed portions of their codebase to Gemini:

This document outlines the technical architecture for integrating Docker as a native runtime within Proxmox VE. This approach bypasses the overhead of "Docker-inside-LXC" to deliver bare-metal performance, direct Layer 2 networking, and full integration with the Proxmox Firewall and ZFS storage stack.


1. Architecture: The Modular Driver Model

We are decoupling the container management logic (pve-container) from specific runtime implementations. The existing monolithic LXC logic moves to a driver backend, allowing PVE to switch execution engines based on configuration.

  • Implementation: A new PVE::Container::Driver interface standardizes lifecycle commands (start, stop, migrate).
  • Configuration: The ostype parameter determines the runtime backend.

Config Example (/etc/pve/lxc/100.conf):

# Docker Container Config
ostype: docker
image: docker.io/library/nginx:latest
net0: bridge=vmbr0,hwaddr=BC:24:11:...,ip=192.168.1.50/24,gw=192.168.1.1
mp0: local-zfs:vm-100-disk-0,mp=/usr/share/nginx/html

2. Networking: Namespace Injection via nsenter

We reject Docker's native NAT/Bridge networking to ensure full compatibility with the Proxmox Firewall. Instead, we manually plumb the network.

  • Mechanism: PVE creates the interface on the host, attaches it to the firewall-managed bridge, and then "pushes" it into the running container's namespace.
  • Benefit: The container appears as a standard device on the bridge (Layer 2), bypassing Docker's iptables chains entirely.

Technical Implementation (Perl/Syscall Pseudocode):

# 1. Start Docker with no network
# docker run --network none ...

# 2. Create veth pair on HOST
run_command("ip link add veth100i0 type veth peer name veth100p0");

# 3. Attach Host-side to Bridge (Enables PVE Firewall)
PVE::Network::tap_plug("veth100i0", "vmbr0", ...);

# 4. Inject Peer-side into Container Namespace
my $pid = get_docker_pid($vmid);
run_command("ip link set veth100p0 netns $pid");

# 5. Configure Guest-side via nsenter (No tools required inside container)
# We use the host's 'ip' binary to manipulate the guest's namespace
run_command("nsenter -t $pid -n ip link set veth100p0 name eth0");
run_command("nsenter -t $pid -n ip addr add 192.168.1.50/24 dev eth0");
run_command("nsenter -t $pid -n ip link set eth0 up");

3. Startup Reliability: The "Smart Entrypoint" Shim

Because Docker containers start instantly, the application (e.g., Nginx) might crash before PVE finishes plumbing the network interface in Step 2. We solve this race condition with a transparent startup shim.

  • Mechanism: PVE mounts a read-only helper script into the container and overrides the ENTRYPOINT.
  • Logic: The shim blocks execution until the network interface appears, then hands off control to the original application.

The Shim Script (/usr/share/pve-docker/netwait.sh):

#!/bin/sh
# Loop until PVE injects the interface
while [ ! -e "/sys/class/net/eth0" ]; do
    sleep 0.1
done

# Replace self with original application (Preserves PID 1)
exec "$@"

Runtime Injection:

# PVE constructs the final run command automatically:
docker run \
  -v /usr/share/pve-docker/netwait.sh:/pve-netwait:ro \
  --entrypoint /pve-netwait \
  nginx:latest \
  /docker-entrypoint.sh nginx -g 'daemon off;'

4. Storage: Hybrid Persistence

We utilize a hybrid model to combine the speed of Docker overlays with the safety of Enterprise Storage.

  • Immutable Layer (Registry): Docker images are pulled from registries. These are ephemeral and replaceable.
  • Mutable Layer (PVE Storage): Persistent data resides on PVE-managed volumes (ZFS/LVM/Ceph). These are mapped into the container as Bind Mounts.

Mapping Logic:

PVE Concept | Docker Concept | Implementation Detail
Template    | Image          | docker pull <image>
RootFS      | OverlayFS      | Managed by Docker Daemon
Mountpoint  | Bind Mount     | -v /dev/zvol/rpool/data:/app/data

Enterprise Benefit: Because the data resides on a ZFS dataset (not a hidden Docker volume), you can use standard PVE features like Replication (zfs send/recv), Snapshots, and Proxmox Backup Server on the persistent data volumes.
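
Illustrative only (dataset names are placeholders): protecting and replicating the persistent volume then uses plain ZFS tooling rather than anything Docker-specific:

# snapshot the persistent data volume and ship it incrementally to another node
zfs snapshot rpool/data/subvol-100-disk-0@nightly
zfs send -i @previous rpool/data/subvol-100-disk-0@nightly | ssh node2 zfs recv rpool/data/subvol-100-disk-0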

Even if this concept were an optional thing and labeled insecure, it adds a real Docker implementation; do the same with k8s etc. Just expand support for more platforms and bake more and more functionality into your stack. This seems to be a common gripe among self-hosted folks, and most of them just have dedicated VMs to run Docker containers now, which, while functional, is not optimal from a resource perspective.

I made a Docker network plugin a while back revolving around this concept to add VPN support to Docker networks: you can nsenter and use all the host tools to configure the network however you need to, then you split storage as mentioned above; if nodes need remote image layers they pull them on init, and if they need volume data it is the same ZFS replication for those folders.

What would make more sense from our side is just making it more of a Linux package: if you have systemd and the ability to run real Xorg vs Xvfb, plus my hacked-together patches for DRI3/Zink support, you get a huge leg up in performance from a VDI standpoint. But given our team size, and it being basically only me contributing to the core tech stack and downstream containers outside of maintenance tasks, it would be a large support burden to let people break out of the confines of a known Docker container.

thelamer avatar Nov 20 '25 15:11 thelamer