netavark
Consider adding some docs and images about the overall architecture
It would be great to see how this project works and how the overall architecture is designed! Maybe adding an arch.md or some diagrams in the ./docs folder would be nice to have. 🙏
Do you care to kick this off with a PR?
Not making any promises, but I can get to it if you show me what I should do. 🙏
@flouthoc Please Help @Dentrax
@Dentrax If you are interested in trying out netavark with podman, I can help you with that. That should give you enough context for how to play around with netavark. But even if you don't want to try it out yourself, you could still help us with documenting some top-level stuff.
- Netavark is a CLI tool that sets up networking for containers, primarily for podman, but it can do the same for other container managers as well if the right config is specified.
- If you want to start documenting, the netavark config would be a good place to begin, so that other tools can also use it. Most of the standard config is here: https://github.com/containers/netavark/tree/main/src/test/config
- You could also try documenting the entry points for netavark (https://github.com/containers/netavark/tree/main/src/commands) so other tools can use them.
- And reading the tests should give you more ideas: https://github.com/containers/netavark/tree/main/test
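To make the entry points more concrete: netavark's `setup` and `teardown` subcommands take the path to a network namespace as an argument and read a JSON network config on stdin. The sketch below is purely illustrative; `demo` and `net-config.json` are made-up names, and the JSON schema is described by the sample files in src/test/config linked above:

```shell
# Hypothetical sketch of driving netavark by hand (run inside a VM!).
# "demo" and net-config.json are placeholder names for illustration.
sudo ip netns add demo                                    # create a test network namespace
sudo netavark setup /run/netns/demo < net-config.json     # wire up the namespace
sudo netavark teardown /run/netns/demo < net-config.json  # undo the setup
sudo ip netns del demo                                    # remove the namespace
```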
I'd suggest that whenever you run netavark without podman, you do it in a VM.
I'll also tag others who could help @mheon @Luap99 @baude
I would also like to pitch in on the docs. Is there any initial progress, @Dentrax? I can help with some parts if you've already started.
Hey @afro-coder, I haven't been able to find free time to get to this yet, but if you want to start, go ahead! I don't want to block you. 💐
Sure @Dentrax, I'll get started on this and try to do some basic docs.
Not related to its architecture, but I'm not even sure how I can switch my Podman installation to netavark. Is this documented anywhere?
The absolute easiest way, assuming netavark and aardvark-dns are installed, is to run podman system reset. HOWEVER, this will delete all of your images and containers from storage.
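For reference, a minimal sequence might look like the following; note again that the reset wipes local storage:

```shell
# DANGER: deletes all local images, containers, networks, and volumes.
podman system reset

# On Podman 4.0+ with netavark installed, the backend should now default
# to netavark; verify with the Go-template path into `podman info`:
podman info --format '{{.Host.NetworkBackend}}'
```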
So is it part of any Podman installation? :thinking: Doesn't it need any change in /usr/share/containers/containers.conf or so? How can I see that netavark is used?
It's an optional dependency on most distros (we recommend that Podman packages require one of CNI or Netavark, with Netavark recommended). containers.conf can force the network backend to Netavark, but that is not necessary if a podman system reset is done, as Netavark is the default. podman info has a networkBackend field that should show "cni" or "netavark" to identify which is in use.
Be aware that you will get an error if you reset podman while the $HOME/.config/cni folder exists. You need to remove it manually to switch podman to netavark.
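Putting the above together, here is a sketch of switching explicitly. The `[network]` table and its `network_backend` key are the relevant containers.conf knobs in Podman 4.x:

```shell
# Remove stale rootless CNI state first, or the reset will error out:
rm -rf "$HOME/.config/cni"

# Optionally pin the backend in containers.conf
# (system-wide: /usr/share/containers/containers.conf,
#  per-user:    ~/.config/containers/containers.conf):
#
#   [network]
#   network_backend = "netavark"

# Then check which backend Podman picked:
podman info --format '{{.Host.NetworkBackend}}'
```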
Is this something that was introduced with Podman 4?
Because that field seems to be missing on Ubuntu 22.04 LTS (Linux 5.15.0-37).
Result of `podman info`:
$ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpus: 3
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: journald
  hostname: ubuntu
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.15.0-37-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 2543235072
  memTotal: 3127037952
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 1.4.5
      commit: c381048530aa750495cf502ddb7181f2ded5b400
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 39.1s
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/app/.config/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 0
    stopped: 7
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/app/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 7
  runRoot: /run/user/1001/containers
  volumePath: /home/app/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.4
  Built: 0
  BuiltTime: Thu Jan 1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.17.3
  OsArch: linux/amd64
  Version: 3.4.4
And the config file contains these network-related entries:

# Indicates the networking to be used for rootless containers
#
#rootless_networking = "slirp4netns"

# The network table contains settings pertaining to the management of
# CNI plugins.

[secrets]
#driver = "file"

[secrets.opts]
#root = "/example/directory"

[network]

# Path to directory where CNI plugin binaries are located.
#
#cni_plugin_dirs = [
#  "/usr/local/libexec/cni",
#  "/usr/libexec/cni",
#  "/usr/local/lib/cni",
#  "/usr/lib/cni",
#  "/opt/cni/bin",
#]

# The network name of the default CNI network to attach pods to.
#
#default_network = "podman"

# The default subnet for the default CNI network given in default_network.
# If a network with that name does not exist, a new network using that name and
# this subnet will be created.
# Must be a valid IPv4 CIDR prefix.
#
#default_subnet = "10.88.0.0/16"

# Path to the directory where CNI configuration files are located.
#
#network_config_dir = "/etc/cni/net.d/"

# Path to the slirp4netns binary
#
#network_cmd_path = ""

# Default options to pass to the slirp4netns binary.
# For example "allow_host_loopback=true"
#
#network_cmd_options = []
Yes, this is only available in Podman 4.0 and up. Netavark in general is only supported from Podman 4.0 and up, so earlier versions don't need the field; they're always on CNI.
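A quick way to confirm whether an install is new enough (anything below 4.0, like the 3.4.4 output above, will not have the field):

```shell
# Print the client version; netavark support and the networkBackend
# field in `podman info` both require Podman >= 4.0.
podman version --format '{{.Client.Version}}'
```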
Ah okay, this explains everything. I missed this bit. It doesn't seem to be documented in the README.
In that case I'll have to wait until I find a way to install Podman 4.0 on Ubuntu 22.04. Thank you for the clarification!