If SR OS node startup config contains a custom admin password, the vrnetlab bootstrap fails as it uses the default `admin:admin` credentials
Hello,
I have a problem with the health status of SR OS deployments in containerlab. The `containerlab deploy` command waits until the container health status turns healthy, but the container hangs indefinitely and never turns healthy. If I use `--skip-post-deploy`, it works, but I cannot use that option because of project requirements.

I discovered that this comes from my configuration file: I set a custom admin password in the startup configuration. If I change this custom admin password back to `admin`, the container health status turns healthy and the `containerlab deploy` command finishes successfully.
I looked at how vrnetlab and containerlab use the username and password for the health check. I can see that the vrnetlab library writes the `/health` file with the `update_health` function, but the `username` and `password` definitions are not used in this VR class. I also understand that containerlab is only responsible for checking the contents of the `/health` file; it uses that file to determine the container health status.
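As far as I can tell, the mechanism is roughly the following (a simplified sketch based on my reading of the code, not the actual vrnetlab implementation):

```python
# Simplified sketch: the vrnetlab VM class periodically writes a status line
# to /health, and containerlab's health check only reads that file.
# An exit status of 0 means the bootstrap finished and the node is healthy.
def update_health(exit_status: int, message: str) -> None:
    """Record the bootstrap status for the container health check."""
    with open("/health", "w") as f:
        f.write(f"{exit_status} {message}")
```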
The `--username` and `--password` arguments are used in `launch.py` in the `vrnetlab/sros/` directory. I tried to pass these arguments through the container entrypoint in the topology file like below, but it does not work:
```yaml
# ...
topology:
  nodes:
    sros1:
      kind: nokia_sros
      mgmt-ipv4: 172.100.101.21
      entrypoint: /launch.py --trace --connection-mode tc --hostname sros1 --username admin --password mY_Custom_Pass --variant sr-1
      startup-config: sros1.cfg
```
I can also see that vrnetlab performs its health checks with the commands below. I tried them inside the containerlab container, but I get no response. At the same time, I can open a terminal with telnet inside the container.
```bash
socat TCP-LISTEN:22,fork TCP:127.0.0.1:22
socat UDP-LISTEN:161,fork UDP:127.0.0.1:161
socat TCP-LISTEN:830,fork TCP:127.0.0.1:830
socat TCP-LISTEN:80,fork TCP:127.0.0.1:80
socat TCP-LISTEN:443,fork TCP:127.0.0.1:443
```
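For reference, this is the kind of quick check I run from inside the container to see whether the TCP ports answer (a small helper of my own, not part of vrnetlab; UDP/161 is skipped because a plain connect test does not apply to it):

```python
# Probe the forwarded TCP ports from inside the container.
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (22, 830, 80, 443):
    print(port, "open" if port_open("127.0.0.1", port) else "no response")
```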
I just want to know how I can fix the containerlab SR OS container's health status. I am using Nokia SR OS 24.03 on Ubuntu 22.04 with containerlab 0.50.0, and I also tried version 0.54.2. In both cases, the vr-sros health status does not turn healthy with a custom password.
Hi @bayars,
you can run `docker logs -f <container-name>` and see where the provisioning script hangs when trying to apply the default configuration.
Hi @hellt,
`docker logs` hangs here:
```
2024-04-24 12:31:23,105: launch TRACE OUTPUT: Initial DNS resolving preference is ipv6-first
SMP: 2 cores available
2024-04-24 12:31:35,120: launch TRACE OUTPUT:
Loading primary configuration file "tftp://172.31.255.29/config.txt"
Loaded 187 lines in 0.2 seconds from file "tftp://172.31.255.29/config.txt"
Committing configuration
2024-04-24 12:31:36,122: launch DEBUG matched login prompt
2024-04-24 12:31:36,122: vrnetlab DEBUG writing to serial console: 'admin'
2024-04-24 12:31:36,122: launch TRACE waiting for 'Password:' on serial console
2024-04-24 12:31:36,126: launch TRACE read from serial console: ' admin
Password:'
2024-04-24 12:31:36,126: vrnetlab DEBUG writing to serial console: 'admin'
2024-04-24 12:31:36,126: launch TRACE waiting for '# ' on serial console
```
If I don't provide any configuration, it uses the default password for admin; I guess this password is set when the BOF is committed. In that case the health status turns healthy and `containerlab deploy` ends successfully.
I also tried these things:

- Use a config without defining the admin password. The health status turns healthy, but `containerlab deploy` hangs and the configuration is not loaded into the SR OS node.
- Give a valid plaintext password in the config. The container health status stays unhealthy and `containerlab deploy` hangs. The configuration is loaded and the SR OS node is accessible. The SR OS logs are the same as the logs above.
Yes, currently a custom password is not handled by vrnetlab/containerlab; the bootstrap relies on the default password (`admin`).
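Roughly what happens during bootstrap, as your log also shows (an illustrative sketch of the serial-console exchange, not the actual vrnetlab code; pexpect-style calls are used here only for illustration): the bootstrap logs in with the default `admin`/`admin` credentials, and if the startup config has already changed the password, the expected `# ` prompt never appears, so `/health` is never marked healthy.

```python
# Illustrative sketch of the bootstrap login exchange (not the actual
# vrnetlab code; pexpect is used here only for illustration).
import pexpect

def bootstrap_login(console: pexpect.spawn,
                    username: str = "admin",
                    password: str = "admin") -> None:
    """Log in on the serial console with the default credentials."""
    console.expect("Login:")
    console.sendline(username)
    console.expect("Password:")
    console.sendline(password)   # wrong once the startup config sets a custom password
    console.expect("# ")         # never matches -> bootstrap never finishes,
                                 # so /health never reports a healthy status
```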
I will rename this issue appropriately to track it.