
Multi-Container Issue Tracker

immauss opened this issue 3 years ago • 11 comments

Please use this Issue for any thoughts, notes, additions or problems with the multi-container build.

The multi-container branch "should" be operational now. If you would like to try it out, here's the path to take.

Clone the git repo.

Copy the multi-container directory to your location of preference (or just 'cd' to it).

Modify the docker-compose.yml to your liking. Notes:

- It defaults to SKIPSYNC=true, so no NVT sync is performed.
- It also starts a "scannable" container. Check the scannable container's logs for its IP and you can use it as a test target (see the sketch after the command below). This container has no ports exposed, but it is on the same Docker network, so you can still scan it. There is a user (scannable) with password: Passw0rd

Then get all the containers running with:

docker-compose up -d 
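If you want to use the scannable container as a test target, here's a quick way to grab its IP once the stack is up (plain Docker commands, nothing specific to these images):

```bash
# The container's start-up logs should print its IP (per the note above)
docker logs scannable

# Or ask Docker directly for its address on the compose network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' scannable
```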

After a short time, check to make sure all of the containers are still running.

docker ps --all

```
CONTAINER ID   IMAGE                  COMMAND                  CREATED      STATUS                PORTS                                       NAMES
9d2af3a2359f   immauss/openvas:mc01   "/scripts/start.sh o…"   4 days ago   Up 4 days (healthy)                                               openvas
134302a914af   immauss/openvas:mc01   "/scripts/start.sh g…"   4 days ago   Up 4 days (healthy)   0.0.0.0:8080->9392/tcp, :::8080->9392/tcp   ovas_gsad
b9f412a472d5   immauss/openvas:mc01   "/scripts/start.sh r…"   4 days ago   Up 4 days (healthy)                                               ovas_redis
a5f17c8f7b3e   immauss/openvas:mc01   "/scripts/start.sh g…"   4 days ago   Up 4 days (healthy)                                               ovas_gvmd
fcf9abd0322f   immauss/openvas:mc01   "/scripts/start.sh p…"   4 days ago   Up 4 days (healthy)                                               ovas_postgresql
8bda354fa528   immauss/scannable      "/bin/bash /entrypoi…"   4 days ago   Up 42 minutes                                                     scannable
```

You should see 5 openvas:mc01 containers and a single scannable. If they are all still running, you're good to go. BTW ... there is a separate health check for each service, so the healthy status "should" be accurate.
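Since each service has its own health check, you can also ask Docker for the health status directly rather than eyeballing the STATUS column. This is plain `docker inspect`, not anything specific to these images:

```bash
# Print the health status reported by each container's own health check
docker inspect --format '{{.Name}}: {{.State.Health.Status}}' \
  openvas ovas_gsad ovas_gvmd ovas_postgresql ovas_redis
```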

immauss · Apr 25 '22 09:04

@immauss, please let me know if the multi-container setup is working?

harshalgithub · May 07 '22 05:05

@immauss I tried to run "docker-compose.yml" from "mc-test" without any configuration changes, but the "ovas_postgresql" container stays in a RESTARTING state (for around 1hr+), while all the other containers are in a RUNNING state.

Logs below for reference:

```
ovas_postgresql | Choosing container start method from:
ovas_postgresql | postgresql
ovas_postgresql | Starting postgresql for gvmd !!
ovas_postgresql | Starting PostgreSQL...
ovas_postgresql | 2022-05-07 11:43:29.100 GMT [14] LOG: skipping missing configuration file "/data/database/postgresql.auto.conf"
ovas_postgresql | pg_ctl: directory "/data/database" is not a database cluster directory
ovas_gvmd | DB not ready yet
ovas_postgresql exited with code 1
ovas_gvmd | DB not ready yet
openvas | Waiting for redis
ovas_gvmd | DB not ready yet
ovas_gvmd | DB not ready yet
openvas | Waiting for redis
```

```
C:\openvas-multi-container\mc-test>docker ps --all | find "immauss"
daae3a0d799a   immauss/openvas:mc-pg13   "/scripts/start.sh o…"   54 minutes ago   Up 54 minutes (unhealthy)                                openvas
4c0465d1c547   immauss/openvas:mc-pg13   "/scripts/start.sh r…"   54 minutes ago   Up 54 minutes (unhealthy)                                ovas_redis
bb44d0769de2   immauss/openvas:mc-pg13   "/scripts/start.sh g…"   54 minutes ago   Up 54 minutes (healthy)         0.0.0.0:8080->9392/tcp   ovas_gsad
0b74a0af06f5   immauss/openvas:mc-pg13   "/scripts/start.sh g…"   54 minutes ago   Up 54 minutes (unhealthy)                                ovas_gvmd
1db397bbc391   immauss/scannable         "/bin/bash /entrypoi…"   54 minutes ago   Up 54 minutes                                            scannable
6fd98b0113bd   immauss/openvas:mc-pg13   "/scripts/start.sh p…"   54 minutes ago   Restarting (1) 58 seconds ago                            ovas_postgresql
```

harshalgithub · May 07 '22 12:05

OK ... my bad ...

I've updated the process in the original post for this issue. The problem was that I started working on a migration path to Postgres 13. I checked with Greenbone, and I'm expecting the next iteration to use 13, so I started working on what I hope will be a smoother migration for users. And of course, I used the mc-test directory for it ...

I've added a working docker-compose.yml for the multi-container setup to the master branch, in the "multi-container" folder.

The other one references the still-failing auto-upgrade. (It's really close, though...)

immauss · May 08 '22 08:05

Hi @immauss ,

Just tried to run "docker-compose.yml" from the "multi-container" folder; it executed. Can you let me know the login password? I tried the default admin/admin, but it's not working.

Logs for reference:

```
Choosing container start method from:
gsad
Starting Greenbone Security Assitannt !!
Starting Greenbone Security Assistant...
(gsad:79): gsad gmp-WARNING **: 18:10:28.372: Authentication failure for 'admin' from 172.20.0.1. Status was 1.
(gsad:79): libgvm util-WARNING **: 18:10:28.529: Failed to get server addresses for ovas_gvmd: Unknown error
(gsad:79): gsad gmp-WARNING **: 18:10:28.529: Authentication failure for 'admin' from 172.20.0.1. Status was 1.
Oops, secure memory pool already initialized
gsad main-Message: 18:10:37.991: Starting GSAD version 21.4.4
(gsad:13): libgvm util-WARNING **: 18:11:34.283: Failed to get server addresses for ovas_gvmd: Unknown error
(gsad:13): gsad gmp-WARNING **: 18:11:34.283: Authentication failure for 'admin' from 172.20.0.1. Status was 1.
(gsad:13): libgvm util-WARNING **: 18:16:00.801: Failed to get server addresses for ovas_gvmd: Unknown error
(gsad:13): gsad gmp-WARNING **: 18:16:00.801: Authentication failure for 'admin' from 172.20.0.1. Status was 1
```

It's stuck with the logs above.
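The `Failed to get server addresses for ovas_gvmd` warnings suggest gsad cannot resolve the gvmd service name on the compose network. A rough check, assuming the image ships `getent` (glibc-based images usually do):

```bash
# Can the gsad container resolve the gvmd container by name?
docker exec ovas_gsad getent hosts ovas_gvmd

# List the networks and see which containers are attached
# (replace <compose_network> with the network docker-compose created)
docker network ls
docker network inspect <compose_network>
```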

Thanks. I have commented on #109, please take a look.

harshalgithub · May 08 '22 14:05

it "should" be admin:admin by default. make sure you are not reusing the volumes ( This actually ran me in circles for days will trying to test the auto upgrades for postgresql 13 .... )

I've made a habit of just removing the volumes before starting things up when testing, to make sure I have a clean build.
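For reference, the simplest way to get a clean slate with compose-managed volumes (standard docker-compose behaviour, not specific to this project):

```bash
# Stop the stack and remove the volumes it declared
docker-compose down -v

# Or hunt down and remove a specific leftover volume by hand
docker volume ls
docker volume rm <volume_name>
```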

-Scott

immauss · May 29 '22 20:05

And for anyone else looking around here: mc-pg13 has been working great in my production environment on Postgres 13 for almost a week now! Lots of testing has gone into the multi-container build, and very soon it will become the main branch!

If you have had any issues or questions, please add them here.

Thanks, Scott

immauss · May 29 '22 20:05

I've been fighting with trying to get this going.

I had been running single container, but as part of the pg13 testing, I thought that I'd try multi-container too. I've experienced a ton of issues getting gvmd and openvas to start - it looks like one of them is clobbering /run/redis/redis.sock, which breaks things.

Used https://github.com/immauss/openvas/blob/master/mc-test/docker-compose.yml. Postgres is nice and happy.
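One way to confirm whether the redis socket really is being clobbered is to watch it from both containers that share the /run volume (a rough diagnostic sketch; container names as in that compose file):

```bash
# Check the redis socket from the scanner container and from the redis container
docker exec openvas ls -l /run/redis/
docker exec ovas_redis ls -l /run/redis/
```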

gvmd error?

```
Starting Greenbone Vulnerability Manager...
gvmd  -a 0.0.0.0  -p 9390 --listen-group=gvm  --osp-vt-update=/run/ospd/ospd.sock --max-email-attachment-size=64000000 --max-email-include-size=64000000 --max-email-message-size=64000000
Waiting for gvmd
Waiting for gvmd
Waiting for gvmd
admin
Time to fixup the gvm accounts.
Starting Postfix for report delivery by email
Starting Postfix Mail Transport Agent: postfix.
md   main:WARNING:2022-05-31 18h16.55 utc:50: gvmd: Another process is busy starting up
md   main:MESSAGE:2022-05-31 18h16.56 utc:197:    Greenbone Vulnerability Manager version 21.4.5 (DB revision 242)
md   main:WARNING:2022-05-31 18h16.56 utc:197: gvmd: Another process is busy starting up
md   main:MESSAGE:2022-05-31 18h16.56 utc:201:    Greenbone Vulnerability Manager version 21.4.5 (DB revision 242)
md   main:WARNING:2022-05-31 18h16.56 utc:201: gvmd: Another process is busy starting up
md manage:WARNING:2022-05-31 18h16.56 UTC:55: osp_scanner_feed_version: failed to connect to /run/ospd/ospd.sock
md   main:MESSAGE:2022-05-31 18h16.56 utc:59:    Greenbone Vulnerability Manager version 21.4.5 (DB revision 242)
md manage:   INFO:2022-05-31 18h16.56 utc:59:    Getting users.
md manage:   INFO:2022-05-31 18h16.57 UTC:54: update_scap: Updating data from feed
md manage:   INFO:2022-05-31 18h16.57 UTC:54: Updating CPEs
md   main:MESSAGE:2022-05-31 18h17.04 utc:450:    Greenbone Vulnerability Manager version 21.4.5 (DB revision 242)
md   main:WARNING:2022-05-31 18h17.04 utc:450: gvmd: Main process is already running
Choosing container start method from:
gvmd
Starting Greenbone Vulnerability Manager daemon !!
```

Then the container restarts. I'm using just a Docker volume for /run, as defined in the docker-compose file, and a host filesystem mount for /data.

A single container with the mc-pg13 tag works fine, but I haven't tested pg13 thoroughly.
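For comparison, a typical single-container invocation of that tag looks roughly like this (the volume name and host port are illustrative; only the container's 9392 web port is confirmed by the `docker ps` output earlier in the thread):

```bash
docker run -d --name openvas \
  -p 8080:9392 \
  -v openvas-data:/data \
  immauss/openvas:mc-pg13
```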

kjake · May 31 '22 19:05

Retrying as a single container with the new image, and I'm unable to start scans.

```
==> /usr/local/var/log/gvm/gvmd.log <==
md manage:   INFO:2022-06-08 15h55.14 UTC:493: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h55.24 UTC:499: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h55.34 UTC:503: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h55.44 UTC:506: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h55.54 UTC:509: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.04 UTC:521: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.14 UTC:524: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.24 UTC:527: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.35 UTC:531: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.45 UTC:534: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h56.55 UTC:537: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h57.05 UTC:549: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h57.15 UTC:552: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h57.25 UTC:555: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
==> /usr/local/var/log/gvm/openvas.log <==
libgvm util:MESSAGE:2022-06-08 15h57.29 utc:497: Updated NVT cache from version 0 to 202205311018
==> /usr/local/var/log/gvm/gvmd.log <==
md manage:   INFO:2022-06-08 15h57.35 UTC:559: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h57.45 UTC:562: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h57.55 UTC:565: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
gsad  gmp-Message: 15:58:00.933: Authentication success for 'admin' from 192.168.2.31
md manage:   INFO:2022-06-08 15h58.05 UTC:653: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
event task:MESSAGE:2022-06-08 15h58.05 UTC:651: Status of task Scan Offsite (ef11c6ec-8edb-4065-8699-f4901cbdea88) has changed to Requested
event task:MESSAGE:2022-06-08 15h58.05 UTC:651: Task Scan Offsite (ef11c6ec-8edb-4065-8699-f4901cbdea88) has been requested to start by admin
md manage:   INFO:2022-06-08 15h58.21 UTC:737: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:   INFO:2022-06-08 15h58.31 UTC:741: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
md manage:WARNING:2022-06-08 15h58.33 UTC:657: Could not connect to Scanner at /run/ospd/ospd-openvas.sock
md manage:WARNING:2022-06-08 15h58.33 UTC:657: OSP start_scan f3f6089c-eae0-496a-aeab-a2ccbc54347b: Could not connect to Scanner
event task:MESSAGE:2022-06-08 15h58.33 UTC:657: Status of task Scan Offsite (ef11c6ec-8edb-4065-8699-f4901cbdea88) has changed to Done
event task:MESSAGE:2022-06-08 15h58.33 UTC:657: Status of task Scan Offsite (ef11c6ec-8edb-4065-8699-f4901cbdea88) has changed to Interrupted
md manage:   INFO:2022-06-08 15h58.41 UTC:744: osp_scanner_feed_version: failed to get scanner_feed_version. OSPd OpenVAS is still starting
```
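When gvmd logs `Could not connect to Scanner at /run/ospd/ospd-openvas.sock`, a first thing to check is whether the ospd socket exists inside the container yet; it may not be ready until ospd-openvas has finished loading the feed. A rough check, assuming the container is named `openvas`:

```bash
# Does the ospd-openvas socket exist yet?
docker exec openvas ls -l /run/ospd/

# Watch ospd-openvas start-up progress (log path assumed; adjust to this image's layout)
docker exec openvas tail -n 50 /usr/local/var/log/gvm/ospd-openvas.log
```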

kjake · Jun 08 '22 15:06

@kjake, did you have any luck with the most recent build?

The '21.04.09' tag is the most recent multi-container build with pg13.

-Scott

immauss · Jun 16 '22 07:06

Hey Scott, I was away on vacation at the time. Let me re-test in the coming week and get back to you. I had reverted to immauss/openvas:latest, and I'm seeing one issue in that build (my tasks become unscheduled).

kjake · Jun 30 '22 16:06

No worries. I hope you had a nice, relaxing time. Mine is coming soon and I'm REALLY looking forward to it. :)

-Scott

immauss · Jul 03 '22 16:07

Closing out in favor of https://github.com/immauss/openvas/issues/139

immauss · Sep 01 '22 04:09