Image count is lower than expected during scan
Harbor version: 2.5.4
When initiating a scan, not all of the images get scanned; it stops at 197 every time.
Actual total images: ~4-5k.
I tried the following:
- Clearing out scan reports in both Trivy and the Harbor DB.
- Restarting the jobservice container.
- Clearing the Harbor Redis.
Can you run this query in the DB (select count(*) from artifact;) and share the result? Did you notice any errors in core.log or jobservice.log?
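To make that check copy-pasteable, something like the following can be run against Harbor's PostgreSQL database; the group-by variant is only a suggestion to see what the total is made of, and the artifact.type column is assumed here rather than confirmed in this thread:

-- Total number of artifacts known to Harbor.
select count(*) from artifact;

-- Optional breakdown by artifact type (assumes the artifact.type column exists).
select type, count(*) from artifact group by type order by count(*) desc;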
@wy65701436 Today, when I ran the scan, the count increased to 245.
Artifact count: (screenshot not included)
Errors in core log:
error "a previous scan process is Pending" (repeated 4 times)
No errors in jobservice.log.
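The "a previous scan process is Pending" error suggests an earlier SCAN_ALL execution is still in a non-final state and is blocking new runs. A check along these lines against the execution table (that table is mentioned later in this thread; the exact column and status names are assumptions modelled on the task columns shown below) could surface the stuck row:

-- Sketch only: list SCAN_ALL executions that have not reached a final state.
-- Column names and status values are assumed, not confirmed in this thread.
select id, status, start_time
from execution
where vendor_type = 'SCAN_ALL'
  and status not in ('Success', 'Error', 'Stopped')
order by start_time desc;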
I had run these commands earlier because we were seeing that many artifacts were not being scanned and reports were missing:
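-- Note: these statements wipe all stored scan reports and vulnerability data, so every artifact has to be re-scanned afterwards.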
TRUNCATE vulnerability_record CASCADE;
TRUNCATE report_vulnerability_record;
TRUNCATE scan_report CASCADE;
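After a wipe like that, one rough way to see how many artifacts still have no report is a query of this shape; it assumes scan_report references artifacts by digest, which is an assumption about the schema rather than something shown in this thread:

-- Sketch: count artifacts with no scan report at all (assumed digest linkage).
select count(*)
from artifact a
where not exists (
  select 1 from scan_report s where s.digest = a.digest
);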
@piyush94 Could you please help find log entries containing the following message:
but no scan job submitted to the job service
Hello,
Same here: the full scan has not been working properly since Sunday, November 13th. The scheduled SCAN_ALL action (planned for every Sunday in our setup) does not pick up all the artifacts.
We had some queries in place to clean the task and execution tables. We stopped them last week (Nov 15th) to see whether they had any impact on the behaviour; no change, SCAN_ALL is still broken.
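The cleanup queries themselves were not shared; purely as an illustration of the kind of housekeeping meant here, using only the task columns that appear in the queries further down, such a job might look like:

-- Hypothetical example only, not the actual queries that were in place:
-- drop finished IMAGE_SCAN task rows older than 30 days.
delete from task
where vendor_type = 'IMAGE_SCAN'
  and status in ('Success', 'Error')
  and creation_time < now() - interval '30 days';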
On the 15th we had to implement a script that launches IMAGE_SCAN on every tag in the registry; that works like a charm, and the scanner performs a scan on every single image.
Here are the stats from the DB:
breghr1=> select count(*), vendor_type, status from task group by vendor_type, status;
count | vendor_type | status
-------+--------------------+---------
234 | IMAGE_SCAN | Error
87277 | IMAGE_SCAN | Success
15 | SCHEDULER | Success
3 | REPLICATION | Error
227 | RETENTION | Success
35 | GARBAGE_COLLECTION | Success
190 | REPLICATION | Success
3 | IMAGE_SCAN | Running
6317 | SCAN_ALL | Success
(9 rows)
breghr1=> select count(*), to_char(creation_time, 'YYYY-MM-DD'), status from task where vendor_type='SCAN_ALL' group by to_char(creation_time, 'YYYY-MM-DD'), status;
count | to_char | status
-------+------------+---------
4327 | 2022-11-15 | Success
1990 | 2022-11-20 | Success
(2 rows)
breghr1=>select count(*), to_char(creation_time, 'YYYY-MM-DD'), status from task where vendor_type='IMAGE_SCAN' group by to_char(creation_time, 'YYYY-MM-DD'), status ORDER BY to_char(creation_time, 'YYYY-MM-DD') ASC;
count | to_char | status
-------+------------+---------
8 | 2022-10-30 | Success
454 | 2022-10-31 | Success
101 | 2022-11-01 | Success
723 | 2022-11-02 | Success
959 | 2022-11-03 | Success
762 | 2022-11-04 | Success
162 | 2022-11-05 | Success
121 | 2022-11-06 | Success
788 | 2022-11-07 | Success
888 | 2022-11-08 | Success
1 | 2022-11-09 | Error
1039 | 2022-11-09 | Success
811 | 2022-11-10 | Success
91 | 2022-11-11 | Success
63 | 2022-11-12 | Success
72 | 2022-11-13 | Success
868 | 2022-11-14 | Success
13 | 2022-11-15 | Error
3 | 2022-11-15 | Running
1669 | 2022-11-15 | Success
140 | 2022-11-16 | Error
35981 | 2022-11-16 | Success
79 | 2022-11-17 | Error
40539 | 2022-11-17 | Success
1 | 2022-11-18 | Error
909 | 2022-11-18 | Success
103 | 2022-11-19 | Success
91 | 2022-11-20 | Success
76 | 2022-11-21 | Success
(29 rows)
breghr1=> select count(*) from artifact;
count
-------
69182
(1 row)
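One way to compare those two numbers directly is to count the tasks attached to the most recent SCAN_ALL execution; this assumes task.execution_id references execution.id, which is not shown in this thread:

-- Sketch: how many artifacts did the latest SCAN_ALL run actually enqueue?
select count(*)
from task t
join execution e on e.id = t.execution_id
where e.vendor_type = 'SCAN_ALL'
  and e.id = (select max(id) from execution where vendor_type = 'SCAN_ALL');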
Standard behaviour of the image_scan_queue_size metric when the scan works (10/30/2022): (graph not included)
Last Sunday's behaviour (11/20/2022): (graph not included)
@zyyw I am not seeing this message in the logs. One thing that is happening, though, is that the count increases each day; it is now at 345.
It seems to be related to this issue: https://github.com/goharbor/harbor/issues/17455.
The issue seemed to start right after we pushed our first image signed with Cosign; we have only one signed image in the registry out of over 70k images.
Once we removed the signed image: (graph not included)
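For anyone trying to locate signed images directly in the database: Harbor 2.5 stores Cosign signatures as artifact accessories, so a lookup roughly like the one below should list them (the artifact_accessory table and column names are an assumption here, not something confirmed in this thread):

-- Sketch: list artifacts that have a Cosign signature accessory (assumed schema).
select a.repository_name, a.digest, acc.type
from artifact_accessory acc
join artifact a on a.id = acc.subject_artifact_id
where acc.type like '%cosign%';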
This issue is being marked stale due to a period of inactivity. If this issue is still relevant, please comment or remove the stale label. Otherwise, this issue will close in 30 days.
This issue was closed because it has been stalled for 30 days with no activity. If this issue is still relevant, please re-open a new issue.