External PostgreSQL CPU 100% for 7+ hours after initial setup
### Current Behavior

In GCP Cloud SQL, this has been the state of the database since the initial setup. I have 1 CPU and it is constantly at 100% utilization. The database is also processing a large number of transactions.
Logs are similar to the following:
```
SELECT "A0"."CPE22","A0"."CPE23","A0"."EDITION","A0"."LANGUAGE","A0"."OTHER","A0"."PART","A0"."PRODUCT","A0"."PURL","A0"."PURL_NAME","A0"."PURL_NAMESPACE","A0"."PURL_QUALIFIERS","A0"."PURL_SUBPATH","A0"."PURL_TYPE","A0"."PURL_VERSION","A0"."SWEDITION","A0"."TARGETHW","A0"."TARGETSW","A0"."UPDATE","A0"."UUID","A0"."VENDOR","A0"."VERSION","A0"."VERSIONENDEXCLUDING","A0"."VERSIONENDINCLUDING","A0"."VERSIONSTARTEXCLUDING","A0"."VERSIONSTARTINCLUDING","A0"."VULNERABLE" FROM "VULNERABLESOFTWARE" "A0" WHERE "A0"."ID" = $1
bind <unnamed>: SHOW TRANSACTION ISOLATION LEVEL
```
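To confirm which statements are actually dominating CPU time on the instance, a query against `pg_stat_statements` can help. This is a sketch only: it assumes the `pg_stat_statements` extension is enabled on the Cloud SQL instance (it is a supported Cloud SQL flag), and it uses the PostgreSQL 13+ column names:

```sql
-- Top ten statements by cumulative execution time.
-- Assumes pg_stat_statements is enabled; column names are PostgreSQL 13+.
SELECT calls,
       round(total_exec_time::numeric, 2) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms,
       left(query, 80)                    AS query
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;
```

If the `VULNERABLESOFTWARE` lookup shown in the logs tops this list with a very high `calls` count, that confirms the load is driven by many small transactions rather than a few slow queries.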
### Steps to Reproduce

- Use the latest apiserver image
- Create a Cloud SQL PostgreSQL 14 instance
- Run the apiserver
- DB goes boom
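For reference, a minimal way to point the apiserver container at an external PostgreSQL instance uses Dependency-Track's documented `ALPINE_DATABASE_*` environment variables. The host, database name, and credentials below are placeholders, not values from this report:

```shell
# Placeholders: replace <CLOUD_SQL_IP>, dtrack, dtrack_user, and the password
# with the values for your Cloud SQL instance.
docker run -d --name dtrack-apiserver \
  -e ALPINE_DATABASE_MODE=external \
  -e ALPINE_DATABASE_URL="jdbc:postgresql://<CLOUD_SQL_IP>:5432/dtrack" \
  -e ALPINE_DATABASE_DRIVER=org.postgresql.Driver \
  -e ALPINE_DATABASE_USERNAME=dtrack_user \
  -e ALPINE_DATABASE_PASSWORD=changeme \
  -p 8081:8080 \
  dependencytrack/apiserver:4.10.1
```

On first start against an empty database, the apiserver performs schema creation and begins NVD mirroring, which is when the load described above appears.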
### Expected Behavior

If any initial setup is required, it should not be performed in hundreds of thousands of individual transactions.
### Dependency-Track Version

4.10.1

### Dependency-Track Distribution

Container Image

### Database Server

PostgreSQL

### Database Server Version

14.9

### Browser

Google Chrome
### Checklist
- [X] I have read and understand the contributing guidelines
- [X] I have checked the existing issues for whether this defect was already reported
The query in question is executed during NVD mirroring. Unfortunately, it is a heavy operation by nature, as it has to create and/or update hundreds of thousands of vulnerability records, as well as records for vulnerable versions and version ranges. It's a lot of data.

I agree on the transactions part, though. In fact, I made an effort to reduce transaction commits when I implemented support for mirroring the NVD via its REST API. Please give it a try: https://docs.dependencytrack.org/datasources/nvd/#mirroring-via-nvd-rest-api

Note that you may need to restart the container for the change to take effect sooner.