Sudden failure to decrypt the OSS Index API Token
Current Behavior
We're seeing something that resembles an expiration of the OSS Index API Token: after running without issues for a couple of days with our API Token (created at https://ossindex.sonatype.org/user/settings), we suddenly see messages such as these in the logs:
2022-12-28 08:00:27,749 ERROR [OssIndexAnalysisTask] An error occurred decrypting the OSS Index API Token. Skipping
javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
at org.dependencytrack.tasks.scanners.OssIndexAnalysisTask.inform(OssIndexAnalysisTask.java:138)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.performAnalysis(VulnerabilityAnalysisTask.java:169)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.analyzeComponents(VulnerabilityAnalysisTask.java:128)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.inform(VulnerabilityAnalysisTask.java:77)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:101)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
When this happens, no new (OSS Index-fed) vulnerabilities are detected until we create a new API token and set it in /admin -> Analyzers -> Sonatype OSS Index.
Steps to Reproduce
- Create an API Token at https://ossindex.sonatype.org/user/settings
- Enter the API token in /admin -> Analyzers -> Sonatype OSS Index
- Let DTrack run for a while (a week or two) while you upload SBOMs
- Check the logs
Expected Behavior
The OSS Index API Token never "expires"
Dependency-Track Version
4.7.0
Dependency-Track Distribution
Container Image
Database Server
PostgreSQL
Database Server Version
13.7
Browser
N/A
Checklist
- [X] I have read and understand the contributing guidelines
- [X] I have checked the existing issues for whether this defect was already reported
Thanks for reporting @carniz!
This error happens when the secret couldn't be decrypted using the secret key (stored at /data/.dependency-track/keys/secret.key). A common cause is restoring the database from a backup without also restoring the secret key that was used to encrypt the values in the database.
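A quick first check in such cases is whether the secret key on disk is older or newer than the restored database. Rough sketch only: the container name is an example and the path assumes the containerized default location.

# Example container name; adjust to your deployment.
# If the database was restored from a backup but this key was regenerated afterwards,
# values encrypted with the old key (such as the OSS Index token) can no longer be decrypted.
docker exec dtrack-apiserver ls -l /data/.dependency-track/keys/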
A few questions to better understand the situation:
- Did you upgrade from a previous DT version, or is this an entirely new instance?
- Have you mounted the container's /data volume somehow so it is persisted across container restarts?
- Has the instance been running the whole time, or was it restarted somewhere in between?
- Can you correlate the behavior you're seeing with container restarts, or is this completely unrelated?
I am seeing the same issue - and not for the first time. But what @nscuro said makes sense. I don't have the /data volume mounted, so if there is a key persisted there, any upgrade will make this fail? Is that somewhere in the docs? I thought mounting it was optional since it only holds caches :(
See https://docs.dependencytrack.org/getting-started/data-directory/
The "optional" part is overriding the location of the data directory. But the directory itself is mandatory. https://docs.dependencytrack.org/getting-started/deploy-docker/
Although that doc page 👆 does not mention the secret key and its purpose. Will need to add that. I was also planning to add "backup & restore" documentation.
Anyway, there's some additional documentation regarding the secret key here: https://docs.dependencytrack.org/getting-started/configuration/#secret-key
Yes, please add a hint about this in the docs, that would help :)
A few questions to better understand the situation:
- Did you upgrade from a previous DT version, or is this an entirely new instance?
This is an entirely new instance that was freshly launched with DT 4.7.0 on Dec 21.
- Have you mounted the container's /data volume somehow so it is persisted across container restarts?
Yes: the container's /data directory is mounted to a named volume:
# docker-compose config | grep -A38 dtrack-apiserver | head -39 | sed "s/\(ALPINE_.*:\) \(.*\)/\1 here be dragons/"
  dtrack-apiserver:
    deploy:
      resources:
        limits:
          memory: 24576m
        reservations:
          memory: 16384m
      restart_policy:
        condition: on-failure
    environment:
      ALPINE_DATABASE_DRIVER: here be dragons
      ALPINE_DATABASE_MODE: here be dragons
      ALPINE_DATABASE_PASSWORD: here be dragons
      ALPINE_DATABASE_POOL_ENABLED: here be dragons
      ALPINE_DATABASE_POOL_IDLE_TIMEOUT: here be dragons
      ALPINE_DATABASE_POOL_MAX_LIFETIME: here be dragons
      ALPINE_DATABASE_POOL_MAX_SIZE: here be dragons
      ALPINE_DATABASE_POOL_MIN_IDLE: here be dragons
      ALPINE_DATABASE_URL: here be dragons
      ALPINE_DATABASE_USERNAME: here be dragons
      ALPINE_OIDC_CLIENT_ID: here be dragons
      ALPINE_OIDC_ENABLED: here be dragons
      ALPINE_OIDC_ISSUER: here be dragons
      ALPINE_OIDC_TEAMS_CLAIM: here be dragons
      ALPINE_OIDC_TEAM_SYNCHRONIZATION: here be dragons
      ALPINE_OIDC_USERNAME_CLAIM: here be dragons
      ALPINE_OIDC_USER_PROVISIONING: here be dragons
    image: dependencytrack/apiserver:4.7.0
    logging:
      driver: json-file
      options:
        max-file: '20'
        max-size: 100m
    ports:
    - published: 8081
      target: 8080
    restart: unless-stopped
    volumes:
    - dependency-track:/data:rw
- Has the instance been running the whole time, or was it restarted somewhere in between?
The instance has been up since it was launched on Dec 21.
- Can you correlate the behavior you're seeing with container restarts, or is this completely unrelated?
I wish I had been able to see such a correlation (it was also my first suspicion), but from what I can see (using docker ps) it seems the container has been running for at least 2 weeks: the CREATED column says "2 weeks ago" and STATUS says "Up 2 weeks". The granularity of docker ps does, however, not show the exact start time of the container process, so it might have been started anywhere between Dec 21 and Dec 27 depending on the semantics of "Up 2 weeks" (i.e. "exactly" | "more than").
For completeness:
# ls -l $(docker volume inspect compose_dependency-track | jq -r '.[].Mountpoint')/.dependency-track/keys
total 12
-rw-r--r-- 1 ubuntu ubuntu 2373 Dec 21 09:27 private.key
-rw-r--r-- 1 ubuntu ubuntu 550 Dec 21 09:27 public.key
-rw-r--r-- 1 ubuntu ubuntu 32 Dec 21 09:27 secret.key
And I figured out a way to display the detailed timestamps:
# docker inspect -f '{{ .Created }}' compose_dtrack-apiserver_1
2022-12-21T09:26:50.682136863Z
And here's an interesting detail:
# docker inspect -f '{{ .State.StartedAt }}' compose_dtrack-apiserver_1
2023-01-08T12:47:57.504223704Z
So, the apiserver restarted on Sunday (for unknown reasons ATM), and the OssIndexAnalysisTask seems to be able to use the API token just fine - so far:
# apiserver-logs | grep OssIndexAnalysisTask | tail -10
2023-01-11 08:43:45,544 INFO [OssIndexAnalysisTask] Analyzing 6 component(s)
2023-01-11 08:43:45,544 INFO [OssIndexAnalysisTask] Sonatype OSS Index analysis complete
2023-01-11 08:44:04,664 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2023-01-11 08:44:15,076 INFO [OssIndexAnalysisTask] Analyzing 1 component(s)
2023-01-11 08:44:15,076 INFO [OssIndexAnalysisTask] Sonatype OSS Index analysis complete
2023-01-11 08:44:39,850 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2023-01-11 08:45:02,901 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2023-01-11 08:45:21,489 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2023-01-11 08:45:45,410 INFO [OssIndexAnalysisTask] Analyzing 12 component(s)
2023-01-11 08:45:45,410 INFO [OssIndexAnalysisTask] Sonatype OSS Index analysis complete
The token has thus survived a restart, or at least this specific restart, without problems. On the other hand, something happened on Dec 28 that made it unable to decrypt the value from the DB.
Hi,
I have this issue too when running dtrack on Docker Swarm. After a couple of days I have to restart the api service because the api is unreachable, and in the logs I see ERROR [OssIndexAnalysisTask] An error occurred decrypting the OSS Index API Token. Skipping javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
FYI data is persisted in a dedicated Docker Volume and I use DTrack version 4.7.0.
Any workaround?
We have this issue too - the OSS Index service API key would run for a while and then stop working (with the same error being thrown). The only way to resolve it is to regenerate another token from the Sonatype user profile. I think there is a rate-limit threshold on the API consumption and it stops working once the threshold is met.
I did try to contact Sonatype about purchasing a commercial license for this service only, but they would not sell it without purchasing the whole product suite. It's frustrating, but that's the trade-off of a 'free' service.
This is still a thing in 4.8.2 with a mounted /data directory. Please get a fix
To everyone experiencing this issue, please keep an eye on your logs after upgrading to 4.11. We introduced changes in that version to provide more information if this error occurs.
Also note that we will now fall back to unauthenticated usage of OSS Index, if decryption of the API key fails. So you'll not experience a full-blown service disruption anymore.
I tried a lot of things to reproduce, but was unable to do so. Sharing extended exception details as introduced in 4.11 would help me to pinpoint where the issue might be.
Final request: please report back if this issue occurs again.
Note that the additional logging we added directly links to this issue. The fact that no one has commented since then makes me think the issue is resolved.
I haven't seen it again so far.
Hi, I just got the below exception on v4.12. Which DB table should I be checking for this value?
2024-10-21 09:32:33,988 WARN [DataEncryption] Failed to decrypt value, possibly because it has been encrypted with a different key, or the encrypted value is corrupted. To verify the latter, please check the respective value in the database. For comparison, the base64-encoded value for which decryption was attempted has a length of 88, and a sha256 digest of XXX. If this is different to what's stored in your database, please report this to https://github.com/DependencyTrack/dependency-track/issues/2366, so the root cause can be identified.
2024-10-21 09:32:33,988 WARN [NistApiMirrorTask] Failed to decrypt API key; Continuing without authentication
javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
at org.dependencytrack.util.DebugDataEncryption.retryDecryptAsString(DebugDataEncryption.java:59)
at org.dependencytrack.util.DebugDataEncryption.decryptAsString(DebugDataEncryption.java:54)
at org.dependencytrack.tasks.NistApiMirrorTask.lambda$inform$1(NistApiMirrorTask.java:120)
at java.base/java.util.Optional.map(Unknown Source)
at org.dependencytrack.tasks.NistApiMirrorTask.inform(NistApiMirrorTask.java:118)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Still happening. Out of habit, I just updated the stored password again before checking the db, so I don't have the requested information this time, sorry!
The only things of note are that it was the first BOM imported since the service was started from a new instance of the latest Docker image, running in an Azure App Service.
2024-10-30T10:05:59.415199266Z 2024-10-30 10:05:59,414 INFO [InternalAnalysisTask] Internal analysis complete
2024-10-30T10:05:59.437675086Z 2024-10-30 10:05:59,437 WARN [DataEncryption] Failed to decrypt value, possibly because it has been encrypted with a different key, or the encrypted value is corrupted. To verify the latter, please check the respective value in the database. For comparison, the base64-encoded value for which decryption was attempted has a length of 88, and a sha256 digest of fc90be9b3c9aa03d2952a9aab3f6cf975eebb39a4f8dd9bf01c6b0256e5f8f81. If this is different to what's stored in your database, please report this to https://github.com/DependencyTrack/dependency-track/issues/2366, so the root cause can be identified.
2024-10-30T10:05:59.444079835Z 2024-10-30 10:05:59,442 ERROR [OssIndexAnalysisTask] An error occurred decrypting the OSS Index API Token; Continuing without authentication
2024-10-30T10:05:59.444105935Z javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
2024-10-30T10:05:59.444112835Z at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
2024-10-30T10:05:59.444118435Z at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
2024-10-30T10:05:59.444123435Z at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
2024-10-30T10:05:59.444127835Z at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
2024-10-30T10:05:59.444140734Z at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
2024-10-30T10:05:59.444145034Z at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
2024-10-30T10:05:59.444149634Z at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
2024-10-30T10:05:59.444153934Z at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
2024-10-30T10:05:59.444158234Z at org.dependencytrack.util.DebugDataEncryption.retryDecryptAsString(DebugDataEncryption.java:59)
2024-10-30T10:05:59.444163334Z at org.dependencytrack.util.DebugDataEncryption.decryptAsString(DebugDataEncryption.java:54)
2024-10-30T10:05:59.444215834Z at org.dependencytrack.tasks.scanners.OssIndexAnalysisTask.inform(OssIndexAnalysisTask.java:162)
2024-10-30T10:05:59.444229034Z at org.dependencytrack.tasks.VulnerabilityAnalysisTask.performAnalysis(VulnerabilityAnalysisTask.java:197)
2024-10-30T10:05:59.444233734Z at org.dependencytrack.tasks.VulnerabilityAnalysisTask.analyzeComponents(VulnerabilityAnalysisTask.java:155)
2024-10-30T10:05:59.444238134Z at org.dependencytrack.tasks.VulnerabilityAnalysisTask.inform(VulnerabilityAnalysisTask.java:87)
2024-10-30T10:05:59.444288033Z at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:110)
2024-10-30T10:05:59.444307433Z at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
2024-10-30T10:05:59.444312433Z at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
2024-10-30T10:05:59.444316733Z at java.base/java.lang.Thread.run(Unknown Source)
2024-10-30T10:05:59.447427808Z 2024-10-30 10:05:59,447 INFO [OssIndexAnalysisTask] Starting Sonatype OSS Index analysis task
2024-10-30T10:06:15.422313005Z 2024-10-30 10:06:15,421 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:06:25.008763782Z 2024-10-30 10:06:25,008 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:06:35.205316676Z 2024-10-30 10:06:35,204 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:06:45.262288116Z 2024-10-30 10:06:45,261 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:06:56.530385874Z 2024-10-30 10:06:56,529 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:07:06.774269161Z 2024-10-30 10:07:06,773 INFO [OssIndexAnalysisTask] Analyzing 128 component(s)
2024-10-30T10:07:12.458142859Z 2024-10-30 10:07:12,457 INFO [OssIndexAnalysisTask] Analyzing 73 component(s)
Dependency-Track v4.12
I see the following logs:
2025-01-17 13:14:19,758 INFO [NistMirrorTask] Initiating download of https://nvd.nist.gov/feeds/json/cve/1.1/nvdcve-1.1-2025.json.gz
2025-01-17 13:14:19,770 WARN [DataEncryption] Failed to decrypt value, possibly because it has been encrypted with a different key, or the encrypted value is corrupted. To verify the latter, please check the respective value in the database. For comparison, the base64-encoded value for which decryption was attempted has a length of 88, and a sha256 digest of 3a6abb59e6babd2240583174aa0c736b21ef3ed223b54dc8497090c0c2f83d89. If this is different to what's stored in your database, please report this to https://github.com/DependencyTrack/dependency-track/issues/2366, so the root cause can be identified.
2025-01-17 13:14:19,771 WARN [NistApiMirrorTask] Failed to decrypt API key; Continuing without authentication
javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
at org.dependencytrack.util.DebugDataEncryption.retryDecryptAsString(DebugDataEncryption.java:59)
at org.dependencytrack.util.DebugDataEncryption.decryptAsString(DebugDataEncryption.java:54)
at org.dependencytrack.tasks.NistApiMirrorTask.lambda$inform$1(NistApiMirrorTask.java:120)
at java.base/java.util.Optional.map(Unknown Source)
at org.dependencytrack.tasks.NistApiMirrorTask.inform(NistApiMirrorTask.java:118)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2025-01-17 13:14:19,777 WARN [NistApiMirrorTask] No API key configured; Aggressive rate limiting to be expected
Any solutions?
@konstantin-921 First, have you ensured that you persist the data directory, as mentioned earlier in this thread?
Second, have you done what the log message asks you to do?
To verify the latter, please check the respective value in the database. For comparison, the base64-encoded value for which decryption was attempted has a length of 88, and a sha256 digest of 3a6abb59e6babd2240583174aa0c736b21ef3ed223b54dc8497090c0c2f83d89.
To get the value of interest from the database:
select "PROPERTYVALUE"
from "CONFIGPROPERTY"
where "GROUPNAME" = 'vuln-source'
and "PROPERTYNAME" = 'nvd.api.key';
That value should be in Base64 encoded format. You can put this value into this CyberChef recipe to calculate its SHA-256 digest: https://gchq.github.io/CyberChef/#recipe=SHA2('256',64,1)&input=eW91ciB2YWx1ZSBoZXJl
The output of the recipe should be exactly 3a6abb59e6babd2240583174aa0c736b21ef3ed223b54dc8497090c0c2f83d89. If it is the same, we can rule out that the key was corrupted in the database. Then it has to be an in-memory corruption.
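If you prefer the command line over CyberChef, something like this should produce the same digest (assuming, as the log message suggests, that the digest is computed over the Base64 string itself, not the decoded bytes):

# Paste the PROPERTYVALUE returned by the query above; the digest is taken over the Base64 text as-is.
printf '%s' 'PASTE_BASE64_VALUE_HERE' | sha256sum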
@nscuro Thanks for the reply! I checked the configuration, and it looks like the /data directory is not persistent. I will fix it.
I will also check the data in the DB.
Thanks @nscuro! It works after the following steps:
- Enable persistentVolume for the apiServer (helm chart) - see the sketch below
- Redeploy the StatefulSet
- Re-add the API keys via the UI
- Test by redeploying and checking with the help of the provided DB command
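Roughly, the override can look like the following. This is only a sketch: the value keys apiServer.persistentVolume.enabled and apiServer.persistentVolume.size, as well as the release and chart names, are assumptions based on the community helm chart and should be verified against its values.yaml.

# Sketch only; release name, chart reference and value keys are assumptions - check your chart's values.yaml.
helm upgrade dependency-track dependency-track/dependency-track \
  --namespace dependency-track \
  --set apiServer.persistentVolume.enabled=true \
  --set apiServer.persistentVolume.size=5Gi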
We have v4.10.1 running in Docker with a persisted volume mounted to /data as per the documentation.
We still got the following today after a restart:
2025-03-11 11:47:33,276 ERROR [OssIndexAnalysisTask] An error occurred decrypting the OSS Index API Token. Skipping
javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
at org.dependencytrack.tasks.scanners.OssIndexAnalysisTask.inform(OssIndexAnalysisTask.java:140)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.performAnalysis(VulnerabilityAnalysisTask.java:162)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.analyzeComponents(VulnerabilityAnalysisTask.java:121)
at org.dependencytrack.tasks.VulnerabilityAnalysisTask.inform(VulnerabilityAnalysisTask.java:70)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
According to Portainer the volume was created 2023-09-15 09:39:26, so it is definitely persisted.
Edit: I do see the following entries on startup:
2025-03-11 11:01:13,983 INFO [KeyManager] Generating new key pair
2025-03-11 11:01:14,907 INFO [KeyManager] Saving key pair
Checking if there are multiple ones, I found the following:
Could it be that (despite the documentation) it's still trying the user directory and not the /data folder?
We do it as described in the documentation: https://docs.dependencytrack.org/getting-started/deploy-docker/#docker-compose-automated--orchestration
This is the startup information:
2025-03-11 11:10:03,931 [] INFO [alpine.Config] --------------------------------------------------------------------------------
2025-03-11 11:10:03,933 [] INFO [alpine.Config] OS Name: Linux
2025-03-11 11:10:03,933 [] INFO [alpine.Config] OS Version: 4.18.0-553.42.1.el8_10.x86_64
2025-03-11 11:10:03,933 [] INFO [alpine.Config] OS Arch: amd64
2025-03-11 11:10:03,933 [] INFO [alpine.Config] CPU Cores: 4
2025-03-11 11:10:03,935 [] INFO [alpine.Config] Max Memory: 6.2 GB (6,672,089,088.0 bytes)
2025-03-11 11:10:03,935 [] INFO [alpine.Config] Java Vendor: Eclipse Adoptium
2025-03-11 11:10:03,937 [] INFO [alpine.Config] Java Version: 21.0.1+12-LTS
2025-03-11 11:10:03,938 [] INFO [alpine.Config] Java Home: /opt/java/openjdk
2025-03-11 11:10:03,938 [] INFO [alpine.Config] Java Temp: /tmp
2025-03-11 11:10:03,938 [] INFO [alpine.Config] User: root
2025-03-11 11:10:03,938 [] INFO [alpine.Config] User Home: /root
2025-03-11 11:10:03,938 [] INFO [alpine.Config] --------------------------------------------------------------------------------
2025-03-11 11:10:03,938 [] INFO [alpine.Config] Initializing Configuration
2025-03-11 11:10:03,939 [] INFO [alpine.Config] System property alpine.application.properties not specified
2025-03-11 11:10:03,939 [] INFO [alpine.Config] Loading application.properties from classpath
2025-03-11 11:10:03,943 [] INFO [alpine.Config] --------------------------------------------------------------------------------
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Application: Dependency-Track
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Version: 4.10.1
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Built-on: 2023-12-19T10:56:01Z
2025-03-11 11:10:03,944 [] INFO [alpine.Config] --------------------------------------------------------------------------------
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Framework: Alpine
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Version : 2.2.4
2025-03-11 11:10:03,944 [] INFO [alpine.Config] Built-on: 2023-12-06T15:54:14Z
2025-03-11 11:10:03,944 [] INFO [alpine.Config] --------------------------------------------------------------------------------
When inspecting the container, I can see the following (truncated JSON):
"Mounts": [
{
"Type": "volume",
"Name": "dependencytrack_dependency-track",
"Source": "/var/lib/docker/volumes/dependencytrack_dependency-track/_data",
"Destination": "/data",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
...
"Config": {
...
"Env": [
"ALPINE_OIDC_ENABLED=true",
"ALPINE_HTTP_PROXY_ADDRESS=<redacted>",
"ALPINE_OIDC_TEAMS_CLAIM=groups",
"ALPINE_DATABASE_MODE=external",
"ALPINE_DATABASE_DRIVER=com.microsoft.sqlserver.jdbc.SQLServerDriver",
"ALPINE_OIDC_USER_PROVISIONING=true",
"ALPINE_DATABASE_USERNAME=<redacted>",
"ALPINE_OIDC_ISSUER=<redacted>",
"ALPINE_METRICS_ENABLED=true",
"ALPINE_OIDC_CLIENT_ID=<redacted>",
"ALPINE_OIDC_TEAM_SYNCHRONIZATION=true",
"ALPINE_HTTP_PROXY_PORT=80",
"ALPINE_DATABASE_PASSWORD=<redacted>",
"ALPINE_DATABASE_URL=<redacted>",
"ALPINE_OIDC_USERNAME_CLAIM=preferred_username",
"PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"TZ=Etc/UTC",
"LOGGING_LEVEL=INFO",
"JAVA_OPTIONS=-XX:+UseParallelGC -XX:MaxRAMPercentage=90.0",
"EXTRA_JAVA_OPTIONS=",
"CONTEXT=/",
"WAR_FILENAME=dependency-track-apiserver.jar",
"JAVA_HOME=/opt/java/openjdk",
"LANG=C.UTF-8",
"HOME=/data/",
"DEFAULT_TEMPLATES_OVERRIDE_ENABLED=false",
"DEFAULT_TEMPLATES_OVERRIDE_BASE_DIRECTORY=/data/",
"LOGGING_CONFIG_PATH=logback.xml"
So I would assume that /data is used as home (which it isn't according to the logs).
The issue on our end was that we had to create a child image that contains some zScaler certificates, and we forgot to switch the user back to UID 1000. Doing that (either in the image or by specifying it in the docker compose file) fixed the issue, as the container then used the correct home folder. Maybe this can shed some light on the issue, as most of those affected seem to be using Docker.
For the moment we will only mount an adjusted cacerts file instead of using a child image, to make version upgrades easier. Using multi-stage builds would also be an option to safely pick up upstream changes to cacerts without switching the user.
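If you suspect the same root cause, a quick sanity check is to compare the effective UID and home directory inside the running container with what the startup log prints. The service name below is just an example; per this thread the image is expected to run as UID 1000 with HOME=/data/.

# Example service name; adjust to your compose file.
docker compose exec dtrack-apiserver id -u
docker compose exec dtrack-apiserver sh -c 'echo "$HOME"'
docker compose exec dtrack-apiserver ls -l /data/.dependency-track/keys/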
Hi, I'm also experiencing this issue. I'm deploying DT 4.12.7 along with postgres (not configuring any PV). I tried to configure an API key in the web interface for the first time and got the issue:
2025-03-20 12:34:19,826 WARN [DataEncryption] Failed to decrypt value, possibly because it has been encrypted with a different key, or the encrypted value is corrupted. To verify the latter, please check the respective value in the database. For comparison, the base64-encoded value for which decryption was attempted has a length of 88, and a sha256 digest of db14fe0133d645e7f5e0252d098202bf28b667d8a0f9ba434917cf756e1ed6d1. If this is different to what's stored in your database, please report this to https://github.com/DependencyTrack/dependency-track/issues/2366, so the root cause can be identified.
2025-03-20 12:34:19,826 WARN [NistApiMirrorTask] Failed to decrypt API key; Continuing without authentication
javax.crypto.BadPaddingException: Given final block not properly padded. Such issues can arise if a bad key is used during decryption.
at java.base/com.sun.crypto.provider.CipherCore.unpad(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.fillOutputBuffer(Unknown Source)
at java.base/com.sun.crypto.provider.CipherCore.doFinal(Unknown Source)
at java.base/com.sun.crypto.provider.AESCipher.engineDoFinal(Unknown Source)
at java.base/javax.crypto.Cipher.doFinal(Unknown Source)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:136)
at alpine.security.crypto.DataEncryption.decryptAsBytes(DataEncryption.java:148)
at alpine.security.crypto.DataEncryption.decryptAsString(DataEncryption.java:176)
at org.dependencytrack.util.DebugDataEncryption.retryDecryptAsString(DebugDataEncryption.java:59)
at org.dependencytrack.util.DebugDataEncryption.decryptAsString(DebugDataEncryption.java:54)
at org.dependencytrack.tasks.NistApiMirrorTask.lambda$inform$1(NistApiMirrorTask.java:120)
at java.base/java.util.Optional.map(Unknown Source)
at org.dependencytrack.tasks.NistApiMirrorTask.inform(NistApiMirrorTask.java:118)
at alpine.event.framework.BaseEventService.lambda$publish$0(BaseEventService.java:110)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
2025-03-20 12:34:19,829 WARN [NistApiMirrorTask] No API key configured; Aggressive rate limiting to be expected
I've tested my API key and I can confirm that it's valid
@tesence Can you check the User Home in the startup information of the logs, please?
Hello, sure, it's apparently /data/:
2025-03-20 12:35:26,727 INFO [Config] --------------------------------------------------------------------------------
2025-03-20 12:35:26,728 INFO [Config] OS Name: Linux
2025-03-20 12:35:26,728 INFO [Config] OS Version: 4.18.0-425.13.1.el8_7.x86_64
2025-03-20 12:35:26,728 INFO [Config] OS Arch: amd64
2025-03-20 12:35:26,728 INFO [Config] CPU Cores: 4
2025-03-20 12:35:26,732 INFO [Config] Max Memory: 7.1 GB (7,635,730,432.0 bytes)
2025-03-20 12:35:26,732 INFO [Config] Java Vendor: Eclipse Adoptium
2025-03-20 12:35:26,733 INFO [Config] Java Version: 21.0.6+7-LTS
2025-03-20 12:35:26,733 INFO [Config] Java Home: /opt/java/openjdk
2025-03-20 12:35:26,733 INFO [Config] Java Temp: /tmp
2025-03-20 12:35:26,733 INFO [Config] User: dtrack
2025-03-20 12:35:26,733 INFO [Config] User Home: /data/
2025-03-20 12:35:26,734 INFO [Config] --------------------------------------------------------------------------------
2025-03-20 12:35:26,734 INFO [Config] Initializing Configuration
2025-03-20 12:35:26,734 INFO [Config] System property alpine.application.properties not specified
2025-03-20 12:35:26,734 INFO [Config] Loading application.properties from classpath
2025-03-20 12:35:26,739 INFO [Config] --------------------------------------------------------------------------------
2025-03-20 12:35:26,739 INFO [Config] Application: Dependency-Track
2025-03-20 12:35:26,739 INFO [Config] Version: 4.12.7
2025-03-20 12:35:26,740 INFO [Config] Built-on: 2025-03-12T22:23:17Z
2025-03-20 12:35:26,740 INFO [Config] --------------------------------------------------------------------------------
2025-03-20 12:35:26,740 INFO [Config] Framework: Alpine
2025-03-20 12:35:26,740 INFO [Config] Version : 3.1.2
2025-03-20 12:35:26,740 INFO [Config] Built-on: 2024-12-03T15:45:44Z
2025-03-20 12:35:26,740 INFO [Config] --------------------------------------------------------------------------------
Is the API key actually stored there? That would mean it is lost when the container restarts?
The API key itself is stored encrypted in the database. But the key that it is encrypted with is (for dockerized instances) located at /data/.dependency-track/keys/secret.key. If /data is not persisted, recreating the container would generate a new key, making decryption impossible and leading to this error.
Are you using a containerized variant? If yes, did you ensure the folder is persisted using a volume mount?
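A simple way to verify is to fingerprint the secret key, recreate the container, and compare. The service name is an example and the path assumes the containerized default; if sha256sum is not available in the image, comparing ls -l timestamps is a rougher alternative.

# Example service name; adjust to your compose file.
# The digest must be identical before and after recreation, otherwise stored secrets
# (OSS Index / NVD API keys, etc.) can no longer be decrypted.
docker compose exec dtrack-apiserver sha256sum /data/.dependency-track/keys/secret.key
docker compose up -d --force-recreate dtrack-apiserver
docker compose exec dtrack-apiserver sha256sum /data/.dependency-track/keys/secret.key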