Extension signing job not triggered in self-hosted OpenVSX
I'm running a self-hosted OpenVSX instance on Kubernetes with extension data stored in a persistent volume. I'm trying to enable extension signing via:
```yaml
ovsx:
  integrity:
    key-pair: create

org:
  jobrunr:
    job-scheduler:
      enabled: true
    background-job-server:
      enabled: true
      worker-count: 2
    dashboard:
      enabled: false
    database:
      type: sql
    miscellaneous:
      allow-anonymous-data-usage: false
```
Important: mirror mode is explicitly disabled and cannot be enabled in our environment for security reasons, the same situation as in issue #1156. That issue seems to describe the same root problem, but it has no solution, so I’m opening this new issue to provide more context and hopefully help resolve it.
What I observe:
- No log messages indicating signature generation (e.g., from ExtensionVersionSignatureJobRequestHandler) appear.
- No entries are written into the signature_key_pair database table.
- No .sigzip files are created in the persistent volume where extensions are stored.
- No signature-related jobs appear in the job_request table.
- Some extensions return the message "Integrity service is enabled, but ... did not have an active key pair."
What I already tried:
- Set key-pair: create and renew
- Restarted backend pod
- Republished extensions after enabling signing
- Confirmed .sigzip files are not created
- Verified extensions exist in the persistent volume
- Verified the signature_key_pair table is empty
- Inspected job_request table — no relevant jobs present
- Dropped and restored the database prior to republishing
- Verified extensions show up correctly in the UI and VS Code
My questions:
- Can you clarify under what conditions the signing job is actually triggered?
- Is it necessary to clean the extension persistent volume in addition to the database PV?
- Are there known cases where the signing is skipped or blocked silently?
- Would switching to key-pair: renew be a faster way to get this working, assuming that setting always tries to generate signatures?
Thank you!
Do you have a migration delay configured (ovsx.migrations.delay.seconds)?
Migrations only run once per version. You can bump the version or drop the jobrunr_* tables.
https://github.com/eclipse/openvsx/blob/69da93f296702578b1f9cabb298f1ce0eca700d9/server/src/main/java/org/eclipse/openvsx/migration/ScheduleMigrationsListener.java#L43-L49
https://github.com/eclipse/openvsx/blob/69da93f296702578b1f9cabb298f1ce0eca700d9/server/src/main/java/org/eclipse/openvsx/migration/MigrationScheduler.java#L40-L42
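For reference, such a delay would go into application.yaml along these lines (a rough sketch; the 300-second value is purely illustrative):

```yaml
ovsx:
  migrations:
    delay:
      seconds: 300   # illustrative value; presumably delays scheduling of the migration jobs by this many seconds
```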
Thank you so much, this worked. I dropped all the job-related (jobrunr_*) tables and restarted the server. That did the trick: I can see the .sigzip files being generated in the persistent volume and stored in the correct location, and I no longer get the "This extension is not signed by ..." message.
However, now, in Visual Studio Code, I consistently get this warning message when trying to install extensions: “Cannot install extension because Visual Studio Code cannot verify the extension signature – Signature verification failed with 'UnhandledException' error.”
This error disappears only when I set the following in settings.json: "extensions.verifySignature": false
I get the same behaviour with the public Open VSX registry as well.
My questions are:
- Is it expected that VS Code cannot verify the signature generated by a self-hosted OpenVSX instance or the public one?
- Is there any official way to make VS Code trust the internal OpenVSX signature (e.g. by importing a key or certificate), or is disabling signature verification the recommended approach for this?
Thank you for all your help!
@ErikaMolnar You should use VSCodium, which uses https://www.npmjs.com/package/node-ovsx-sign. VS Code has its own proprietary signature verification module.
Hi, could you please share the full YAML files of your deployment? I have faced a similar error.
Hello,
Unfortunately, the YAML files were written and deployed internally within my company, so I’d rather not post them publicly.
However, here’s what I did to resolve the issue:
- I connected to the database pod and manually dropped the JobRunr tables using: DROP TABLE <all jobrunr_* tables separated by ","> CASCADE;
- In application.yaml, I set ovsx.integrity.key-pair: renew (nested form shown below the list).
- After that, I restarted the backend server: kubectl -n <namespace> rollout restart deployment <ovsx_server>
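For completeness, the dotted property from the second step corresponds to this nested form in application.yaml (both spellings should be equivalent in a Spring Boot configuration):

```yaml
ovsx:
  integrity:
    key-pair: renew   # previously: create
```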
After applying these changes, the signing process started working correctly.
If you’d like, you can share your YAML files (or at least the relevant parts) here, and I can take a look to see if there’s anything that might cause the issue.
Have a nice day, Erika M.