Large VSIX (> 300 MB) published as inactive due to backend Java heap error
Hello
I saw that when publishing larger VSIX files (around 300 MB or more), the extension version is created but stays inactive and therefore isn’t visible in the registry.
After some investigation:
- The extension appears in the database with active = false.
- In the backend logs I see a java.lang.OutOfMemoryError: Java heap space during PublishExtensionVersionHandler.publishAsync.
- The process never reaches the final activateExtension(...) call, so the version remains inactive.
When I increased the JVM heap for the backend container to 2 GB (_JAVA_OPTIONS=-Xms2g -Xmx2g; 1 GB was not enough), publishing the same VSIX started working and the version became active as expected.
So currently the only workaround I found is to drastically increase the heap size of the backend for large extensions.
Env:
- OpenVSX version: 0.27.0 (self-hosted)
- JVM: OpenJDK 17 (Temurin)
- Backend heap: 1 GB → fails with Java heap space on ~300 MB VSIX; 2 GB → success
Question:
Is this behavior expected for large VSIX files? Would you consider optimizing publishing so that large extensions don’t require such a large heap, or at least failing instead of leaving the version inactive on such errors? The CLI message was misleading in this case, saying that the extension was published successfully. I looked into the releases after v0.27.0 and did not see anything related to this in the change logs.
that is indeed something to look into. could you provide a link to the vsix file for testing?
@netomi Thank you for your reply.
The extension you can test with is hyorman.ragnarok; you can find it here.
Here are the versions for this extension that were not successfully published because of the size. I personally tested with versions 0.2.5 and 0.2.6, and both VSIX files are around 304.6 MB; they initially failed due to the heap size issue until I increased the backend heap to 2 GiB.
```
select id, version, active from extension_version where extension_id = 556;

  id  | version | active
------+---------+--------
 2996 | 0.2.5   | f
 3030 | 0.1.5   | f
 3031 | 0.2.4   | f
 3032 | 0.1.4   | f
 2997 | 0.2.3   | f
 3033 | 0.0.3   | t
 2998 | 0.2.1   | t
 3034 | 0.0.2   | t
 3035 | 0.1.1   | t
 3036 | 0.2.0   | t
 2999 | 0.2.6   | f
```
So when an extension is uploaded for publication, the server checks whether the file exceeds the maximum allowed size:
https://github.com/eclipse/openvsx/blob/master/server/src/main/java/org/eclipse/openvsx/ExtensionService.java#L111-L112
unfortunately that check is implemented by buffering the whole file in memory, which is not such a good idea.
will prepare a PR to fix that.
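As an illustration of the streaming approach, a check along these lines avoids holding the upload in the heap. This is a minimal sketch, not the actual code from the PR; the class and method names are made up, and only the 512 MB limit comes from the linked source:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class SizeLimitedUpload {

    // Limit referenced in ExtensionService (512 MB).
    private static final long MAX_CONTENT_SIZE = 512L * 1024 * 1024;

    /**
     * Copies the upload to a temp file in small chunks while keeping a running
     * byte count, so the whole VSIX is never held in the JVM heap.
     */
    public static Path copyWithSizeCheck(InputStream upload) throws IOException {
        Path tempFile = Files.createTempFile("extension_", ".vsix");
        byte[] buffer = new byte[8192];
        long total = 0;
        try (OutputStream out = Files.newOutputStream(tempFile)) {
            int read;
            while ((read = upload.read(buffer)) != -1) {
                total += read;
                if (total > MAX_CONTENT_SIZE) {
                    // Abort early instead of buffering the full upload.
                    throw new IOException("The extension file exceeds the maximum allowed size");
                }
                out.write(buffer, 0, read);
            }
        } catch (IOException e) {
            Files.deleteIfExists(tempFile);
            throw e;
        }
        return tempFile;
    }
}
```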
could you take a look at #1458 and check if it solves your problem?
I noticed a follow-up problem with browsers that do not correctly handle the 413 and go into a retry loop instead, but that should be a separate fix.
Thanks for the quick reply and PR!
I’ll test it after my vacation, but I just wanted to give additional info on where my failure happens, in case it’s a separate issue.
In my case the VSIX files are around 304.6 MB, so they are below MAX_CONTENT_SIZE = 512 * 1024 * 1024 (512 MB) and should pass the content-size check.
So the OOM occurs during the signature/checksum phase in publishAsync, not inside createExtensionFile.
From my understanding, #1458 optimizes the MAX_CONTENT_SIZE validation (and avoids buffering the whole upload for that check), which is great, but my specific OOM seems to happen later in the pipeline. I’ll still try #1458 and report back if the behavior changes, but I suspect there may be a second problem.
Stack Trace:
```
2025-12-01T09:56:16.983Z ERROR [openvsx-server,,] 1 --- [openvsx-server] [ task-9] .a.i.SimpleAsyncUncaughtExceptionHandler : Unexpected exception occurred invoking async method: public void org.eclipse.openvsx.publish.PublishExtensionVersionHandler.publishAsync(org.eclipse.openvsx.util.TempFile,org.eclipse.openvsx.ExtensionService)
java.lang.OutOfMemoryError: Java heap space
	at java.base/java.util.Arrays.copyOf(Arrays.java:3537) ~[na:na]
	at java.base/java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:100) ~[na:na]
	at java.base/java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:130) ~[na:na]
	at org.bouncycastle.crypto.signers.Ed25519Signer.update(Unknown Source) ~[bcprov-jdk18on-1.80.jar:na]
	at org.eclipse.openvsx.publish.ExtensionVersionIntegrityService.createSignatureFile(ExtensionVersionIntegrityService.java:149) ~[classes/:na]
	at org.eclipse.openvsx.publish.ExtensionVersionIntegrityService.generateSignature(ExtensionVersionIntegrityService.java:116) ~[classes/:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na]
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]
	at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na]
	at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:355) ~[spring-aop-6.1.19.jar:6.1.19]
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:717) ~[spring-aop-6.1.19.jar:6.1.19]
	at org.eclipse.openvsx.publish.ExtensionVersionIntegrityService$$SpringCGLIB$$0.generateSignature(
```
ty for the stacktrace, the Ed25519Signer in use actually maintains an in-memory buffer of the data to be signed. Looking at ways to improve on that.
Fixing this is not obvious; we might be able to use the openssl CLI to do the signing for us instead of loading the file into memory for signing.
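For reference, shelling out to openssl could look roughly like this. A hedged sketch: it assumes OpenSSL 1.1.1+ (which can produce one-shot Ed25519 signatures via pkeyutl -rawin) and an Ed25519 private key in PEM form on disk; the helper class itself is hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class OpensslSigner {

    /**
     * Runs: openssl pkeyutl -sign -inkey <key.pem> -rawin -in <file> -out <sig>
     * The VSIX content is read by the openssl process, not by the JVM, so the
     * backend heap no longer has to scale with the file size.
     */
    public static byte[] sign(Path privateKeyPem, Path fileToSign)
            throws IOException, InterruptedException {
        Path sigFile = Files.createTempFile("vsix_", ".sig");
        try {
            Process openssl = new ProcessBuilder(
                    "openssl", "pkeyutl", "-sign",
                    "-inkey", privateKeyPem.toString(),
                    "-rawin",
                    "-in", fileToSign.toString(),
                    "-out", sigFile.toString())
                .redirectErrorStream(true)
                .start();
            if (openssl.waitFor() != 0) {
                throw new IOException("openssl failed with exit code " + openssl.exitValue());
            }
            return Files.readAllBytes(sigFile);
        } finally {
            Files.deleteIfExists(sigFile);
        }
    }
}
```

openssl still has to consume the whole file, but it does so in its own process, outside the JVM heap.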
Do you need repository signing of the extension files? Its use is unclear, as consumers will not be aware of the public key used to sign the extensions and thus cannot really verify whether a package is coming from a trusted source.
You can disable the integrity service by leaving the parameter ovsx.integrity.key-pair blank.
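In application.properties terms that means (assuming the standard Spring Boot configuration style; the property name is the one mentioned above):

```properties
# Leaving the key pair blank disables generation of .sigzip signature files.
ovsx.integrity.key-pair=
```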
Thank you for looking into this!
In our setup, we do actually rely on signing. We run a self-hosted OpenVSX instance, and we need to generate .sigzip files both for our internal/in-house extensions and for the externally synced approved extensions. Even though we currently have verifySignature set to false in our customized VS Code build, we still want the signatures to be produced: otherwise VS Code throws errors when a VSIX file has no associated .sigzip, and we can't install extensions at all from our OpenVSX marketplace.
Because of that, disabling the integrity service via ovsx.integrity.key-pair is not an option for us. For now, we’ve increased the backend heap to 2 GB, which makes publishing our ~304.6 MB VSIX work, but I'm worried that 2 GB won't be enough for bigger extensions in the future. I also think it's worth having this feature work as expected for anyone who genuinely needs it, as we do.
If you experiment with an alternative implementation (e.g. streaming or using openssl CLI) I’m happy to test it with our larger extensions and report back.
ty for the background, that is useful to know.
The feature works; you just need to be aware that signing is currently done fully in memory. I searched for streaming alternatives, and they do not really exist afaict.
There is an Ed25519 prehash signer that you could use to stream the content rather than loading everything into memory; however, it is cryptographically weaker than the "normal" variant we currently use. That might not matter here, and it would be a viable option for some if the signer to use were configurable.
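For comparison, here is a sketch of how the pre-hash variant would stream with BouncyCastle's Ed25519phSigner. This is hypothetical helper code, not what OpenVSX currently does, and Ed25519ph signatures are not interchangeable with plain Ed25519 ones, so verifiers would have to use the matching variant:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

import org.bouncycastle.crypto.params.Ed25519PrivateKeyParameters;
import org.bouncycastle.crypto.signers.Ed25519phSigner;

public class PrehashSigning {

    /**
     * Streams the file through Ed25519ph in fixed-size chunks. The signer only
     * keeps a running SHA-512 state, so memory use stays constant regardless
     * of file size.
     */
    public static byte[] signStreaming(Ed25519PrivateKeyParameters privateKey, Path file)
            throws IOException {
        Ed25519phSigner signer = new Ed25519phSigner(new byte[0]); // empty context
        signer.init(true, privateKey);
        byte[] buffer = new byte[8192];
        try (InputStream in = Files.newInputStream(file)) {
            int read;
            while ((read = in.read(buffer)) != -1) {
                signer.update(buffer, 0, read);
            }
        }
        return signer.generateSignature();
    }
}
```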
Just to check that I understood correctly.
One possible solution would be to sign the VSIX after it has been written to storage (local file / S3 / etc.) by calling an external signer (e.g. openssl or a dedicated helper) that reads the file from disk and returns the Ed25519 signature bytes, though this is more complex to implement. That would keep the current Ed25519 signature format but move the heavy work out of the JVM, so large files don’t hit the heap limit.
I also understand your point about the pre-hash variant (Ed25519ph) being weaker than the current solution. That’s why I’m mainly wondering whether an external signer using the same Ed25519 scheme is a realistic option, or whether other constraints make that approach difficult.