Bug Report: HTTP/2 Protocol Errors when loading many static files (FISH-10632)
Brief Summary
After updating to recent versions of Payara 6.x and enabling HTTP/2, I've observed HTTP/2 protocol errors in one of our main applications, which uses HTML, static JS files, and plenty of XHR. Existing tickets (#2625, #5268) deal with similar topics, but not this one. It took me a while to find time to build a reproducer, but I have one now.
Judging from the trace, I believe this is tied to the file cache being turned on. The trace itself is new to me; perhaps the micro environment logs differently than the full environment does.
Expected Outcome
The page reloads without any issues loading static content when HTTP/2 is enabled
Current Outcome
Periodic HTTP/2 protocol errors, as in this screenshot:
I don't know exactly when this stack trace came in, but it's likely relevant:
[2025-02-19T11:44:40.751-0600] [] [SEVERE] [AS-WEB-CORE-00037] [jakarta.enterprise.web.core] [tid: _ThreadID=160 _ThreadName=http-thread-pool::https-listener(8)] [timeMillis: 1739987080751] [levelValue: 1000] [[
An exception or error occurred in the container during the request processing
java.lang.ArrayIndexOutOfBoundsException: Index 139 out of bounds for length 133
at org.apache.naming.resources.ResourceCache.lookup(ResourceCache.java:337)
at org.apache.naming.resources.ProxyDirContext.cacheLookup(ProxyDirContext.java:1451)
at org.apache.naming.resources.ProxyDirContext.lookup(ProxyDirContext.java:274)
at org.apache.catalina.core.ContextsAdapterUtility$NamingContextAdapter.lookup(ContextsAdapterUtility.java:165)
at org.glassfish.grizzly.http.server.util.Mapper.internalMapWrapper(Mapper.java:1184)
at org.glassfish.grizzly.http.server.util.Mapper.internalMap(Mapper.java:956)
at org.glassfish.grizzly.http.server.util.Mapper.map(Mapper.java:793)
at org.apache.catalina.connector.CoyoteAdapter.postParseRequest(CoyoteAdapter.java:520)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:310)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:239)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:190)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:535)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:515)
at java.base/java.lang.Thread.run(Thread.java:1583)
]]
Reproducer
You can reproduce this by building the WAR in this repo and deploying it with the preboot file and command line provided.
https://github.com/marcreichman-pfi/payara-http2-connection-trouble-repro
Instructions also appear when you load index.html at the deployed URL.
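For reference, the launch looks roughly like this (a sketch only: the WAR and preboot file names here are placeholders, and the repo provides the exact command line):

```sh
# Illustrative Payara Micro launch; the repo supplies the real WAR name and preboot file
java -jar payara-micro.jar \
    --deploy http2-repro.war \
    --prebootcommandfile preboot.txt
```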
Operating System
Windows 10 Pro / WSL2 / Linux Docker / Windows Server 2022 (seen in several places over the years)
JDK Version
Oracle JDK 21.0.5
Payara Distribution
Payara Micro
Just a hunch, but since HTTP/2 requests for multiple files tend to arrive at the "exact" same time due to the multiplexing design, it may simply be exposing a thread-safety issue in the file caching; I don't see many concurrent classes in that code, nor any locks or synchronization (a sketch of the kind of race I mean follows below). Happy to test more.
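To make the hunch concrete, here is a minimal, hypothetical sketch (not the actual org.apache.naming.resources.ResourceCache code, just the same unsynchronized array-plus-count access pattern) that throws the same kind of ArrayIndexOutOfBoundsException under concurrent load:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical stand-in for an unsynchronized array-backed resource cache.
public class CacheRaceSketch {

    static String[] entries = new String[8];
    static int count = 0;

    // Unsynchronized insert: two threads can both pass the growth check,
    // pushing count past entries.length without a resize.
    static void insert(String name) {
        if (count == entries.length) {
            String[] bigger = new String[entries.length * 2];
            System.arraycopy(entries, 0, bigger, 0, entries.length);
            entries = bigger;
        }
        entries[count] = name; // can index past the end once count has raced ahead
        count++;               // not atomic: lost updates and overshoot both happen
    }

    // Unsynchronized binary search over [0, count): a stale entries reference
    // paired with a newer count can read past the end of the array.
    static String lookup(String name) {
        int lo = 0, hi = count - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            String e = entries[mid]; // the AIOOBE surfaces here, as in the trace
            if (e == null) return null;
            int cmp = name.compareTo(e);
            if (cmp == 0) return e;
            if (cmp < 0) hi = mid - 1; else lo = mid + 1;
        }
        return null;
    }

    public static void main(String[] args) throws InterruptedException {
        // Many simultaneous inserts and lookups, like multiplexed HTTP/2 streams
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 100_000; i++) {
            final int n = i;
            pool.execute(() -> insert("/static/file" + n + ".js"));
            pool.execute(() -> lookup("/static/file" + (n / 2) + ".js"));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

Run this and the pool's worker threads typically print ArrayIndexOutOfBoundsException stack traces almost immediately: two threads passing the growth check together push count past the array length, and the next access indexes out of bounds, consistent with the ResourceCache.lookup failure in the trace above.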
Additional log content I missed the first time:
[2025-02-19T13:33:57.725-0600] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=191 _ThreadName=http-thread-pool::https-listener(39)] [timeMillis: 1739993637725] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 73,818 byteToTransfer=0 queueSizeToFree=0 queueSize=73,818
[2025-02-19T13:33:57.727-0600] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=194 _ThreadName=http-thread-pool::https-listener(42)] [timeMillis: 1739993637727] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 147,636 byteToTransfer=0 queueSizeToFree=0 queueSize=147,636
[2025-02-19T13:33:57.728-0600] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=195 _ThreadName=http-thread-pool::https-listener(43)] [timeMillis: 1739993637728] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 221,454 byteToTransfer=0 queueSizeToFree=0 queueSize=221,454
[2025-02-19T13:33:57.729-0600] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=174 _ThreadName=http-thread-pool::https-listener(22)] [timeMillis: 1739993637729] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 295,272 byteToTransfer=0 queueSizeToFree=0 queueSize=295,272
Here are a few more details:
- I noticed that if I leave the file cache at its default setting, which is supposed to be OFF, the problem still tends to happen. I then tried turning the file cache ON but zeroing out all of its settings, which reduced the issue quite a bit but did not eliminate it, so I can't consider that a workaround (roughly the asadmin toggle sketched after the stack trace below). This screenshot from Payara full is just for reference:
- When I tested further on a Payara full instance (not Micro), I got the following whenever I saw ERR_HTTP2_PROTOCOL_ERROR in the browser:
[2025-02-21T09:25:45.514-0600] [Payara 6.2025.1] [WARNING] [] [jakarta.enterprise.web] [tid: _ThreadID=179 _ThreadName=http-thread-pool::http-listener-2(16)] [timeMillis: 1740151545514] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.io.IOException: Connection is closed
at org.glassfish.grizzly.nio.NIOConnection.assertOpen(NIOConnection.java:420)
at org.glassfish.grizzly.http.io.OutputBuffer.write(OutputBuffer.java:613)
at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:340)
at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:327)
at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:158)
at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:2103)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:989)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:510)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:527)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:614)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1554)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:259)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:166)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:757)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:158)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:372)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:239)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:190)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:535)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:515)
at java.base/java.lang.Thread.run(Thread.java:1583)
Caused by: java.io.IOException: Locally closed
]]
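For anyone following along on the full server, the cache toggling mentioned above was done with roughly these commands (a sketch: the listener name http-listener-2 matches the log above, but your dotted names may differ):

```sh
# Inspect the current file-cache state on the listener (dotted name is an example)
asadmin get configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.file-cache.enabled

# Explicitly disable the file cache rather than relying on the default
asadmin set configs.config.server-config.network-config.protocols.protocol.http-listener-2.http.file-cache.enabled=false
```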
Whether the issue is a caching thread-safety bug (and possibly an enabled-state bug?) or something being overloaded at the connection level (I've tested this on different OSes, so some network stacks may behave differently), perhaps when the connection unexpectedly drops inside GlassFish, the browser surfaces it as this error?
Unfortunately it happens organically from normal static-content loads on a web app (JS files, CSS files, etc.), so even though the extreme case triggers it more readily, it still happens often enough that I cannot use HTTP/2 in normal operation with default settings.
Is anyone able to look at this?
Hello @marcreichman-pfi,
First of all, thank you very much for your patience and for reporting this bug! With your reproducer and a little additional configuration (to stop Payara Micro from returning an empty response when accessing the application via port 8181), I managed to trigger the initial error:
[2025-02-26T15:24:51.747+0000] [] [WARNING] [] [jakarta.enterprise.web] [tid: _ThreadID=198 _ThreadName=http-thread-pool::https-listener(49)] [timeMillis: 1740583491747] [levelValue: 900] [[
StandardWrapperValve[default]: Servlet.service() for servlet default threw exception
java.lang.ArrayIndexOutOfBoundsException: Index 93 out of bounds for length 89
at org.apache.naming.resources.ResourceCache.lookup(ResourceCache.java:337)
at org.apache.naming.resources.ProxyDirContext.cacheLookup(ProxyDirContext.java:1451)
at org.apache.naming.resources.ProxyDirContext.lookupCache(ProxyDirContext.java:1384)
at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:812)
at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:510)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:527)
at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:614)
at org.apache.catalina.core.StandardWrapper.service(StandardWrapper.java:1554)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:259)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:166)
at org.apache.catalina.core.StandardPipeline.doInvoke(StandardPipeline.java:757)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:577)
at com.sun.enterprise.web.WebPipeline.invoke(WebPipeline.java:99)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:158)
at org.apache.catalina.connector.CoyoteAdapter.doService(CoyoteAdapter.java:372)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:239)
at com.sun.enterprise.v3.services.impl.ContainerMapper$HttpHandlerCallable.call(ContainerMapper.java:520)
at com.sun.enterprise.v3.services.impl.ContainerMapper.service(ContainerMapper.java:217)
at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:190)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:535)
at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:515)
at java.base/java.lang.Thread.run(Thread.java:1583)
]]
Additionally, after a few more refreshes, I managed to reproduce the errors you mentioned in your follow-up comment:
[2025-02-26T15:30:47.945+0000] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=168 _ThreadName=http-thread-pool::https-listener(19)] [timeMillis: 1740583847945] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 5 byteToTransfer=0 queueSizeToFree=0 queueSize=5
[2025-02-26T15:30:47.954+0000] [] [WARNING] [] [org.glassfish.grizzly.http2.Http2SessionOutputSink] [tid: _ThreadID=182 _ThreadName=http-thread-pool::https-listener(33)] [timeMillis: 1740583847954] [levelValue: 900] UNEXPECTED NULL RECORD. Queue-size: 10 byteToTransfer=0 queueSizeToFree=0 queueSize=10
The issue has been escalated to the Platform Development team under the identifier FISH-10632. Thank you again for pointing out this bug and for providing so much context along with the reproducer!
Best regards, Felix Ifrim
Hi @felixif - just curious whether there have been any eyeballs on this one? I realize this is community-level support, but this seems to be a limiting factor in the adoption of HTTP/2 in Payara.
I second @marcreichman-pfi's question. Is somebody even willing to work on this? If not, can you please share the reason? 🤔
Hello @felixif - Respectfully, does this seem like it will get any traction? I'm not sure whether it's in the Payara-side caching code or somewhere in the Grizzly code, but I personally can't see ever using HTTP/2 with these errors. I'm surprised more folks aren't running into it.
We recently updated to EE 10 / Payara 6 for HTTP/2 and TLS 1.3, but we hit this error within a few page clicks, even with the file cache disabled.
We tested GlassFish 7 today, which returns a similar error:
UNEXPECTED NULL RECORD. Queue-size: 16,384 byteToTransfer=0 queueSizeToFree=0 queueSize=16,384
I don't know why this isn't a bigger issue for others, but maybe we aren't typical Payara users in needing modern website features. For us, I think the answer is to front our website with Apache, with Payara in the background.
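Something along these lines is what we have in mind (a sketch, assuming mod_ssl, mod_http2, and mod_proxy_http are loaded and Payara listening on port 8080; the hostname and certificate paths are placeholders):

```apache
# Terminate TLS and HTTP/2 at Apache; speak plain HTTP/1.1 to Payara behind it
<VirtualHost *:443>
    ServerName app.example.com
    Protocols h2 http/1.1

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/app.example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/app.example.com.key

    # Browsers get multiplexed HTTP/2 from Apache while Payara only ever sees HTTP/1.1
    ProxyPass        "/" "http://127.0.0.1:8080/"
    ProxyPassReverse "/" "http://127.0.0.1:8080/"
</VirtualHost>
```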
I always thought this was a Grizzly issue, and this just proves it. Maybe Payara should switch to Helidon Nima, because I'm not sure how long Oracle is going to maintain Grizzly, if they're still maintaining it at all.
I front Payara with haproxy and don't have any issues. This is my "workaround" for this.
haproxy looks ideal, thanks for the tip 👍
See https://github.com/lprimak/infra/blob/main/etc/haproxy.cfg for sample config
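Condensed, the idea looks like this (a minimal sketch; the certificate path, names, and ports are placeholders, and the linked file is the real reference):

```haproxy
# HTTP/2 negotiated via ALPN at haproxy; plain HTTP/1.1 to Payara behind it
frontend fe_https
    bind :443 ssl crt /etc/haproxy/certs/site.pem alpn h2,http/1.1
    mode http
    default_backend be_payara

backend be_payara
    mode http
    server payara1 127.0.0.1:8080 check
```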
I did just test the latest 2025.7 with a stock config and HTTP/2 on, with the file cache on, off, and on with all settings zeroed, and the behavior is the same. At the moment we cannot use a proxy layer, so we're staying on HTTP/1.
It doesn't look like Grizzly has been maintained by Oracle lately; recent activity appears to come from OmniFish, Payara, and some other contributors. Certainly no heavy bug fixes like this one needs.