tomcat
Refuse adding invalid HTTP 2.0 headers
Connection headers like Connection: keep-alive are invalid in HTTP/2, and some clients (like Safari or curl) are very strict about this. When an application component adds the typical HTTP/1.x Connection: keep-alive header to the response, despite the component's good intentions, the header is faulty in HTTP/2.0 and SHOULD always be filtered.
The current implementation emits a warning in the logs only once per instance.
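The filtering being proposed amounts to dropping the connection-specific headers that RFC 7540 forbids, plus any headers the Connection header itself nominates. A minimal sketch of that idea (class and method names are illustrative, not Tomcat's actual code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Minimal sketch: strip headers that RFC 7540 section 8.1.2.2 forbids in
// HTTP/2 messages. Class and method names are illustrative, not Tomcat's.
public class Http2HeaderFilter {

    // Connection-specific headers that must not appear in HTTP/2.
    private static final Set<String> FORBIDDEN = new HashSet<>(Arrays.asList(
            "connection", "keep-alive", "proxy-connection",
            "transfer-encoding", "upgrade"));

    public static Map<String, String> filter(Map<String, String> headers) {
        // Headers nominated by the Connection header are also forbidden.
        Set<String> nominated = new HashSet<>();
        for (Map.Entry<String, String> e : headers.entrySet()) {
            if (e.getKey().equalsIgnoreCase("connection")) {
                for (String token : e.getValue().split(",")) {
                    nominated.add(token.trim().toLowerCase(Locale.ROOT));
                }
            }
        }
        Map<String, String> out = new TreeMap<>(String.CASE_INSENSITIVE_ORDER);
        for (Map.Entry<String, String> e : headers.entrySet()) {
            String name = e.getKey().toLowerCase(Locale.ROOT);
            if (!FORBIDDEN.contains(name) && !nominated.contains(name)) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```

Applied to the response from the curl transcript below, this would drop both Connection: keep-alive and the Keep-Alive header it nominates, while leaving Content-Type and Date untouched.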
Can you show your sample curl input/output?
@michael-o curl example:
$ curl --http2 -Nv "https://some.http2.url/"
* Trying 127.0.0.1:8833...
* TCP_NODELAY set
* Connected to some.http2.url (127.0.0.1) port 8833 (#0)
> GET / HTTP/1.1
> Host: some.http2.url:8833
> User-Agent: curl/7.65.3
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> Accept:text/plain
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 101
< Connection: Upgrade
< Upgrade: h2c
< Date: Mon, 20 Apr 2020 17:48:26 GMT
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=15
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
* http2 error: Invalid HTTP header field was received: frame type: 1, stream: 1, name: [connection], value: [keep-alive]
* HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
* stopped the pause stream!
* Connection #0 to host some.http2.url left intact
curl: (92) HTTP/2 stream 0 was not closed cleanly: PROTOCOL_ERROR (err 1)
Safari will display the following error in the JavaScript console:
[Error] Failed to load resource: The operation couldn't be completed. Protocol error
Of course, you will need to actually add the faulty header to the response to see this error.
I'm curious why applications think they need to set the Connection header. I'd expect the container to handle this. Further, applications that want to set this header can/should use ServletRequest.getProtocol() first to check that it is appropriate to do so.
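The guard suggested above can be sketched as a small helper. In a real servlet the protocol string would come from ServletRequest.getProtocol(); the class and method names here are illustrative:

```java
// Sketch of the guard suggested above: only set Connection-style headers
// when the protocol is HTTP/1.x. In a servlet the string would come from
// ServletRequest.getProtocol(); the helper name is illustrative.
public class ProtocolGuard {
    public static boolean maySetConnectionHeader(String protocol) {
        // getProtocol() returns strings like "HTTP/1.1" or "HTTP/2.0".
        return protocol != null && protocol.startsWith("HTTP/1.");
    }
}
```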
If applied (and I'm not convinced it should be) the formatting of the PR needs fixing to be consistent with the Tomcat code. I'd also consider using UserDataHelper for the logging even though this isn't strictly a user data issue.
-1. Tomcat covers the most egregious cases, but this is probably not one of them. Even in HTTP/1.1, what the application does by setting this Connection header is nonsense and will never help. You will need to fix the application.
I fail to see this issue:
Plain request (no extra headers):
$ curl --http2 -NvIsq http://localhost:8080
HTTP/1.1 101
Connection: Upgrade
Upgrade: h2c
Date: Mon, 20 Apr 2020 19:10:48 GMT
HTTP/2 200
content-type: text/html;charset=UTF-8
date: Mon, 20 Apr 2020 19:10:48 GMT
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> HEAD / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.63.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
>
< HTTP/1.1 101
< Connection: Upgrade
< Upgrade: h2c
< Date: Mon, 20 Apr 2020 19:10:48 GMT
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200
< content-type: text/html;charset=UTF-8
< date: Mon, 20 Apr 2020 19:10:48 GMT
<
* Connection #0 to host localhost left intact
with explicit headers:
$ curl --http2 -NvIsq http://localhost:8080 -H "Connection: keep-alive"
HTTP/1.1 101
Connection: Upgrade, keep-alive
Upgrade: h2c
Date: Mon, 20 Apr 2020 19:11:45 GMT
Keep-Alive: timeout=20
HTTP/2 200
content-type: text/html;charset=UTF-8
date: Mon, 20 Apr 2020 19:11:45 GMT
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> HEAD / HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.63.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> Connection: keep-alive
>
< HTTP/1.1 101
< Connection: Upgrade, keep-alive
< Upgrade: h2c
< Date: Mon, 20 Apr 2020 19:11:45 GMT
< Keep-Alive: timeout=20
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200
< content-type: text/html;charset=UTF-8
< date: Mon, 20 Apr 2020 19:11:45 GMT
<
* Connection #0 to host localhost left intact
Running Tomcat 9.0.34 and
$ curl --version
curl 7.63.0 (x86_64-w64-mingw32) libcurl/7.63.0 OpenSSL/1.1.1a (WinSSL) zlib/1.2.11 libidn2/2.0.5 nghttp2/1.35.1
Release-Date: 2018-12-12
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile SSPI Kerberos SPNEGO NTLM SSL libz TLS-SRP HTTP2 HTTPS-proxy MultiSSL Metalink
Even with
C:\Users\mosipov>curl --version
curl 7.69.1 (x86_64-pc-win32) libcurl/7.69.1 OpenSSL/1.1.1f (Schannel) zlib/1.2.11 brotli/1.0.7 WinIDN libssh2/1.9.0 nghttp2/1.40.0
Release-Date: 2020-03-11
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: AsynchDNS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile MultiSSL NTLM SPNEGO SSL SSPI TLS-SRP brotli libz
Please upgrade your curl first!
Since it's now 2020, shouldn't it be doable to block any attempt by the application to set the Connection header?
That would also allow some cleanup in the current code that sets the header and has to take account of any value that may have been set by the application.
@markt-asf This happens for instance in SSE components (tomcat does not provide such components), or more generally in any J2EE filters or servlets, coded before HTTP/2.0, which want to ensure that the connection is kept alive.
@rmaucher It means that you choose to let Tomcat allow buggy server-side components to break HTTP/2.0 streams. Other user agents always ignore this faulty header, so what's the point in sending it in the first place? Helping to find bad client code by breaking more often? Of course, that's a strategy, but considering the vast amount of HTTP/1.1-specific code in the wild that is supposed to migrate to HTTP/2.0, it's better to just log an informative warning and get things straight. Or directly throw.
@michael-o I'm using a recent curl. There is a misunderstanding here: I'm talking about a header sent by the server in the HTTP response.
Since it's now 2020, shouldn't it be doable to block any attempt by the application to set the Connection header? That would also allow some cleanup in the current code that sets the header and has to take account of any value that may have been set by the application.
@rmaucher That would mean throwing the exception right away; it's enough to remove the try/catch in this patch. Yes, certainly simpler.
If applied (and I'm not convinced it should be) the formatting of the PR needs fixing to be consistent with the Tomcat code.
@markt-asf Can you give me some pointers on the needed fixes?
Why can't I reproduce it although I have an h2c response?!
We are talking about this code block: https://github.com/apache/tomcat/blob/65bc61a466fdc4ccc0ee281bd411fca7cb11adb0/java/org/apache/coyote/http11/Http11Processor.java#L932-L956
I see. But at that point, you do not know who set up the header, so the stack trace won't be informative.
Can you retry with a vanilla Tomcat and the most recent curl?
I've spent enough time digging for this bug (to end up finding it in an SSE dependency), including step-by-step debugging in the 9.0.33 sources (then checking it is still present in 9.0.34), in the specs, and in the browser sources, to not really need to retry anything to persuade myself of the pertinence of this patch.
@arkanovicz
You can ignore requests to retest this. I was able to recreate the issue with curl using the steps you describe.
Could you expand the SSE acronym please. I want to make sure I understand you correctly.
Code format issues: opening braces { should not be on a new line, and multi-line comments either use // before every line or, if the /* ... */ style is used, each intermediate line starts with an aligned *.
More generally...
It would be worth reviewing the HTTP/2 spec to check if there are any other headers that are invalid for HTTP/2.
Globally blocking applications from setting Connection headers seems reasonable at first consideration, but needs more thought/review in case there are use cases where it is arguably valid/necessary to do so.
Could you expand the SSE acronym please. I want to make sure I understand you correctly.
Code format issues: opening braces { should not be on a new line, and multi-line comments either use // before every line or, if the /* ... */ style is used, each intermediate line starts with an aligned *.
Ok, noted.
More generally...
It would be worth reviewing the HTTP/2 spec to check if there are any other headers that are invalid for HTTP/2.
To my knowledge, only the Connection headers.
Globally blocking applications from setting Connection headers seems reasonable at first consideration, but needs more thought/review in case there are use cases where it is arguably valid/necessary to do so.
At best, if the faulty header doesn't provoke an error (as required by the specs), it will be ignored. Here's what the spec says:
Intermediaries that process HTTP requests or responses (i.e., any intermediary not acting as a tunnel) MUST NOT forward a malformed request or response. Malformed requests or responses that are detected MUST be treated as a stream error (Section 5.4.2) of type PROTOCOL_ERROR.
For malformed requests, a server MAY send an HTTP response prior to closing or resetting the stream. Clients MUST NOT accept a malformed response. Note that these requirements are intended to protect against several types of common attacks against HTTP; they are deliberately strict because being permissive can expose implementations to these vulnerabilities.
I will maintain my -1. For starters, any such HTTP/2-specific nonsense safety nets need to be added to StreamProcessor.prepareHeaders instead of other random locations.
The initial post says SHOULD, but after actually checking the spec it is a MUST. https://tools.ietf.org/html/rfc7540#section-8.1.2.2
It is really odd that the specification made such a choice, given how many applications toy with that header in an attempt to do something useful, and given that they have no idea whether they're running over HTTP/1.1 or HTTP/2. So this looks to me like a spec mistake [for compatibility, they should have said it has to be ignored and can be freely removed, whatever], and it is not very surprising that most HTTP/2 clients do not check this requirement.
@rmaucher The spec authors justify this strict policy by saying that otherwise it's a vector of attack. I don't know the details here.
About where to put the filtering, in Response.addHeader() or StreamProcessor.prepareHeaders(): I understand your point of view (which is, I suppose, to gather all protocol stuff together), but targeting StreamProcessor doesn't respect the fail-fast paradigm, which prevails IMHO.
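The fail-fast alternative under discussion would reject the header at set time rather than filtering it at write time. A hypothetical sketch of the idea (this is not Tomcat's Response API, just an illustration of the trade-off):

```java
// Hypothetical fail-fast variant of addHeader(): reject connection-specific
// headers immediately for HTTP/2 responses instead of filtering them later.
// This is a sketch of the idea under discussion, not Tomcat's actual API.
public class FailFastResponse {
    private final boolean http2;

    public FailFastResponse(boolean http2) {
        this.http2 = http2;
    }

    public void addHeader(String name, String value) {
        if (http2 && name.equalsIgnoreCase("connection")) {
            // Fail fast: the stack trace points at the component that
            // tried to set the header, which late filtering cannot do.
            throw new IllegalArgumentException(
                    "Connection header is not allowed in HTTP/2 responses");
        }
        // ... store the header ...
    }
}
```

The upside is that the exception's stack trace identifies the offending component; the downside, raised below, is that the Servlet spec has no provision for addHeader() to throw on common header names.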
Still -1, again, for your patch. In addition to being ugly, there's no provision in the Servlet spec to throw an exception on random header names, especially common ones, so failing, fast or slow, is wrong.
I've been reading the HTTP/2 RFC and there is more to this than simply blocking the Connection header.
- What the HTTP/2 and HTTP/1.1 specs suggest we should be doing is parsing an attempt to set the Connection header, and then blocking that header and any connection-level headers it specifies, whether set previously or not.
- There is the general question of whether we should be targeting just HTTP/2, or whether we should be preventing applications doing this regardless of protocol.
We need to figure out what we actually want to do first.
I'm currently leaning towards introducing logging of attempts to set connection-level headers, with a warning that a future version will block the attempt. Probably with UserDataHelper to keep log volumes down, even though this isn't really user data.
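The parsing step described above (reading the Connection header's comma-separated tokens to find the connection-level headers it nominates) can be sketched as follows; the class and method names are illustrative:

```java
import java.util.LinkedHashSet;
import java.util.Locale;
import java.util.Set;

// Sketch of the parsing step: the Connection header's value is a
// comma-separated list of tokens naming other connection-level headers
// (plus options like "close" or "keep-alive"). Names are illustrative.
public class ConnectionTokens {
    public static Set<String> parse(String connectionValue) {
        Set<String> tokens = new LinkedHashSet<>();
        if (connectionValue == null) {
            return tokens;
        }
        for (String token : connectionValue.split(",")) {
            String t = token.trim().toLowerCase(Locale.ROOT);
            if (!t.isEmpty()) {
                tokens.add(t);
            }
        }
        return tokens;
    }
}
```

For the upgrade request in the transcripts above, parsing "Upgrade, HTTP2-Settings" would nominate the Upgrade and HTTP2-Settings headers for blocking.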
Yes, that is accurate: if there's a "Connection: foobar" header, then there could be a "foobar" header, and in that case it's tied to the Connection header.
Note about my earlier "proposal" for HTTP/1.1: the Connection header is used by the websocket.server.UpgradeUtil helper class to allow upgrade through the API. So it's not possible to filter it and be done; this would have to be fixed as well [not sure how]. Servlet apps can upgrade through the proper API and would not be affected.
Please note that there may be client components which may explicitly close connections with Connection: close. E.g., in SpnegoAuthenticator.
Indeed, there are two very legacy-looking workarounds for brokenness in SPNEGO. One of them can be removed, since using Java 8u40 is not reasonable anymore. What about that Connection: close one? IMO it's useless, since there is no default, so an admin would have to figure out whatever broken clients are out there and then populate it. Not reasonable. If the feature is useful, then it should have a default listing all the known broken clients.
@rmaucher The question is when is it justified for component or webapp code to close the connection? Should a container solely decide to close the connection?
In an ideal world, anything in the Connection header should only be the concern of the container. Historically that hasn't been possible due to broken clients. The question is, are we at the point where it would be possible? Logging a warning when an application attempts this is one way to help us find out. Users will complain about the log message. Whether they are able to fix the app and/or client in question will tell us how safe it would be for us to move to a ban.
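The "warn once" behaviour discussed in the thread amounts to suppressing repeated log messages. A minimal sketch of such a throttle (Tomcat's real UserDataHelper is time- and count-based and configurable; this simplified version, with illustrative names, only admits the first occurrence):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal log-once throttle, similar in spirit to what the thread suggests.
// Tomcat's real UserDataHelper is time/count based and configurable; this
// simplified, illustrative version only admits the first occurrence.
public class WarnOnce {
    private final AtomicBoolean logged = new AtomicBoolean(false);

    // Returns true if the caller should emit the warning now; thread-safe,
    // exactly one caller ever sees true.
    public boolean shouldLog() {
        return logged.compareAndSet(false, true);
    }
}
```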