http-signatures
Specify URL encoding for request-target
From Roman Bezlepkin:
Something the draft isn't clear on is whether the request-target (the original request URL) must contain URL-encoded or unencoded characters. For example, the @ symbol can be encoded as %40, so the request-target could be either /foo/bar@example.org/posts or /foo/bar%40example.org/posts, with both resulting in a successful request. I believe implementers should be careful when inspecting the server-side request URL in this case to validate the signature. For instance, the HttpContext.Request property in ASP.NET automatically URL-decodes the values, and it becomes necessary to use a workaround to get at the original, un-decoded request URL.
Example workaround: https://stackoverflow.com/a/38747631
May I suggest clarifying this caveat with request-target and URL encoding of special characters in the draft?
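To make the ambiguity concrete, here is a small sketch (the `(request-target)` line follows the draft's signing-string format; the example path is hypothetical) showing that the encoded and decoded forms of the same URL yield different signing strings:

```python
from urllib.parse import unquote

def signing_string(method: str, target: str) -> str:
    # "(request-target)" pseudo-header line as used by the cavage draft
    return f"(request-target): {method.lower()} {target}"

raw = "/foo/bar%40example.org/posts"
decoded = unquote(raw)  # "/foo/bar@example.org/posts"

print(signing_string("GET", raw))
print(signing_string("GET", decoded))
# The two strings differ, so a signer and a verifier that disagree on
# decoding will compute different signatures for the same request.
```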
I'll suggest that request-target MUST be the URL-decoded form. The only thing I'm unsure of is whether that may lead to corner-case errors. An implementation could always use the regular form of request-target to check the signature and, if that doesn't work, try URL-decoding request-target and attempting the signature check again. URL decoding may result in a security vulnerability... I'll have to think about that more.
I'd tend to vote the URL not be decoded. It seems like the intent of spec is to perform a signature of the content of the request (as it exists on the wire). Introducing a layer of decoding introduces variability between the entity that generates the signature and the entity that verifies the signature.
Simple case: a URL-encoded newline would produce a new line in the signature string, so a crafted URL could spoof a header on the following line of that string. No idea how this could be exploited, but then I'm not imaginative enough to think of the vast majority of exploits. Not desirable.
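That risk takes only a few lines to demonstrate (a sketch with a hypothetical crafted path; the one-line-per-item layout follows the draft's signing-string construction):

```python
from urllib.parse import unquote

def signing_string(target: str, headers: dict) -> str:
    # One line per signed item, per the draft's signing-string construction.
    lines = [f"(request-target): get {target}"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\n".join(lines)

# %0A is a URL-encoded newline; decoding before signing injects a line.
crafted = "/posts%0Adate:%20spoofed-value"
as_received = signing_string(crafted, {"host": "example.org"})
after_decoding = signing_string(unquote(crafted), {"host": "example.org"})

# As received, the string is one (request-target) line plus "host:".
# After decoding, it contains a forged "date:" line the client never signed.
assert "date: spoofed-value" not in as_received
assert "\ndate: spoofed-value\n" in after_decoding
```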
Hi folks. I'm hitting this issue in Production, can we get a consensus view?
As I stated earlier, I believe this protocol should be used to protect the content on-the-wire, before any decoding on receipt and after encoding for transmission.
Specifically, RFC 7540 says nothing about the interpretation of any characters, so adding any interpretation here would make the protocol more ambiguous and harder to navigate. Since we reference :path from that document, we should avoid layering interpretation/transformation on top of it.
I'd go without {de,en}coding, which may introduce unexpected side-effects.
In https://github.com/WICG/webpackage/blob/master/draft-yasskin-httpbis-origin-signed-exchanges-impl.md#cross-origin-trust-cross-origin-trust they refer to requestURI as a byte sequence, which, if I understand correctly, doesn't involve decoding.
> I'd go without {de,en}coding, which may introduce unexpected side-effects.

Agreed.
@liamdennehy, I think we have consensus here. Don't encode/decode :path, use it as-is.
I say this not having done a security review on that, but it seems the simplest/most straightforward thing to do. The drawback here is if the client encodes and signs the request URL in a strange way, or the server receives the request URL through a proxy that mangled the URL... but if that happens, the client clearly signed something different than the server is getting and thus the right thing to do is "fail closed", which means, return that the signature is invalid.
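A verifier along those lines can be sketched as follows (a minimal sketch: HMAC stands in for the draft's signature algorithms, and `SECRET` and the targets are hypothetical):

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical key; the draft also covers asymmetric schemes

def sign(raw_target: bytes) -> str:
    base = b"(request-target): get " + raw_target
    return hmac.new(SECRET, base, hashlib.sha256).hexdigest()

def verify(received_target: bytes, signature: str) -> bool:
    # Use the target bytes exactly as received: no decoding, no normalization.
    # Any mismatch (odd client encoding, proxy mangling) simply fails closed.
    return hmac.compare_digest(sign(received_target), signature)

sig = sign(b"/foo/bar%40example.org/posts")
assert verify(b"/foo/bar%40example.org/posts", sig)    # byte-identical: valid
assert not verify(b"/foo/bar@example.org/posts", sig)  # decoded en route: invalid
```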
Well that ended up quick and easy. Thanks!
@ioggstream I know you're on a roll with MRs, mind if I write this one up?
I have run into this issue due to the equal sign (=). In our opinion, this sign needs to be escaped when it is part of the value in a query element:

http://example.org/foo?qry=AABBCC=

needs to be put on the wire as http://example.org/foo?qry=AABBCC%3d
Hence my idea that, if the path and query are encoded:
- the client should hash them in their encoded form;
- they will arrive encoded at the server;
- where the (request-target) can be assembled again as received (encoded).

This also means that if I were to send an invalid (un-escaped) request, I would hash that unescaped request, and the server would receive the same thing (hoping that a gateway did not change it into a valid, encoded request), so the server would be able to recreate the matching (unescaped) request-target.
@liamdennehy, thanks for pointing me to this Github thread
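The equal-sign scenario above can be sketched like this (a sketch; note that Python's `quote` emits uppercase hex, `%3D`, which RFC 3986 treats as equivalent to `%3d`):

```python
from urllib.parse import quote

value = "AABBCC="                # trailing '=' inside a query value
encoded = quote(value, safe="")  # "AABBCC%3D"; hex case-insensitive per RFC 3986
wire_url = f"http://example.org/foo?qry={encoded}"

# If the client hashes/signs the encoded form and nothing on the path
# re-encodes it, the server can rebuild the same (request-target) verbatim.
assert wire_url == "http://example.org/foo?qry=AABBCC%3D"
```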
@MartinSchorel We just resolved here to use as-is - that is the raw characters in the HTTP message as transmitted/received.
The key point here is that the HTTP standard(s) don't acknowledge escaping at all, so why should we?
The only reason the `qry=` is not escaped and the `CC=` is comes down to the first being a parameter delimiter and the second part of a parameter value. In fact there are no semantics within HTTP on what constitutes a valid string after the `?`, only that it is drawn from a defined set of characters. Forcing every implementation to interpret the value in a particular way is actually a constraint - it is perfectly fine for an implementation to do no (un)escaping at all because its implementation doesn't care.
@liamdennehy @MartinShorel
> (hoping that a gateway did not change this into a valid (encoded) request),
From https://tools.ietf.org/html/rfc7230#section-5.7.2:

> A proxy MUST NOT modify the "absolute-path" and "query" parts of the received request-target when forwarding it to the next inbound server, except as noted above to replace an empty path with "/" or "*".

This behavior is retained in httpbis-semantics:
https://tools.ietf.org/html/draft-ietf-httpbis-semantics-04#section-5.5.2
@liamdennehy

> @ioggstream I know you're on a roll with MRs, mind if I write this one up?

Do you mean PRs? If you write a PR, I'll comment on it!
Hello, while working on an XMPP <=> ActivityPub gateway, I've been affected by this issue (for context, Pleroma doesn't unquote and Mastodon does, resulting in one or the other being rejected).
What is the consensus on this point nowadays, to use the raw path without unquoting? Is there any plan to update the HTTP Signature spec?
Thanks!
What is the current status?
> What is the current status?
The current spec is almost through the IETF standardization process. You might check the spec to see what the expected output is going to be for the official RFC:
https://www.ietf.org/archive/id/draft-ietf-httpbis-message-signatures-19.html
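For reference, the newer spec builds a "signature base" from derived components such as `@method`, `@path`, and `@query`, which are taken from the target URI as sent on the wire, i.e. still percent-encoded. A base for a request to /foo/bar%40example.org/posts might look roughly like this (illustrative values, not from a real exchange):

```
"@method": GET
"@path": /foo/bar%40example.org/posts
"@signature-params": ("@method" "@path");created=1618884473;keyid="test-key"
```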
That said, a LOT has changed in the IETF RFC for HTTP Signatures... backwards compatibility was broken long ago, and it's unclear whether Mastodon, Pleroma, etc. are going to update to the latest version (implementations exist, but not as many as for the old draft).
My suggestion is to upgrade to the latest RFC version, because that's what's going to stay around for the next decade or more... but I don't know how difficult that will be for implementers in the Mastodon/Pleroma space.