ocsp_proxy
Problem with small certificate serial numbers
The certificates of my own PKI have small serial numbers (less than 256). Because of this, there is a problem in line 409:
$ocsp_req->{'tbsRequest'}->{'requestList'}->[0]->{'reqCert'}->{'serialNumber'}->as_hex;
I think the ASN.1 parser does not decode small serial numbers into an object but into a plain integer value, which has no methods. So I get this error for a certificate with serial number 49 (decimal):
Thread 2 terminated abnormally: Can't locate object method "as_hex" via package "49" (perhaps you forgot to load "49"?) at /usr/local/sbin/ocsp_proxy line 409.
Perhaps it would be better not to convert the serial number to a hexadecimal value for the cache key, but to use
'_' . $ocsp_req->{'tbsRequest'}->{'requestList'}->[0]->{'reqCert'}->{'serialNumber'};
in line 409 instead, so it works for both small and large serial numbers.
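To illustrate the problem, here is a minimal sketch of a defensive way to build a hex cache key that works for both representations. `serial_as_hex` is a hypothetical helper, not part of ocsp_proxy, and the assumption is that the parser returns either a plain integer or a Math::BigInt-like object:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Scalar::Util qw(blessed);
use Math::BigInt;

# Hypothetical helper: normalize a decoded serialNumber to a hex string,
# whether the ASN.1 decoder returned a plain integer (small serials) or
# an object with an as_hex method (large serials).
sub serial_as_hex {
    my ($serial) = @_;
    return $serial->as_hex if blessed($serial) && $serial->can('as_hex');
    return Math::BigInt->new($serial)->as_hex;
}

print serial_as_hex(49), "\n";    # plain integer: prints 0x31
print serial_as_hex(Math::BigInt->new('123456789012345678901234567890')), "\n";
```

This keeps the hex form of the cache key while tolerating the plain-integer case from the bug report.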
Hi, sorry for responding so late. Could you please check https://github.com/philfry/ocsp_proxy/tree/small-serials and tell me whether or not it solves your problem?
I think your change should work. But I have already adapted your source code to my needs: e.g. I added a new thread that performs all write/delete operations on the Redis DB. The other threads put their write/delete requests into a thread queue, from which this new thread reads. Because of this, locks are no longer needed. If you like, I can send you my source code.
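The single-writer pattern described above could be sketched roughly like this. All names are illustrative and not taken from the actual fork; the Redis call is simulated with a print:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

# Worker threads enqueue their write/delete jobs; one dedicated thread
# applies them to the (here simulated) Redis DB, so no lock around the
# Redis handle is needed.
my $write_queue = Thread::Queue->new;

my $writer = threads->create(sub {
    my $jobs_done = 0;
    while (defined(my $job = $write_queue->dequeue)) {
        # In the real proxy this would be a Redis write or delete call.
        printf "%s %s\n", $job->{op}, $job->{key};
        $jobs_done++;
    }
    return $jobs_done;
});

# Worker threads only enqueue; they never touch Redis directly.
$write_queue->enqueue({ op => 'write',  key => 'cert_1' });
$write_queue->enqueue({ op => 'delete', key => 'cert_2' });

$write_queue->enqueue(undef);    # tell the writer to shut down
my $jobs = $writer->join;
```

Thread::Queue serialises the jobs for the writer, which is why no explicit locking around the DB handle is required in this design.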
@augenblickliebhaber I suppose it would be nice if you could create a merge request, so that the original author (or other users) can decide.
I have created a fork with the adapted source code: https://github.com/augenblickliebhaber/ocsp_proxy (only the file ocsp_proxy.pl is changed)
Thanks for sharing. I like the idea of using a dedicated thread for writing to Redis.
But the locking is not there because of concurrent write accesses to Redis – Redis is able to handle those on its own, as the $redis->... calls are serialised. Without locking, two identical, closely spaced requests might send two requests to the OCSP server, i.e.:
- without locking:
| client 1 | client 2 | ocsp_proxy thread 1 | ocsp_proxy thread 2 |
|---|---|---|---|
| request $domain | - | - | - |
| - | - | check cache, nothing here | - |
| - | - | ask $ocspserver | - |
| - | request $domain | - | - |
| - | - | get response | check cache, nothing here |
| - | - | write cache | ask $ocspserver |
| get response | - | - | - |
| - | - | - | get response |
| - | - | - | write cache |
| - | get response | - | - |
- with locking:
| client 1 | client 2 | ocsp_proxy thread 1 | ocsp_proxy thread 2 |
|---|---|---|---|
| request $domain | - | - | - |
| - | - | lock | - |
| - | - | check cache, nothing here | - |
| - | - | ask $ocspserver | - |
| - | request $domain | - | - |
| - | - | get response | wait for lock |
| - | - | write cache | wait for lock |
| - | - | release lock | wait for lock |
| get response | - | - | lock |
| - | - | - | check cache, found |
| - | get response | - | - |
Thanks for the hint that Redis is able to handle concurrent write accesses on its own. The problem for me was that the lock in your source code blocked access to all database entries, not just the one being used: during access (read or write) to a single entry, your lock also blocks access (read or write) to all other cache entries. The multiple (and unnecessary) responder requests you described are not a big problem for me.
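One way to get the deduplication benefit of locking without blocking unrelated entries would be per-entry locks. The sketch below is illustrative only (it is not code from the proxy, and it locks in-process with threads::shared rather than in Redis): one shared lock slot per cache key, so threads serving different certificates never wait on each other, while two threads asking for the same certificate still serialise:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

my %entry_locks :shared;

# Run $code while holding a lock that covers only $key, not the whole cache.
sub with_entry_lock {
    my ($key, $code) = @_;
    {
        lock(%entry_locks);          # brief global lock, only to create the slot
        $entry_locks{$key} //= 0;
    }
    lock($entry_locks{$key});        # per-entry lock, held while $code runs
    return $code->();
}

my @workers = map {
    my $id = $_;
    threads->create(sub {
        with_entry_lock("cert_$id", sub { "handled cert_$id" });
    });
} (1, 2);

my @results = map { $_->join } @workers;
print "$_\n" for @results;
```

With this scheme, the scenario from the tables above still collapses the two identical requests into one OCSP query, but a request for a different certificate is never held up by the lock.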