reuse badge loads slowly, API server responds late
The badge on GitHub looks bad because there is a problem with the API server: it takes a very long time to respond. (Not sure if this is the right place to report this.)
httpstat https://api.reuse.software/info/github.com/scsitape/stenc/
Connected to 213.95.165.53:443 from 192.168.x.xxx
HTTP/2 200
content-type: text/html; charset=utf-8
date: Thu, 25 Aug 2022 12:01:23 GMT
server: Caddy
content-length: 7556
DNS Lookup: 0ms | TCP Connection: 18ms | TLS Handshake: 20ms | Server Processing: 14184ms | Content Transfer: 1ms
namelookup: 0ms | connect: 18ms | pretransfer: 38ms | starttransfer: 14222ms | total: 14223ms
Thanks for the report. The API's repository is https://git.fsfe.org/reuse/api, but you probably cannot create an issue there without an FSFE account.
I tried to reproduce your problem with the exact same tool, but I barely get more than 4000ms. However, I have noticed temporary slowdowns myself in the past and couldn't find the bottleneck. Perhaps it is a larger number of requests arriving at the same time, with the database locked for the current request until all other requests are handled.
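For illustration, here is a minimal, self-contained sketch of that suspected behaviour. It is not the API's actual code, just plain sqlite3 with a few threads standing in for concurrent web requests; with SQLite's default journal mode only one writer can hold the lock at a time, so the simulated requests queue up behind each other:

import sqlite3
import threading
import time

DB = "/tmp/lock-demo.sqlite3"   # throwaway database, not the API's real one

def setup():
    con = sqlite3.connect(DB)
    con.execute("CREATE TABLE IF NOT EXISTS projects (url TEXT PRIMARY KEY, status TEXT)")
    con.commit()
    con.close()

def handle_request(worker_id):
    # Each "request" opens its own connection and writes, as a web worker would.
    # Only one writer may hold SQLite's lock; the others wait (or fail with
    # "database is locked" once the busy timeout expires).
    start = time.monotonic()
    try:
        con = sqlite3.connect(DB, timeout=30)
        con.execute("BEGIN IMMEDIATE")          # take the write lock
        con.execute("INSERT OR REPLACE INTO projects VALUES (?, ?)",
                    ("github.com/example/repo-%d" % worker_id, "compliant"))
        time.sleep(2)                           # pretend the scan takes a while
        con.commit()
        con.close()
        print("worker %d finished after %.1fs" % (worker_id, time.monotonic() - start))
    except sqlite3.OperationalError as exc:
        print("worker %d failed after %.1fs: %s" % (worker_id, time.monotonic() - start, exc))

setup()
threads = [threading.Thread(target=handle_request, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Running this prints completion times climbing from roughly 2 s to roughly 10 s, which is the same shape as the starttransfer numbers above.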
Thank you for the link. You are right, I cannot create a new issue there.
It seems to become very slow from time to time. I think it should never take more than 500 ms when it works correctly. I guess that the server periodically scans the repositories for changes and keeps the old result in the cache if there was no change (a sketch of that idea follows the measurements below).
httpstat https://api.reuse.software/info/github.com/scsitape/stenc/
Connected to 213.95.165.53:443 from 192.168.x.x:41066
HTTP/2 200
content-type: text/html; charset=utf-8
date: Thu, 25 Aug 2022 14:28:10 GMT
server: Caddy
content-length: 7556
Body stored in: /tmp/tmpccjudmsb
DNS Lookup: 0ms | TCP Connection: 17ms | TLS Handshake: 21ms | Server Processing: 850ms | Content Transfer: 1ms
namelookup: 0ms | connect: 17ms | pretransfer: 38ms | starttransfer: 888ms | total: 889ms
httpstat https://api.reuse.software/info/github.com/scsitape/stenc/
Connected to 213.95.165.53:443 from 192.168.x.x:58992
HTTP/2 200
content-type: text/html; charset=utf-8
date: Thu, 25 Aug 2022 15:55:02 GMT
server: Caddy
content-length: 7556
Body stored in: /tmp/tmpbh5xv79c
DNS Lookup: 223ms | TCP Connection: 17ms | TLS Handshake: 21ms | Server Processing: 11666ms | Content Transfer: 1ms
namelookup: 223ms | connect: 240ms | pretransfer: 261ms | starttransfer: 11927ms | total: 11928ms
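To make the caching guess above concrete, here is a tiny sketch of the pattern being speculated about. Every name in it is made up; it only illustrates "rescan periodically, keep the old result while nothing changed" and says nothing about how the API actually works:

import time

cache = {}              # url -> {"checked": ..., "etag": ..., "result": ...}
RESCAN_INTERVAL = 3600  # invented value: re-check a repository at most once per hour

def badge_for(url, fetch_head_commit, run_reuse_lint):
    """Serve a cached badge result, rescanning only when the repository changed.

    fetch_head_commit and run_reuse_lint are placeholders for a cheap change
    check and the expensive clone-and-lint step, respectively.
    """
    entry = cache.get(url)
    now = time.time()
    if entry and now - entry["checked"] < RESCAN_INTERVAL:
        return entry["result"]              # recently checked: serve from cache
    head = fetch_head_commit(url)           # cheap: just ask for the current HEAD
    if entry and entry["etag"] == head:
        entry["checked"] = now              # unchanged: keep the old result
        return entry["result"]
    result = run_reuse_lint(url)            # expensive: clone and lint the repo
    cache[url] = {"checked": now, "etag": head, "result": result}
    return result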
> Perhaps it is a larger number of requests arriving at the same time, with the database locked for the current request until all other requests are handled.
This could fairly easily be remedied by switching from SQLite to PostgreSQL.
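If the API happens to use SQLAlchemy (an assumption on my part, I haven't checked the code), the switch is mostly a connection-URL change plus a connection pool; the point is that PostgreSQL handles concurrent writers with row-level locking instead of one lock on the whole database file:

from sqlalchemy import create_engine

# Before: a single SQLite file, where a writer locks out everyone else.
# engine = create_engine("sqlite:////var/lib/reuse-api/api.db")

# After: PostgreSQL (hypothetical host and credentials), where concurrent
# requests no longer queue behind one file-wide lock.
engine = create_engine(
    "postgresql+psycopg2://reuse:secret@localhost:5432/reuse_api",
    pool_size=10,        # share connections between web workers
    pool_pre_ping=True,  # silently replace stale connections
)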
Yes, I also guess this could be the reason. Does SQLite lock the whole file while another operation is in progress?
And do we have a way to reproduce or measure this bottleneck? I would like to avoid someone switching the database engine only to find that the original problem still persists.
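One way to measure it, sketched below with only the Python standard library: fire a handful of parallel requests at the endpoint from this thread and watch how the latency grows with concurrency. If the median climbs roughly linearly with the number of parallel requests, they are being serialized somewhere (for example behind a database lock). Keep the counts small so this doesn't hammer the public instance:

import concurrent.futures
import statistics
import time
import urllib.request

URL = "https://api.reuse.software/info/github.com/scsitape/stenc/"

def timed_get(_):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=60) as resp:
        resp.read()
    return time.monotonic() - start

for concurrency in (1, 5, 10):
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_get, range(concurrency)))
    print("%2d parallel requests: median %.2fs, max %.2fs"
          % (concurrency, statistics.median(latencies), max(latencies)))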
This should be fixed now. Feel free to reopen if the issue pops up again.