A proposal for improving http-api capacity
Some http-api endpoints don't scale well under load. One example is https://pine.radicle.garden:8777/v1/projects/rad:git:hnrk86sjyrxmt8nyqfd86ctudogdmphhg8f6y/commits?parent=47596419a5fcc9c360d5fee90e2fddbe733e1386&verified=true.
Many concurrent requests bottleneck on access to the repository on disk.
Given that read access dominates write access, we can cache the responses at the HTTP level with Varnish or nuster. Taking the above API endpoint as an example, the commit SHA serves as a perfect ETag. Since a commit SHA identifies immutable content, we don't need to set any TTL, and the cache still never produces stale responses. The only case where such a cache goes out of date is when a client does git push --force. The cache can then be purged by making a special HTTP request from the Rust code to Varnish.
I approve of this. However, I wonder whether there's something we could use as part of axum, to avoid complicating the deployment?
Not as part of axum, but we could cache computed responses in-process with stretto. That, of course, invites one of the three hard problems in computer science.