Add Fairing for Post-Response Operations
What's missing?
Rocket currently lacks a way to perform operations after a response has been fully sent to the client. For example, I frequently want to log metrics or trigger cleanup tasks after the entire response lifecycle has completed.
While Rocket provides fairings that hook in before a request is routed (on_request) and before a response is sent (on_response), there isn't a clear mechanism for hooking into the point where the response has been completely transmitted to the client.
This functionality would enable developers to:
- Log metrics about the response lifecycle (e.g., latency, success/failure).
- Perform cleanup tasks such as closing resources or rolling back temporary state.
- Trigger asynchronous tasks that should only execute after the response is delivered.
For example, I want to write:
#[launch]
fn rocket() -> _ {
rocket::build()
.attach(PostResponseFairing)
}
Where PostResponseFairing might log metrics like:
impl Fairing for PostResponseFairing {
fn on_post_response(&self, response: &Response, client_ip: IpAddr) {
log::info!("Response sent to client at {client_ip}: {:?}", response);
}
}
Ideal Solution
The ideal solution would involve extending the Fairing trait to include a new method for handling operations once the response has been fully sent.
trait Fairing {
fn on_post_response(&self, response: &Response, client_ip: IpAddr);
}
This method would be called by Rocket's internals after the response is fully written to the socket but before the connection is closed. The method could receive the Response object and optionally the client's IP address or any relevant metadata.
This feature would work seamlessly alongside existing fairings like on_request and on_response.
Why can't this be implemented outside of Rocket?
This feature can't be implemented outside of Rocket without compromise because Rocket currently does not expose any hook or extension point for operations that occur after the response is sent.
While middleware or request guards can manage operations at earlier points in the request/response lifecycle, these do not account for actions that require knowledge of the final state (e.g., ensuring the response has been transmitted).
A fairing for post-response operations would integrate deeply into Rocket's response handling, enabling safe and consistent behavior across applications.
Are there workarounds usable today?
No response
Alternative Solutions
No response
Additional Context
No response
System Checks
- [X] I do not believe that this feature can or should be implemented outside of Rocket.
- [X] I was unable to find a previous request for this feature.
Many of these goals can be achieved in better ways, using the latest version of Rocket on the master branch.
Log metrics about the response lifecycle (e.g., latency, success/failure).
Rocket (at least on master) has switched from log to tracing, and Rocket provides a span for each request. This type of data can be collected using tracing subscribers, which might even have a pre-built solution for the type of statistics you want to collect.
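For instance, a minimal sketch using tracing-subscriber's built-in span timing (FmtSpan::CLOSE logs how long a span was open when it closes, which for Rocket's per-request span covers the full request); exactly how this composes with Rocket's own subscriber setup on master may vary:

use tracing_subscriber::fmt::format::FmtSpan;

fn init_metrics_logging() {
    // Emit an event (including elapsed time) whenever a span closes,
    // which for Rocket's per-request span marks the end of the request.
    tracing_subscriber::fmt()
        .with_span_events(FmtSpan::CLOSE)
        .init();
}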
Perform cleanup tasks such as closing resources or rolling back temporary state.
This should generally be handled by a Drop impl for the relevant type. The response is dropped after it has been fully transmitted (or the client disconnected). (Note - the Responder type is dropped earlier, but a streamed body will not be dropped until after the response has been sent).
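For instance, a minimal sketch of the Drop-based approach; the type and the cleanup it performs are illustrative, not something Rocket provides. If a value like this is moved into the response (e.g., kept alive by a streamed body), its Drop runs only once the body has been fully written or the client has disconnected:

use std::path::PathBuf;

// Illustrative guard: owns a temporary directory that backs a streamed response.
struct TempDirCleanup {
    path: PathBuf,
}

impl Drop for TempDirCleanup {
    fn drop(&mut self) {
        // Best-effort cleanup; runs when the owning value (e.g., the streamed
        // body it was moved into) is dropped after transmission.
        if let Err(e) = std::fs::remove_dir_all(&self.path) {
            eprintln!("failed to remove temp dir {}: {e}", self.path.display());
        }
    }
}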
Trigger asynchronous tasks that should only execute after the response is delivered.
This can be handled much like the previous one, but I'm curious what type of tasks you want to trigger here. Technically, only the client actually knows when, whether and what response was delivered. There are a myriad of network failures that can leave either the server or client unsure whether their message was received, and nothing can fully prevent this. Rather, I would recommend looking into designing your API with idempotence in mind.
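A hedged sketch of that pattern, assuming the value is dropped inside Rocket's Tokio runtime (so tokio::spawn is available when Drop fires); the NotifyOnDrop type and the work it spawns are illustrative:

// Illustrative: spawn follow-up work when the value is dropped, i.e. after the
// response (or its streamed body) holding it has been fully written or abandoned.
struct NotifyOnDrop {
    event: String,
}

impl Drop for NotifyOnDrop {
    fn drop(&mut self) {
        let event = std::mem::take(&mut self.event);
        // Drop cannot be async, so hand the work off to the runtime.
        // Note: this only tells us the server finished writing; it cannot
        // guarantee the client actually received the response.
        tokio::spawn(async move {
            // e.g. enqueue a job, write an audit record, etc.
            println!("response finished: {event}");
        });
    }
}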
I want to log and save request data to the database after the response has been fully sent to the client. The API streams its response, so capturing the complete request/response lifecycle is essential. Currently, I handle this asynchronously with a delay (using tokio::sleep) to allow the stream to complete before saving to the database. However, this approach is unreliable and not scalable, especially for long or dynamic streams. I'm looking for a more robust and sustainable solution.
You would be best off with a tracing subscriber. Rocket exposes a span that fully covers the request (from when Rocket starts processing it, to when Rocket finishes transmitting the response).
This might look something like this:
use std::time::Instant;

use tracing::span::{Attributes, Id};
use tracing::Subscriber;
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::registry::LookupSpan;

struct DbSubscriber {}

// Per-request data stored in the span's extensions.
struct RequestMeta {
    start: Instant,
}

impl<S: Subscriber + for<'a> LookupSpan<'a>> Layer<S> for DbSubscriber {
    fn on_new_span(&self, _: &Attributes<'_>, id: &Id, ctxt: Context<'_, S>) {
        let span = ctxt.span(id).expect("new_span: span does not exist");
        // Only track Rocket's per-request span.
        if span.name() == "request" {
            span.extensions_mut().insert(RequestMeta {
                start: Instant::now(),
            });
        }
    }

    fn on_close(&self, id: Id, ctxt: Context<'_, S>) {
        // The request span closes once Rocket has finished transmitting the response.
        if let Some(meta) = ctxt
            .span(&id)
            .expect("close_span: span does not exist")
            .extensions()
            .get::<RequestMeta>()
        {
            let _elapsed = meta.start.elapsed();
            // Kick off database job to store request data
        }
    }
}
// In launch:
tracing_subscriber::registry()
.with(RequestId::layer())
.with(RocketFmt::<Pretty>::default())
.with(DbSubscriber {})
.init();
I’m trying to understand how to trigger the on_new_span and on_close methods, and I’ve been attempting it as shown below, but it’s not having any effect:
#[post("/<model>", data = "<data>")]
async fn handle_model(
l402_info: l402::L402Info,
model: &str,
data: Data<'_>,
auth_and_payment_url: AuthorizationAndPaymentUrl,
db_pool: &State<sqlx::PgPool>,
) -> Result<
ReaderStream![StreamReader<impl Stream<Item = Result<Bytes, std::io::Error>>, Bytes>],
(Status, Json<Response>),
> {
let span = span!(Level::INFO, "my_span");
let _entered = span.enter();
// other logic
}
Additionally, do I need to access the context within the route in order to modify any span?
Slightly off topic, but how are spans passed in Rocket? I thought tracing wasn't supported?
I would think you'd pass a span using the State API, if anything?
- https://rocket.rs/guide/v0.5/state/#state
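For reference, a minimal sketch of the managed-state API that link describes (the AppMetrics type is illustrative); whether a tracing Span is a good fit for managed state is a separate question:

use std::sync::atomic::{AtomicU64, Ordering};

use rocket::State;

struct AppMetrics {
    requests: AtomicU64,
}

#[rocket::get("/count")]
fn count(metrics: &State<AppMetrics>) -> String {
    // Managed state is shared across requests, so use atomics or locks.
    let n = metrics.requests.fetch_add(1, Ordering::Relaxed);
    format!("request #{n}")
}

#[rocket::launch]
fn rocket() -> _ {
    rocket::build()
        .manage(AppMetrics { requests: AtomicU64::new(0) })
        .mount("/", rocket::routes![count])
}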
+1 for a new fairing trait for post_response
Might want to look at instrument, as that might implicitly pass the span to a response fairing (I haven't tested that edge case though).
I thought tracing wasn't supported?
On master branch, Rocket has switched to using tracing for all logging. This will not be part of a complete release until Rocket 0.6 at the earliest.
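In practice that means route code can use the ordinary tracing macros directly and they flow through whatever subscriber Rocket (or your own setup) installs; a tiny illustrative example, assuming tracing is a direct dependency:

#[rocket::get("/hello")]
fn hello() -> &'static str {
    // Emitted through the installed tracing subscriber; no log-specific API needed.
    tracing::info!(route = "hello", "handling request");
    "Hello, world!"
}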
I'm trying to understand how to trigger the on_new_span and on_close methods [...] Additionally, do I need to access the context within the route in order to modify any span?
Any leads for this, @the10thWiz?
span!(Level::INFO, "my_span")
Hi @DhananjayPurohit, try:
let _my_span = tracing::info_span!("my_span").entered();
or
#[tracing::instrument(name = "my_span", skip(db_pool))]
#[post("/<model>", data = "<data>")]
async fn handle_model(...)
@the10thWiz, that is fantastic news. Do you need someone to help test the implementation? (Volunteering myself.)
@jbcurtin It wouldn't hurt. @SergioBenitez actually created the implementation, and he has done a decent amount of testing.