actix-web
Infinite Request Pending on Uploading Large Multipart Files
Expected Behavior When uploading a file larger than the allowed multipart file size limit (50 MB in this case), the server should promptly respond with an appropriate error message, such as "payload reached size limit", and the request should not be left pending indefinitely.
Current Behavior When a file of size around 51 MB is uploaded, the server responds correctly with "payload reached size limit". However, if a much larger file, say around 200 MB, is uploaded, the request remains pending indefinitely, although a "Multipart error" is logged in the console.
[dependencies]
actix-web = "4.4"
actix-multipart = "0.6.1"
#[derive(Debug, MultipartForm)]
struct UploadForm {
    owner: Text<String>,
    #[multipart(rename = "file")]
    files: Vec<TempFile>,
}

#[post("/upload")]
async fn upload(MultipartForm(form): MultipartForm<UploadForm>) -> Result<impl Responder, Error> {
    let owner: &str = &form.owner;
    log::info!("Uploading {} file(s) to {}", form.files.len(), owner);
    for f in form.files {
        let path = format!("{}/{}", UPLOAD_DIR, f.file_name.unwrap());
        log::info!("Saving to {path}");
        f.file.persist(path).unwrap();
    }
    Ok(HttpResponse::Ok().body("File uploaded."))
}
#[get("/")]
async fn index() -> Result<HttpResponse, Error> {
    let html = r#"<html>
    <head><title>Upload Test</title></head>
    <body>
        <form action="/upload" method="post" enctype="multipart/form-data">
            <input type="file" multiple name="file"/>
            <input type="text" name="owner" value="default"/>
            <button type="submit">Submit</button>
        </form>
    </body>
</html>"#;
    Ok(HttpResponse::Ok().body(html))
}
fn handle_multipart_error(err: MultipartError, _req: &HttpRequest) -> Error {
    log::error!("Multipart error: {}", err);
    err.into()
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
    log::info!("Starting HTTP server at {}:{}", HOST, PORT);
    HttpServer::new(move || {
        App::new()
            .wrap(Logger::default())
            .app_data(
                MultipartFormConfig::default()
                    .total_limit(50 * 1024 * 1024) // 50 MB
                    .memory_limit(10 * 1024 * 1024) // 10 MB
                    .error_handler(handle_multipart_error),
            )
            .service(index)
            .service(upload)
    })
    .workers(2)
    .bind((HOST, PORT))?
    .run()
    .await
}
I wonder if this is related to:
https://github.com/actix/actix-web/issues/2695 https://github.com/actix/actix-web/issues/2357
We have a very large API for our application. Among other things, we handle very large files (large video files for transcoding), documents, etc. with actix, and unless we:
// Requires futures_util::TryStreamExt for `try_next`.
pub async fn drain_unused_payload(mut payload: Multipart) -> SystemSyncResult<()> {
    while let Ok(Some(_)) = payload.try_next().await {}
    // Alternatively: payload.for_each(|_| ready(())).await;
    Ok(())
}
on a request that errors or returns early, the connection will (or can) hang indefinitely. There are many reasons to short-circuit an upload, like in your example, and this is a workaround that requires the server to read out the entire stream... but it does resolve the problem.
This is a long-standing issue that hasn't been resolved and has been moved across quite a few milestones over more than a year.
I'm new to web stuff and just doing this as a hobby, but I'm a bit surprised that this still exists as an issue if it has been known for so long. It seems like a fairly big deal for a professional piece of software.
Using the most practical way of uploading, the MultipartForm extractor from this issue, you will have stuck connections any time a user tries to upload something too large. The server logs the 400, if I remember correctly, but the other side does not see anything.
My own hobby project encountered this, and it seems odd that you are left with the following choices:
- Handling a stuck connection on the client side (I guess a timeout will do, though that's rough if you have a very slow connection)
- Accepting the entire stream before returning an error (which is far from ideal if a user tries to upload 5 GB when the file limit is 5 MB)
- Let the client be stuck
Has someone found a manual workaround to close the connection? My digging took a long time to even get here, since I assumed this was an issue on my client side at first.
> Has someone found a manual workaround to close the connection?
// Explicitly mark the connection to be closed:
let mut response = HttpResponse::NoContent().finish();
response.head_mut().set_connection_type(ConnectionType::Close);

// Or, equivalently, via the builder shorthand:
let response = HttpResponse::NoContent().force_close().finish();
@vitdevelop Thanks for sharing that for me and others. I actually managed to do the same thing, just in a worse, more roundabout way. But I'm not smart enough to understand how I can make a MultipartForm do that, since it never hit my endpoint code. So I ended up doing the payload processing manually to return that response.
My gripe is that, be it Firefox or HTMX on my client side, the upload is sometimes attempted twice when the connection is forcefully closed, with no way for me to prevent it on the client side. The progress resets back to the start and climbs back up until the size limit is hit again.
I'm also new to networking, so I'm not sure if it's even possible, but with this way of force-closing the connection the client doesn't even see the error response, nor any returned body/message. At least not the way my client side (HTMX/Ajax) handles it. But for my project, the way things are with the manual workaround is more than enough. Apart from my perfectionism not being satisfied...