Downloads.jl
download fails with `FDWatcher: bad file descriptor (EBADF)`
On Julia 1.7.3, I've found that downloads sometimes fail with the following error:
```
UNHANDLED TASK ERROR: IOError: FDWatcher: bad file descriptor (EBADF)
Stacktrace:
 [1] try_yieldto(undo::typeof(Base.ensure_rescheduled))
   @ Base ./task.jl:812
 [2] wait()
   @ Base ./task.jl:872
 [3] wait(c::Base.GenericCondition{Base.Threads.SpinLock})
   @ Base ./condition.jl:123
 [4] wait(fdw::FileWatching._FDWatcher; readable::Bool, writable::Bool)
   @ FileWatching /usr/local/julia/share/julia/stdlib/v1.7/FileWatching/src/FileWatching.jl:533
 [5] wait
   @ /usr/local/julia/share/julia/stdlib/v1.7/FileWatching/src/FileWatching.jl:504 [inlined]
 [6] macro expansion
   @ /usr/local/julia/share/julia/stdlib/v1.7/Downloads/src/Curl/Multi.jl:166 [inlined]
```
The line this points to in the Downloads.jl source is https://github.com/JuliaLang/Downloads.jl/blob/b0f23d0ed40cf7c5c5896f957d8163a68dfbc805/src/Curl/Multi.jl#L166
"Sometimes" here means "after millions of S3 requests over multiple days of runtime, with `retry` around the actual request-making code" (`retry` with the default settings, i.e. the default `ExponentialBackOff` schedule with a single retry). When this error occurred, it occurred multiple times, on multiple different pods (which by design are accessing different S3 URIs, though in the same region), so I'm wondering if it is somehow related to the "connection pool corruption" issue with AWS.jl. Another possibly relevant bit of context: the code making the requests is doing an `asyncmap` over dozens (<100) of small S3 GET requests.
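For concreteness, the access pattern described above can be sketched roughly like this (the URIs and helper names are illustrative, not from the actual job):

```julia
using Downloads

# Rough sketch of the pattern described above: dozens of small S3 GETs
# issued concurrently via asyncmap, each wrapped in Base.retry with the
# default settings (delays = ExponentialBackOff(n = 1), i.e. one retry).
function fetch_all(uris::Vector{String})
    get_one = retry(uri -> Downloads.download(uri, IOBuffer()))
    asyncmap(get_one, uris)
end

# The default retry behavior itself can be observed without any network:
attempts = Ref(0)
flaky = retry(() -> (attempts[] += 1) == 1 ? error("transient") : :ok)
result = flaky()  # the first call throws; the single default retry succeeds
```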
I'm afraid this happened in a long-running job that I can't interact with directly and don't have a reprex that I can share, but wanted to open an issue in case someone else has seen this or has advice on how to debug or what other information would be useful!
> "connection pool corruption" issue w/ AWS.jl

Ref https://github.com/JuliaCloud/AWS.jl/issues/552
> When this error occurred, it occurred multiple times, on multiple different pods (which by design are accessing different s3 URIs but still in the same region), so I'm wondering if it is somehow related to the "connection pool corruption" issue w/ AWS.jl
I don't think that AWS#552 would explain this if they are happening on different pods, since each pod should have its own downloader object, so they aren't sharing any connection pools between them. But maybe it could be something like: AWS misbehaves at some point in time and sends back garbage to each pod independently at the same time; that somehow corrupts the downloader object in each pod's AWS module; and then, because we don't retry with fresh downloaders (which is what AWS#552 is about), they each fail on the retried attempts as well, so they all fail at the same time. (?)
Yup, the latter was my hunch.
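For reference, the "retry with fresh downloaders" mitigation that AWS#552 asks for would look roughly like this. This is a hedged sketch, not AWS.jl's actual code; the function name is made up:

```julia
using Downloads

# Hypothetical sketch: construct a brand-new Downloader (and hence a fresh
# libcurl multi handle / connection pool) for each attempt, so that state
# corrupted during a failed attempt cannot poison the retry.
function download_with_fresh_downloader(url; attempts = 2)
    local err
    for _ in 1:attempts
        try
            return Downloads.download(url, IOBuffer();
                                      downloader = Downloads.Downloader())
        catch e
            err = e
        end
    end
    throw(err)
end
```

Each `Downloads.Downloader()` owns its own multi handle, so nothing from a bad attempt is reused on the next one.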
@vtjnash: is there something we could do here? I'm guessing it's the `sock` object that's messed up here, not just the watcher object that's constructed from it. Given that, it seems like the only option is to error and terminate the download. I'm not entirely sure what the right way to do that is. Pass `CURL_CSELECT_ERR` into `curl_multi_socket_action`.
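For illustration, libcurl's socket API can be driven directly through LibCURL.jl (which Downloads uses under the hood). This is only a sketch of the mechanism, not the actual fix: in a real fix the bad fd would come from the watcher and would have transfers attached, whereas here the multi handle is empty, so only the timeout variant of the call can be demonstrated:

```julia
using LibCURL

multi = curl_multi_init()
running = Ref{Cint}(0)
# On a real broken socket one would pass the fd together with the
# CURL_CSELECT_ERR bitmask, telling libcurl to error out any transfer
# using that socket. With no transfers attached, we can only make the
# CURL_SOCKET_TIMEOUT no-op call to show the API shape:
rc = curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, running)
curl_multi_cleanup(multi)
```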
Isn't this a duplicate of https://github.com/JuliaLang/Downloads.jl/issues/186?
Possibly. @kleinschmidt, @ericphanson, can you try using a newer version of `Downloads`? You can load a newer version by dev'ing this package and then changing the UUID. Or, if possible, just try Julia 1.8.
Will do, although without a clear reproducer I'm not sure how far we'll get! I take it the fix for #186 didn't get backported to 1.7.3?
Not yet, but it could be.
Well, actually at this point it probably doesn't make sense because there's unlikely to be a 1.7.4 release, so I think the fix will be to go to 1.8 for this.
I get the same error, but I get it on my local computer and I get it pretty much every time I install or update something. This happens both on 1.7.3 and 1.8.0-rc1.
EDIT: Think I might not have restarted my computer in a while, and now after restarting I haven't seen it again so far. EDIT2: It's back on both 1.7 and 1.8
I'm still getting the same error with Julia 1.8.1...
Note that if #187 should have fixed this issue, it didn't make it to Julia v1.8, because the commit of `Downloads.jl` used in Julia v1.8.1 is https://github.com/JuliaLang/Downloads.jl/commit/2a21b1536aec0219c6bdb78dbb6570fc31a40983, which predates #187.
I am seeing this in CI logs on 1.8.0 as well:
```
Unhandled Task ERROR: IOError: FDWatcher: bad file descriptor (EBADF)
Stacktrace:
 [1] try_yieldto(undo::typeof(Base.ensure_rescheduled))
   @ Base ./task.jl:871
 [2] wait()
   @ Base ./task.jl:931
 [3] wait(c::Base.GenericCondition{Base.Threads.SpinLock})
   @ Base ./condition.jl:124
 [4] _wait(fdw::FileWatching._FDWatcher, mask::FileWatching.FDEvent)
   @ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:535
 [5] wait(fdw::FileWatching.FDWatcher)
   @ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:563
 [6] macro expansion
   @ /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/Downloads/src/Curl/Multi.jl:166 [inlined]
 [7] (::Downloads.Curl.var"#40#46"{Int32, FileWatching.FDWatcher, Downloads.Curl.Multi})()
   @ Downloads.Curl ./task.jl:484
Unhandled Task ERROR: IOError: FDWatcher: bad file descriptor (EBADF)
Stacktrace:
 [1] try_yieldto(undo::typeof(Base.ensure_rescheduled))
   @ Base ./task.jl:871
 [2] wait()
   @ Base ./task.jl:931
 [3] wait(c::Base.GenericCondition{Base.Threads.SpinLock})
   @ Base ./condition.jl:124
 [4] _wait(fdw::FileWatching._FDWatcher, mask::FileWatching.FDEvent)
   @ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:535
 [5] wait(fdw::FileWatching.FDWatcher)
   @ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:563
 [6] macro expansion
   @ /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/Downloads/src/Curl/Multi.jl:166 [inlined]
 [7] (::Downloads.Curl.var"#40#46"{Int32, FileWatching.FDWatcher, Downloads.Curl.Multi})()
   @ Downloads.Curl ./task.jl:484
```
My tests didn't actually fail though; maybe it happened somewhere where I have retries.
I see Julia v1.8.2 is on the same commit as 1.8.1, so sounds like that won't be a fix either. Maybe we can backport it for 1.8.3?
I don't think we've added any features to Downloads since then so we can probably just bump Downloads on 1.8 to latest.