`BoundsError` on `flush` in a threaded run.
I frequently get the following error. The progress bar is used in a threaded run in JupyterLab and is initialized with `safe_lock=true`. The error consistently happens during the `flush` call. Any suggestions for a quick workaround?
```
BoundsError: attempt to access MemoryRef{UInt8} at index [1]
Stacktrace:
  [1] memoryref
    @ ./boot.jl:523 [inlined]
  [2] take!(io::IOBuffer)
    @ Base ./iobuffer.jl:469
  [3] send_stream(name::String)
    @ IJulia ~/.julia/packages/IJulia/dR0lE/src/stdio.jl:144
  [4] flush(io::IJulia.IJuliaStdio{Base.PipeEndpoint})
    @ IJulia ~/.julia/packages/IJulia/dR0lE/src/stdio.jl:277
  [5] _updateProgress!(p::ProgressMeter.Progress; showvalues::Tuple{}, truncate_lines::Bool, valuecolor::Symbol, offset::Int64, keep::Bool, desc::Nothing, ignore_predictor::Bool, force::Bool, color::Symbol, max_steps::Int64)
    @ ProgressMeter ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:258
  [6] _updateProgress!
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:216 [inlined]
  [7] #updateProgress!#9
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:212 [inlined]
  [8] updateProgress!
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:210 [inlined]
  [9] #19
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:490 [inlined]
 [10] #13
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:454 [inlined]
 [11] lock(f::ProgressMeter.var"#13#14"{ProgressMeter.var"#19#20"{@Kwargs{}, ProgressMeter.Progress, Int64}}, l::ReentrantLock)
    @ Base ./lock.jl:232
 [12] lock_if_threading
    @ ~/.julia/packages/ProgressMeter/kVZZH/src/ProgressMeter.jl:453 [inlined]
```
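For context, here is a minimal sketch of the kind of setup involved (the loop body and counts are hypothetical; `safe_lock=true` is the option that routes updates through `lock_if_threading`, per the stacktrace above):

```julia
using ProgressMeter

n = 10_000
p = Progress(n; safe_lock=true)  # serialize updates across threads

Threads.@threads for i in 1:n
    # ... real workload here ...
    next!(p)  # each update may flush the IJulia stdio stream
end
```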
Does it happen only with JupyterLab, and not in a local Jupyter notebook or in the terminal? Do you have an MWE? Even if it fails only 1 in 100 times, that's a start for testing.
Maybe JupyterLab means something different here; I'm running a jupyter-lab notebook on a remote machine (I'm pretty sure it would still happen if I ran locally, since I'm just port forwarding to the remote machine). I haven't tried in a terminal. I think an MWE might be challenging because it might require a 256-thread machine 😆.
I'm not sure what the difference is; I've only ever used IJulia with (I guess) the classic Jupyter notebook.
In addition to the classic Jupyter Notebook, IJulia also works with JupyterLab.
Does it happen often enough that you could try to replicate it by removing the workload and repeating until it happens?
Without an MWE, it might be challenging to help you ^^
https://github.com/JuliaLang/julia/issues/6297 might be similar, but it's very old and doesn't really have a resolution.
I reduced the call frequency of `next!` (only calling it 1/6th of the time), which seems to mitigate the issue, though I'll need more time to know for sure. I'll also need some time to create an MWE, if that's even possible, since these kinds of bugs are highly dependent on the call pattern/workload.
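Roughly, the mitigation looks like this (a sketch; it assumes `next!`'s `step` keyword to keep the counter accurate while updating less often):

```julia
# Update the bar only on every 6th iteration; step=6 keeps the count right.
Threads.@threads for i in 1:n
    # ... workload ...
    if i % 6 == 0
        next!(p; step=6)
    end
end
```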
I've reduced the call frequency of `next!` even more and haven't hit this error since. However, I did make a few attempts to recreate the error using the original call frequency and an artificial workload composed of random sleeps, but no artificial workload seems to reproduce it.
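For reference, the attempts looked something like this (random sleeps standing in for the real workload; the constants are made up, and this did not trigger the error):

```julia
using ProgressMeter

n = 100_000
p = Progress(n; safe_lock=true)

Threads.@threads for i in 1:n
    sleep(rand() / 1000)  # artificial workload: random short sleep
    next!(p)              # original call frequency: update every iteration
end
```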
This was actually a bug in IJulia.jl.