
Memory Error: IOStream.Flush timed out

acerock6 opened this issue 7 years ago • 26 comments

Hi,

I was running a long session in a Jupyter notebook. I mistakenly ran a function with no arguments and the kernel got stuck. After a while, I saw the error: "Memory Error. IOStream.Flush timed out". When I checked the OS task manager, there was sufficient memory available. What's weird is that the kernel got stuck at a simple function call with no arguments (something like do_something()). Does anyone have any idea how this might have occurred, and what's the best way to avoid it in the future? (I lost 10-12 hours of work due to this error.) TIA

acerock6 avatar Aug 15 '18 07:08 acerock6

It happened to me also, while trying to see the output of a function that used multiprocessing.

RubensZimbres avatar Dec 17 '18 13:12 RubensZimbres

I had the same issue when running a program in jupyter kernel that also was using multiprocessing module.

MustafaWaheed91 avatar Jun 07 '19 00:06 MustafaWaheed91

I am also having the same issue.

Abhishek-31 avatar Jun 18 '19 16:06 Abhishek-31

me too :/

kribu avatar Jul 01 '19 18:07 kribu

Same here. Jupyter was up for 4 days and I had to kill it with Ctrl-C.

Happened again. Jupyter notebook was running on AWS and I was using papermill to run notebooks.

yuhang-lin avatar Sep 28 '19 18:09 yuhang-lin

Getting this issue when trying to import a module installed for local development (pip install -e .)

zinglax avatar Oct 01 '19 12:10 zinglax

After upgrading my Mac from Mojave to Catalina, I experience the same issue with a piece of code that worked before.

omerb01 avatar Oct 11 '19 09:10 omerb01

I am having this problem as well. Has anyone found a reason for this and how to address it? I was running a tsfresh job:

from tsfresh import select_features

rel = select_features(extracted_features, y)

Schmidtbit avatar Oct 21 '19 16:10 Schmidtbit

me too :/

lymweb avatar May 14 '20 01:05 lymweb

Hi, I had the same problem and it was solved by restarting Jupyter: exit and then run the jupyter notebook command again.

rafailagln avatar May 27 '20 12:05 rafailagln

...me too

Kimchangheon avatar Oct 16 '20 05:10 Kimchangheon

Same problem here. Instead of exiting and relaunching the Jupyter notebook, is there any way to fix it from inside the program? Thanks.

waterflyer avatar Mar 15 '21 19:03 waterflyer

Just ran into this problem too

jkim2mozjay avatar Jul 05 '21 23:07 jkim2mozjay

running into this problem myself :/

Pranav-Goel avatar Aug 13 '21 18:08 Pranav-Goel

me too

wfxu avatar Aug 25 '21 07:08 wfxu

I'm also having this problem applying a function to a dask dataframe.

StephanieAuer-Moodys avatar Nov 20 '21 18:11 StephanieAuer-Moodys

This problem is still happening. lyrics_list_clean = list(allpossible['letras'].apply(clean_lyrics, convert_dtype=str))
freezes at only about 200, again at about 900, then makes no progress at all.

No amount of restarting jupyter helps

jaded0 avatar Jan 23 '22 04:01 jaded0

In Jupyter Notebook I put a 1-second sleep in the iterations; that solved it for me.

import time

# Pause briefly after each iteration so the kernel has time to flush output.
for item in items:  # 'items' is whatever iterable is being looped over
    do_multithreading_thing()
    time.sleep(1)

daniel-fry avatar May 04 '22 12:05 daniel-fry

same thing here

Reapor-Yurnero avatar May 10 '22 22:05 Reapor-Yurnero

I coded warnings.simplefilter("error") in a Jupyter notebook and got that error. Environment: macOS Mojave, Python 3.9.12, jupyter 1.0.0, tornado 6.2.

pycetra avatar Jul 27 '22 08:07 pycetra

same here when using concurrent.futures.ProcessPoolExecutor()

Lospub avatar Sep 09 '22 05:09 Lospub

Same here, but currently trying multiprocess (a forked version of the standard multiprocessing module). I have a continuously running script that uses multiprocess and always crashes after many hours of running. Six hours running with multiprocess at the moment, but that's nothing special; beyond 48 hrs would be an improvement.

Update: I took a good look at my code and removed all the dels and gc.collects that were a holdover from before using multiprocess, where I was attempting to resolve a memory leak from repeatedly plotting with matplotlib. Those were likely causing an error by trying to reference variables that had been deleted. So far so good, continuously running a script with multiprocessing for ~24 hrs.

Update: Multiprocessing crashed after 3 days. Multiprocess has now been running for over 4 days. However, memory usage is increasing ~2% per day.

Another update: Died after 4 days. All I see for errors is a bunch of "WARNING:traitlets:kernel died: 25.000293016433716". Time to implement logging...

gwild avatar Dec 18 '22 02:12 gwild

Set the size of the output cache. The default is 1000; you can change it permanently in your config file. Setting it to 0 completely disables the caching system, and the minimum value accepted is 20 (if you provide a value less than 20, it is reset to 0 and a warning is issued). This limit is defined because otherwise you'll spend more time re-flushing a too-small cache than working. (link)
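For reference, a minimal sketch of what that might look like in the config file, assuming the c.InteractiveShell.cache_size traitlet mentioned further down in this thread (run ipython profile create first if ~/.ipython/profile_default/ipython_config.py doesn't exist yet):

# In ~/.ipython/profile_default/ipython_config.py
c = get_config()  # get_config() is provided by IPython when it loads this file

# Shrink the output cache (default 1000). 0 disables caching entirely;
# values from 1 to 19 are reset to 0 with a warning.
c.InteractiveShell.cache_size = 20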

nice job

emad313a avatar Dec 19 '22 06:12 emad313a

got the same problem.

When running !ls in IPython it takes <1 sec, but in JupyterLab or in VS Code notebooks it takes ~20 sec with "IOStream.flush timed out" warnings.

I have added c.InteractiveShell.cache_size = 0 in ipython_config.py but it has no effect.

More help would be welcome.

castorfou avatar May 13 '23 18:05 castorfou

After quite a lot of head scratching I came up with this solution and explanation: https://stackoverflow.com/a/77172656/2305175

TL;DR: Try limiting your output to the console as much as possible while working with threads/tasks, especially while waiting on them. This also means limiting tqdm usage/updates.
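To illustrate the idea, a minimal sketch assuming a ThreadPoolExecutor workload; process_item here is just a hypothetical placeholder worker:

import concurrent.futures

def process_item(x):
    return x * x  # stand-in for the real work

items = list(range(10_000))

with concurrent.futures.ThreadPoolExecutor() as pool:
    done = 0
    for result in pool.map(process_item, items):
        done += 1
        # Print progress only occasionally instead of on every iteration
        # (or every tqdm tick), so the kernel doesn't have to flush
        # thousands of small messages while it is busy with the workers.
        if done % 1000 == 0:
            print(f"processed {done}/{len(items)} items")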

fvolchyok avatar Sep 25 '23 12:09 fvolchyok

In your environment (mine is called noob), modify flush_timeout in the OutStream class in iostream.py as you like. I am using Python 3.11:

kate anaconda3/envs/noob/lib/python3.11/site-packages/ipykernel/iostream.py

and change it back again after you're done.
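As an alternative sketch to editing site-packages, the timeout can probably be raised at runtime from inside the notebook, assuming OutStream exposes a flush_timeout attribute as it does in recent ipykernel versions:

# Raise the flush timeout from inside the notebook instead of editing the
# installed iostream.py. Under a Jupyter kernel, sys.stdout/sys.stderr are
# ipykernel OutStream instances.
import sys
from ipykernel.iostream import OutStream

for stream in (sys.stdout, sys.stderr):
    if isinstance(stream, OutStream):
        stream.flush_timeout = 60  # seconds; the shipped default is much lower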

noobandnothing avatar Dec 24 '23 03:12 noobandnothing