Inconsistent interaction between mo.ui.refresh and BuckarooWidget
Describe the bug
I am working on a styling gallery for buckaroo. There are many different configs behind a dropdown, and I tried to iterate through the configs with mo.ui.refresh.
When I run this, especially at a 1-second interval, the Buckaroo UI becomes inconsistent. It seems more stable at 5 seconds.
This is likely due to buckaroo's initial rendering speed.
Could someone give my notebook a once-over and check how I'm using refresh?
Also, after running this for a while in WASM, I got a MemoryError.
There is a memory leak somewhere. I wonder if it would be better to pre-define the buckaroo widgets rather than calling
DFViewerShortHelper in the top code block.
As is often the case, I don't think this is necessarily a bug in Marimo; I'm just trying to figure out the interactions between my widget and Marimo.
Environment
https://marimo.io/p/@paddy-mullen/buckaroo-refresh-state-bug
This also happens locally.
{
  "marimo": "0.12.8",
  "OS": "Darwin",
  "OS Version": "22.6.0",
  "Processor": "arm",
  "Python Version": "3.13.2",
  "Binaries": {
    "Browser": "135.0.7049.85",
    "Node": "v18.20.4"
  },
  "Dependencies": {
    "click": "8.1.8",
    "docutils": "0.21.2",
    "itsdangerous": "2.2.0",
    "jedi": "0.19.2",
    "markdown": "3.7",
    "narwhals": "1.33.0",
    "packaging": "24.2",
    "psutil": "7.0.0",
    "pygments": "2.19.1",
    "pymdown-extensions": "10.14.3",
    "pyyaml": "6.0.2",
    "ruff": "0.11.2",
    "starlette": "0.46.1",
    "tomlkit": "0.13.2",
    "typing-extensions": "4.13.2",
    "uvicorn": "0.34.0",
    "websockets": "15.0.1"
  },
  "Optional Dependencies": {
    "anywidget": "0.9.18",
    "pandas": "2.2.3",
    "polars": "1.26.0",
    "pyarrow": "19.0.1"
  },
  "Experimental Flags": {}
}
Code to reproduce
https://marimo.io/p/@paddy-mullen/buckaroo-refresh-state-bug
The key cells follow. First, the cell where I display the UI:
mo.vstack(
    [
        mo.hstack([dropdown_dict, refresh_timer]),
        DFViewerShortHelper(dropdown_dict.value[0], column_config_overrides=dropdown_dict.value[1]),
        mo.hstack(
            [
                mo.md(dropdown_dict.value[2]),
                mo.ui.text_area(format_json(dropdown_dict.value[1]), disabled=True, max_length=500, rows=15, full_width=True),
            ],
            widths="equal",
        ),
    ]
)
Next, the cell where I define the state variable:
get_dd_val, set_dd_val = mo.state("colormap_config")
Then I define the dropdown:
# The DFs and configs are defined in the hidden cells above. Unhide them for details.
dfs = {
    "float_config": float_config,
    "colormap_config": colormap_config,
    "color_from_column": color_from_col_config,
}

def DFViewerShortHelper(df, column_config_overrides):
    bw = DFViewerInfinite(df, column_config_overrides=column_config_overrides, pinned_rows=[])
    return bw

dropdown_dict = mo.ui.dropdown(
    options=dfs,
    value=get_dd_val(),
    label="Choose the config",
)
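For context, each entry in dfs is a (df, column_config_overrides, description) tuple, which is why the display cell indexes dropdown_dict.value[0], [1], and [2]. A purely hypothetical example of that shape (the real configs live in the hidden cells of the notebook):

import pandas as pd

# Hypothetical shape only; the actual float_config is defined in a hidden cell.
float_config = (
    pd.DataFrame({"a": [1.2345, 6.789]}),  # the dataframe to display
    {"a": {}},                             # column_config_overrides (real contents elided)
    "Markdown describing this config",     # rendered via mo.md(...) in the display cell
)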
And finally, the cell where I define the refresh timer:
def iter_num(callback_str):
    period, wrapped_iter = callback_str.split(" ")
    return int(wrapped_iter[1:-1])

dd_keys = list(dropdown_dict.options.keys())

def change_dd(callback_str):
    count = iter_num(callback_str)
    offset = count % len(dd_keys)
    set_dd_val(dd_keys[offset])

refresh_timer = mo.ui.refresh(options=[1, 5, 10, 30], default_interval=5, label="interval", on_change=change_dd)
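To spell out what that callback is doing: I'm assuming the value mo.ui.refresh passes to on_change looks like "5s (3)", i.e. the interval followed by the tick count in parentheses; that's the format iter_num expects, not something I've confirmed against the docs. A quick usage example:

# Assumed callback values; the exact format is inferred from iter_num above.
assert iter_num("5s (0)") == 0
assert iter_num("5s (7)") == 7
# change_dd("5s (7)") then selects dd_keys[7 % 3] == dd_keys[1], i.e. "colormap_config".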
I haven't looked deeply enough, but my guess is that it's the slow startup time from the anywidget data transfer.
- Can you figure out how much data is being transferred / how long each startup takes? (A rough way to measure this is sketched after this list.)
- Is the memory leak from marimo or your widget? Is there any way you can profile this with the Chrome profiler?
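Something along these lines might give a rough number. This is only a sketch: it assumes the buckaroo widget is an anywidget subclass that exposes its JS bundle via _esm, and that float_config is one of the (df, overrides, description) tuples defined in the notebook above.

import time

# Hedged measurement of construction time and JS bundle size for one config.
t0 = time.perf_counter()
bw = DFViewerShortHelper(float_config[0], column_config_overrides=float_config[1])
elapsed_ms = (time.perf_counter() - t0) * 1000

esm_chars = len(str(getattr(bw, "_esm", "")))  # 0 if the widget doesn't expose _esm
print(f"construction: {elapsed_ms:.1f} ms, _esm: ~{esm_chars / 1e6:.2f} MB of JS")

This only covers Python-side construction; the frontend render time would still need the Chrome profiler.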
It looks like a Python memory leak in WASM. I made a reduced test case that is basically a huge _esm module in an anywidget:
https://marimo.io/p/@paddy-mullen/notebook-h56lqy
Here are the relevant lines:
long_str = """abcdefghijklmnopqrstuvwxyz""" * 220_000

class CounterWidget(anywidget.AnyWidget):
    _esm = f"console.log('{long_str}');" + """
    function render({ model, el }) {
And the bit that gets refreshed:
mo.vstack(
    [
        mo.hstack([dropdown_dict, refresh_timer]),
        CounterWidget(),
    ]
)
This fails after 70-120 iterations.
I would expect each CounterWidget() call to be ephemeral and ready for garbage collection the next time the cell executes.
I also ran this locally against regular Python and saw memory usage keep increasing; calling gc.collect() didn't reduce it.
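For reference, this is roughly how I checked it outside marimo. It's a hedged sketch rather than the exact script: psutil is just a convenient way to read RSS, and I haven't verified that a bare Python process reproduces the marimo/WASM behaviour exactly.

import gc
import psutil

proc = psutil.Process()

def rss_mb():
    return proc.memory_info().rss / 1e6

print(f"start: {rss_mb():.1f} MB")
for i in range(100):
    w = CounterWidget()  # same class as the reduced test case above
    del w
    gc.collect()         # in my runs this did not bring memory back down
    if i % 20 == 0:
        print(f"iteration {i}: {rss_mb():.1f} MB")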
Now that I understand the bug, I see a way around it. This isn't urgent for me. But I thought I'd flag the memory leak as something to look into.
Thanks for flagging this and for the minimal repro. This is super helpful for us when fixing it.