
Forward_fill() and backward_fill() are about 25% slower in Polars than their pandas counterparts

Open Chuck321123 opened this issue 10 months ago • 5 comments

Checks

  • [X] I have checked that this issue has not already been reported.
  • [X] I have confirmed this bug exists on the latest version of Polars.

Reproducible example

import pandas as pd
import numpy as np
import polars as pl

# Define parameters
num_rows = 280000
num_groups = 200

# Generate random data
data = {
    'group': np.random.choice([f'group_{i}' for i in range(num_groups)], size=num_rows),
    'random_value': np.random.rand(num_rows)
}

# Convert data to DataFrame
df = pd.DataFrame(data)

# Randomly set one-third of values to NaN
random_indices = np.random.choice(df.index, size=num_rows // 3, replace=False)
df.loc[random_indices, 'random_value'] = np.nan

df2 = pl.DataFrame(df)

%timeit df["random_value"].bfill()

%timeit df2.select(pl.col("random_value").backward_fill())
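
A pure-Polars variant of the same setup, added as a sketch for reference (reusing num_rows and num_groups from above; it builds proper nulls directly instead of relying on the pandas-to-Polars conversion to turn NaNs into nulls):

rng = np.random.default_rng(0)
mask = np.zeros(num_rows, dtype=bool)
mask[rng.choice(num_rows, size=num_rows // 3, replace=False)] = True

df3 = (
    pl.DataFrame({
        "group": rng.choice([f"group_{i}" for i in range(num_groups)], size=num_rows),
        "random_value": rng.random(num_rows),
        "mask": mask,
    })
    # backward_fill fills nulls (not NaNs), so convert the masked values to nulls
    .with_columns(
        pl.when(pl.col("mask")).then(None)
        .otherwise(pl.col("random_value"))
        .alias("random_value")
    )
    .drop("mask")
)

%timeit df3.select(pl.col("random_value").backward_fill())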

Log output

No response

Issue description

So Polars' backward_fill() and forward_fill() are about 25% slower than pandas' bfill() and ffill(). It would be nice if anyone could find a faster way to run these functions.

Expected behavior

That the Polars equivalents are as fast as, or faster than, their pandas counterparts.

Installed versions

--------Version info---------
Polars:               0.20.18
Index type:           UInt32
Platform:             Windows-11-10.0.22631-SP0
Python:               3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:28:07) [MSC v.1916 64 bit (AMD64)]

----Optional dependencies----
adbc_driver_manager:  <not installed>
cloudpickle:          3.0.0
connectorx:           <not installed>
deltalake:            <not installed>
fastexcel:            <not installed>
fsspec:               <not installed>
gevent:               <not installed>
hvplot:               <not installed>
matplotlib:           3.8.3
nest_asyncio:         1.6.0
numpy:                1.26.4
openpyxl:             3.1.2
pandas:               2.2.1
pyarrow:              15.0.2
pydantic:             <not installed>
pyiceberg:            <not installed>
pyxlsb:               <not installed>
sqlalchemy:           <not installed>
xlsx2csv:             <not installed>
xlsxwriter:           <not installed>

Chuck321123 avatar Apr 04 '24 16:04 Chuck321123

I'm not seeing the same behavior:

242 ms ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
239 ms ± 22.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

d-reynol avatar Apr 04 '24 23:04 d-reynol

Reopening the case, as I still get faster results with the pandas counterpart:

2.21 ms ± 80.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
2.86 ms ± 114 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Let me know if anyone gets something else.

Chuck321123 avatar Apr 24 '24 18:04 Chuck321123

I don't think your test is apples to apples.

df["random_value"].bfill() doesn't return a DataFrame; it returns a Series.

A more apples-to-apples test would compare two function calls that each return a DataFrame, something like:

%%timeit
df2.with_columns(pl.col("random_value").backward_fill())

%%timeit
df.assign(random_value=df["random_value"].bfill())

When I do that comparison with 100M rows and 20% nulls, Polars takes 795 ms and pandas takes 1.44 s.

deanm0000 avatar Apr 25 '24 11:04 deanm0000

@deanm0000 I see. The whole idea is to create a new column in a DataFrame by backward filling. Using with_columns instead of select, I get the following results, where Polars is the second line:

2.33 ms ± 709 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
3.25 ms ± 603 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

The pandas way of adding a new column or manipulating an existing one is usually df["New_Col"] = ..., so it would be somewhat unfair to compare against assign, which "nobody" uses.
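
A sketch of that idiomatic comparison, reusing df and df2 from the reproducer (the "filled" column name is just illustrative):

%timeit df["filled"] = df["random_value"].bfill()

%timeit df2.with_columns(pl.col("random_value").backward_fill().alias("filled"))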

Chuck321123 avatar Apr 25 '24 12:04 Chuck321123

I see your point, but it's not a bug that pandas is faster for this operation.

Someone should correct me if I have this wrong, but I think the difference is that NumPy arrays are mutable whereas Arrow arrays are immutable. That means when you just want to change a subset of values, pandas/NumPy can do it in place, whereas performing the same operation on Arrow arrays means rewriting all the values.
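
A small illustration of that difference (a sketch, not what either library does internally):

import numpy as np
import polars as pl

# NumPy buffers are mutable: a subset of values can be overwritten in place.
arr = np.array([1.0, np.nan, 3.0])
arr[1] = 2.0                 # mutates the existing buffer, no reallocation

# Arrow-backed Polars data is immutable: "replacing" values yields a new Series.
s = pl.Series([1.0, None, 3.0])
s2 = s.fill_null(2.0)        # allocates and returns a new Series
assert s.null_count() == 1   # the original is untouched
assert s2.null_count() == 0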

deanm0000 avatar Apr 25 '24 19:04 deanm0000

As mentioned by others, this is not a fair comparison, as the input/output formats are different: we don't do in-place manipulation, but generate a copy. Also, Polars has proper nulls (which means it has to look in a separate memory location that contains the validity information), whereas pandas only has to look at the values themselves, since it uses NaNs.
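
To illustrate that representation difference (a sketch, not Polars internals):

import numpy as np
import polars as pl

# pandas (float dtype) encodes missing data as NaN inside the value buffer,
# so a fill only has to scan one array.
vals = np.array([1.0, np.nan, 3.0])
print(np.isnan(vals))             # [False  True False]

# Polars tracks nulls in a separate validity bitmap; NaN is an ordinary value.
s = pl.Series([1.0, None, float("nan")])
print(s.is_null().to_list())      # [False, True, False] -- from the validity bitmap
print(s.is_nan().to_list())       # [False, None, True]  -- null propagates, NaN is a value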

Finally, the original test of 280,000 rows is far too small; at that size you're almost benchmarking the Polars DSL parsing/optimizer more than the data manipulation itself.

Repeating the above experiment with 100M rows I get the following results on my Apple M2 machine:

378 ms ± 19.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
742 ms ± 10.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

I'm currently finishing a PR that would reduce the gap to this:

379 ms ± 5.53 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
544 ms ± 15.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Further improvement through branchless filling is still possible, but it's low priority at the moment, as it's rather labour-intensive to write.
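
For the curious, one branchless formulation of forward fill in NumPy (a sketch of the general idea, not Polars' actual implementation; backward fill is the same trick run over the reversed array):

import numpy as np

def ffill_branchless(values: np.ndarray) -> np.ndarray:
    # Index of each valid element, 0 for NaNs.
    idx = np.where(~np.isnan(values), np.arange(len(values)), 0)
    # Running maximum: index of the most recent valid element so far.
    np.maximum.accumulate(idx, out=idx)
    # Gather; leading NaNs map to index 0 and stay NaN.
    return values[idx]

a = np.array([np.nan, 1.0, np.nan, np.nan, 4.0, np.nan])
print(ffill_branchless(a))  # [nan  1.  1.  1.  4.  4.]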

orlp avatar May 24 '24 16:05 orlp