BUG: "ValueError: ndarray is not C-contiguous" when aggregating concatenated dataframe
Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import numpy as np
import pandas as pd

for dtype in [np.int8, pd.Int8Dtype()]:
    df1 = pd.DataFrame(
        data={
            "col1": pd.Series([1, 2, 3], index=["A", "B", "C"], dtype=dtype),
        },
        index=pd.Index(["A", "B", "C"], dtype=pd.StringDtype()),
    )
    df2 = pd.DataFrame(
        data={
            "col2": pd.Series([4, 5, 6], index=["D", "E", "F"], dtype=dtype),
        },
        index=pd.Index(["D", "E", "F"], dtype=pd.StringDtype()),
    )
    df = pd.concat([df1, df2])
    group = df.T.groupby(df.columns)
    max_df = group.max()
    print(max_df)
    print()
Issue Description
I'm running into a "ValueError: ndarray is not C-contiguous" exception when attempting to aggregate a concatenated dataframe. Strangely, though, the exception only seems to occur if the column dtypes are pd.Int8Dtype(), but not if they are np.int8.
The example illustrates this issue. The first iteration of the loop runs properly, printing the following output:
        A    B    C    D    E    F
col1  1.0  2.0  3.0  NaN  NaN  NaN
col2  NaN  NaN  NaN  4.0  5.0  6.0
However, the second iteration raises an unhandled exception on the group.max() call. Here is my (sanitized) callstack:
Traceback (most recent call last):
  File "/Users/redacted/git/myproject/foo.py", line 19, in <module>
    max_df = group.max()
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 3330, in max
    return self._agg_general(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1906, in _agg_general
    result = self._cython_agg_general(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1998, in _cython_agg_general
    new_mgr = data.grouped_reduce(array_func)
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/internals/managers.py", line 1473, in grouped_reduce
    applied = blk.apply(func)
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/internals/blocks.py", line 393, in apply
    result = func(self.values, **kwargs)
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/groupby.py", line 1973, in array_func
    result = self._grouper._cython_operation(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 830, in _cython_operation
    return cy_op.cython_operation(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 540, in cython_operation
    return values._groupby_op(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/arrays/masked.py", line 1587, in _groupby_op
    res_values = op._cython_op_ndim_compat(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 329, in _cython_op_ndim_compat
    res = self._call_cython_op(
  File "/Users/redacted/Library/Caches/pypoetry/virtualenvs/myproject-f89bfaSs-py3.9/lib/python3.9/site-packages/pandas/core/groupby/ops.py", line 418, in _call_cython_op
    func(
  File "groupby.pyx", line 1835, in pandas._libs.groupby.group_max
  File "<stringsource>", line 663, in View.MemoryView.memoryview_cwrapper
  File "<stringsource>", line 353, in View.MemoryView.memoryview.__cinit__
ValueError: ndarray is not C-contiguous
I discovered this issue while upgrading my code from pandas v1.5.3 to v2.2.0. With some trial and error, I was able to determine that it was introduced in the v2.1.0 release (i.e., the issue does not occur with v2.0.3).
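For now, one possible workaround may be to deep-copy the transposed frame before grouping, so that the aggregation runs on freshly allocated (and therefore contiguous) arrays. This is only an untested sketch:

# Untested workaround sketch: copying the transposed frame allocates fresh,
# contiguous backing arrays before the groupby aggregation runs.
transposed = df.T.copy()
group = transposed.groupby(df.columns)
max_df = group.max()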
Expected Behavior
I would expect both iterations of the for-loop in the example code to behave identically. However, only the first iteration succeeds; the second iteration raises an unhandled exception on the group.max() call.
Installed Versions
INSTALLED VERSIONS
commit                : fd3f57170aa1af588ba877e8e28c158a20a4886d
python                : 3.9.17.final.0
python-bits           : 64
OS                    : Darwin
OS-release            : 23.3.0
Version               : Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
machine               : x86_64
processor             : i386
byteorder             : little
LC_ALL                : None
LANG                  : en_US.UTF-8
LOCALE                : en_US.UTF-8

pandas                : 2.2.0
numpy                 : 1.26.4
pytz                  : 2024.1
dateutil              : 2.8.2
setuptools            : 69.0.3
pip                   : 23.0.1
Cython                : None
pytest                : 7.4.4
hypothesis            : None
sphinx                : None
blosc                 : None
feather               : None
xlsxwriter            : None
lxml.etree            : None
html5lib              : None
pymysql               : None
psycopg2              : 2.9.9
jinja2                : 3.1.3
IPython               : None
pandas_datareader     : None
adbc-driver-postgresql: None
adbc-driver-sqlite    : None
bs4                   : None
bottleneck            : None
dataframe-api-compat  : None
fastparquet           : None
fsspec                : 2023.12.2
gcsfs                 : None
matplotlib            : None
numba                 : None
numexpr               : None
odfpy                 : None
openpyxl              : 3.1.2
pandas_gbq            : None
pyarrow               : 12.0.1
pyreadstat            : None
python-calamine       : None
pyxlsb                : None
s3fs                  : 2023.12.2
scipy                 : 1.12.0
sqlalchemy            : 2.0.25
tables                : None
tabulate              : 0.9.0
xarray                : None
xlrd                  : None
zstandard             : None
tzdata                : 2023.4
qtpy                  : None
pyqt5                 : None
Thanks for the report. Here is a simpler reproducer showing this isn't caused by concat, StringDtype columns, or the presence of NA values:
dtype = "Int64"
df = pd.DataFrame(
{
"col1": pd.Series([1, 2, 3], dtype=dtype),
"col2": pd.Series([1, 2, 3], dtype=dtype),
},
index=pd.Index(["A", "B", "C"]),
)
print(df._mgr.blocks[0].values._data.flags['F_CONTIGUOUS'])
# True
print(df.T._mgr.blocks[0].values._data.flags['F_CONTIGUOUS'])
# False
Doing the same with non-EA dtypes, the transpose is F-contiguous (note: the groupby code takes a transpose, so the F- and C-contiguous flags get swapped).
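A minimal sketch of that check with a plain NumPy dtype might look like this (the df_np name is just for illustration; here the block values are a 2-D ndarray rather than a masked array, so .flags can be inspected directly):

import pandas as pd

df_np = pd.DataFrame(
    {
        "col1": pd.Series([1, 2, 3], dtype="int64"),
        "col2": pd.Series([1, 2, 3], dtype="int64"),
    },
    index=pd.Index(["A", "B", "C"]),
)
# Per the note above, this reports F-contiguous for non-EA dtypes, so the
# transpose taken inside the groupby code ends up C-contiguous and the
# cython aggregation succeeds.
print(df_np.T._mgr.blocks[0].values.flags["F_CONTIGUOUS"])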
@rhshadrach Slight correction to your example. The first print() statement actually outputs True, not False:
print(df._mgr.blocks[0].values._data.flags["F_CONTIGUOUS"])
# True
Thanks! Fixed.