BUG: non-reproducible error `FloatingPointError: overflow encountered in multiply` in the following sequence: read_csv followed by to_datetime, with pandas version 2.2.2
Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
import pandas as pd

for ii in range(10000):
    df = pd.read_csv("data.csv", dtype={"ts": float})  # data.csv provided in an attached file
    pd.to_datetime(df["ts"], unit="s", errors="coerce")
Issue Description
I sometimes get the following error with pandas 2.2.2 (I don't get this error with pandas 2.1.4):
Exception has occurred: FloatingPointError
overflow encountered in multiply
  File ".../main.py", line 218, in
    pd.to_datetime(df["ts"], unit="s", errors="coerce")
FloatingPointError: overflow encountered in multiply
The error is not repeatable, hence the loop. I tried to reduce the input file as much as possible while still triggering the error, which is why the CSV file attached to this issue has 200 rows. I don't know whether the issue comes from `read_csv` (I got the same problem with `read_parquet`) or from `to_datetime`. If the `read_csv` call is moved outside the loop and I make a deepcopy at the beginning of each iteration, the problem does not occur, so my hunch is that it is linked to the reading process (`read_csv` in the example).
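For reference, a minimal sketch of that working variant (same data.csv as above; the loop bound is arbitrary):

import copy

import pandas as pd

# Read once outside the loop, then deep-copy per iteration. With this
# variant the FloatingPointError does not appear.
base = pd.read_csv("data.csv", dtype={"ts": float})
for ii in range(10000):
    df = copy.deepcopy(base)
    pd.to_datetime(df["ts"], unit="s", errors="coerce")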
Expected Behavior
I expect the loop body to behave the same on every iteration: either it works every time or it fails every time.
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2ee5a58015ef6f4d15c7226110c9aab8140
python : 3.11.8.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-105-generic
Version : #115~20.04.1-Ubuntu SMP Mon Apr 15 17:33:04 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 65.5.0
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader : None
adbc-driver-postgresql : None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 16.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
I also see this `FloatingPointError: overflow encountered in multiply` error from time to time, and it's always with `pd.to_datetime` where `errors="coerce"` or `errors="ignore"`.
So far I have used code like the one below, and it always works even when the random `FloatingPointError` is encountered (confirmed by the prints); basically, a retry succeeds:
try:
    c = df["a"].copy()
    df["a"] = pd.to_datetime(df["a"], ..., errors="coerce")
except FloatingPointError:
    print("pandas FloatingPointError on column a")
    df["a"] = pd.to_datetime(c, ..., errors="coerce")
You may want to try using Python pickle to preserve Python objects; I find pandas's CSV and Excel read/write to be unreliable. As an example: given two `str` columns, one holding phone numbers whose area codes all start with 0 and the other holding only empty strings, pandas will write them to CSV or Excel without problem, but on reading back you get two columns of timestamps and/or other non-`str` objects.
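A minimal sketch of that difference (the column names and values here are hypothetical):

import pandas as pd

# Hypothetical two-column frame matching the description above.
df = pd.DataFrame({"phone": ["0123456789", "0498765432"], "note": ["", ""]},
                  dtype=str)

# Pickle preserves dtypes and leading zeros exactly.
df.to_pickle("df.pkl")
restored = pd.read_pickle("df.pkl")
assert restored["phone"].tolist() == ["0123456789", "0498765432"]

# A CSV round-trip, by contrast, re-infers dtypes on read unless you pass
# dtype=str (and even then the empty strings come back as NaN).
df.to_csv("df.csv", index=False)
reread = pd.read_csv("df.csv", dtype=str)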
I'm seeing the exact same issue as the OP. No CSV read/write in my case, just a `pd.to_datetime` call that randomly fails but succeeds if re-run on the exact same data...
I also have this issue occurring when reading a parquet file into a DataFrame and then attempting pd.to_datetime(df['col_name'], unit='s', utc=True, errors='ignore').
I also have the exact same behavior when using `pd.to_datetime` in combination with `read_json`.
I converted the floats to integers and it got rid of the error, as far as I can tell; maybe that works for anyone else's use case.
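A minimal sketch of that workaround (assuming the timestamps are whole seconds and the column has no NaNs; fractional seconds would be truncated by the cast):

import pandas as pd

df = pd.read_csv("data.csv", dtype={"ts": float})

# Casting float seconds to int64 before conversion sidesteps the float
# multiplication path where the overflow is reported.
ts = pd.to_datetime(df["ts"].astype("int64"), unit="s", errors="coerce")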
I have the same issue occurring when reading from a CSV file and then calling pd.to_datetime(x * 3600, unit="s", origin=origin, errors="coerce"). I get this `FloatingPointError: overflow encountered in multiply` error from time to time, and on different columns of the same dataframe.
> I converted the floats to integers and it got rid of the error, as far as I can tell; maybe that works for anyone else's use case.
Would this result in a loss of accuracy or precision?
Any updates on this? I've still got pandas pinned to 2.1.4 because of this bug.