datacompy
Inequality when comparing two empty numpy arrays
Comparison of two empty numpy arrays currently returns False, which results in showing diffs where there shouldn't be any.
This is due to the way numpy compares empty arrays.
Running bool(np.array([]) == np.array([])) returns False and throws this warning:
The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty.
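A minimal illustration of the underlying numpy behavior (not datacompy code): the element-wise comparison of two empty arrays is itself an empty array, truth-testing it directly is what triggers the warning above, and reducing it explicitly with .all() is vacuously True.

```python
import numpy as np

a = np.array([])
b = np.array([])

# Comparing two empty arrays yields an *empty* boolean array, not a scalar:
result = a == b
print(result.size)   # 0

# Truth-testing that empty array is ambiguous; reducing it explicitly
# with .all() is vacuously True instead:
print(result.all())  # True

# The check numpy's warning suggests for detecting emptiness:
print(a.size > 0)    # False
```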
Reproduce this bug with:
import numpy as np
import pandas as pd
import datacompy

df1 = pd.DataFrame({"some_col": [np.array([]) for _ in range(10)], "id": [i for i in range(10)]})
df2 = pd.DataFrame({"some_col": [np.array([]) for _ in range(10)], "id": [i for i in range(10)]})
pdcompare = datacompy.Compare(df1, df2, join_columns="id")
print(pdcompare.report())
output:
DataComPy Comparison
--------------------
DataFrame Summary
-----------------
DataFrame Columns Rows
0 df1 2 10
1 df2 2 10
Column Summary
--------------
Number of columns in common: 2
Number of columns in df1 but not in df2: 0
Number of columns in df2 but not in df1: 0
Row Summary
-----------
Matched on: id
Any duplicates on match values: No
Absolute Tolerance: 0
Relative Tolerance: 0
Number of rows in common: 10
Number of rows in df1 but not in df2: 0
Number of rows in df2 but not in df1: 0
Number of rows with some compared columns unequal: 10
Number of rows with all compared columns equal: 0
Column Comparison
-----------------
Number of columns compared with some values unequal: 1
Number of columns compared with all values equal: 1
Total number of values which compare unequal: 10
Columns with Unequal Values or Types
------------------------------------
Column df1 dtype df2 dtype # Unequal Max Diff # Null Diff
0 some_col object object 10 0 0
Sample Rows with Unequal Values
-------------------------------
id some_col (df1) some_col (df2)
9 9 [] []
0 0 [] []
3 3 [] []
7 7 [] []
5 5 [] []
1 1 [] []
4 4 [] []
2 2 [] []
8 8 [] []
6 6 [] []
Hey @simonwongwong hope all is well! I'll take a closer look at this sometime this week. Thanks for bringing this up and opening up the issue.
@jborchma Just want to circle back on this. Thoughts on just checking if both are empty and throwing an exception? This might be something which is never encountered (comparing 2 empty dataframes)
So technically two empty dataframes should be equal. Maybe we could return True and a log message?
I'm aligned with that. I'll try and do a quick PR here.
@jborchma So it seems like @simonwongwong is comparing arrays here. So obviously empty arrays make sense. But the following also doesn't work
df1 = pd.DataFrame({"some_col": [np.array([1,2]) for _ in range(10)], "id": [i for i in range(10)]})
df2 = pd.DataFrame({"some_col": [np.array([1,2]) for _ in range(10)], "id": [i for i in range(10)]})
Mainly due to the fact that: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all(). Arrays of length 0 are an issue, 1 element works fine, and then >1 is also an issue.
I guess this isn't a simple change, but I'll look into it. We never really had a use case where the field was an array.
@simonwongwong Are you comparing a lot of np.arrays? (Could you have arrays of > 1 length?) I'd like to think about the use case a bit more if you have thoughts.
My use case was reading CSV files with empty arrays using pandas -- pandas will read arrays as numpy arrays and two empty numpy arrays cannot be equal
Makes sense. I think the issue boils down to how Pandas internalises the dtype for an array. It will be an O (Object). But so is a string, or any other item which isn't a base dtype that Pandas supports. We could check the first item of that column to see if it is a np.ndarray, but that seems really ugly to me. I'm open to other thoughts or suggestions. For your use case will it always be empty?
In my case it wasn't always empty.
Another option could be to convert them to Python lists instead of np.ndarray
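For what it's worth, a quick sketch of that suggestion: plain Python lists compare element-wise with ==, and two empty lists are equal, so converting before comparing sidesteps the ambiguity entirely.

```python
import numpy as np

# Two empty ndarrays compare ambiguously; as Python lists, == yields a plain bool.
print(np.array([]).tolist() == np.array([]).tolist())          # True
print(np.array([1, 2]).tolist() == np.array([1, 2]).tolist())  # True
print(np.array([1, 2]).tolist() == np.array([1, 3]).tolist())  # False
```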
@theianrobertson Thoughts on this issue? Dataframes with numpy arrays in columns.
So what Simon really wants is elementwise comparison of the arrays, right?
Yup that’s exactly it. Not an actual empty data frame.
I guess we would want to use something like the numpy function to compare arrays.
Yeah, on non-empty arrays it'll do a normal element-wise comparison, but empty np.arrays will never be equal
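One candidate for "the numpy function" mentioned here is np.array_equal: it returns a single bool, checks shapes, and does treat two empty arrays as equal.

```python
import numpy as np

print(np.array_equal(np.array([]), np.array([])))          # True: same shape, no elements differ
print(np.array_equal(np.array([1, 2]), np.array([1, 2])))  # True
print(np.array_equal(np.array([1, 2]), np.array([1, 3])))  # False
print(np.array_equal(np.array([1, 2]), np.array([])))      # False: shape mismatch
```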
The main issue is detecting if it is a np array, since it comes up as an object in pandas for the dtype. We'd need to inspect the actual element to know for sure what we are dealing with. But then the question comes up of whether we should support other things.
If you look at my above example I’m not sure datacompy will automatically work. It will complain and suggest any or all.
@jborchma Any further thoughts on this? I think the main issue is where and if you draw the line on things to compare.
@simonwongwong This was a while back now. I'm going to close this issue, but feel free to reopen if it seems like something we need to rehash. Trying to organize our backlog and work through some of these older issues if needed.
Hi everyone! This is a feature we're missing and I'm happy to spend some time implementing a solution (and also coming up with a proposal how to move forward, if you want).
Hey @jonashaag yes please. Would love contributions and thoughts from others. Happy to have you take this on. Appreciate you willing to help out. 🚀
Had a look into the implementation -- the actual column comparison code (columns_equal) seems rather inflexible and specifically built for use cases at Capital One. Here are two ideas for how to deal with the NumPy array issue:
A) Add new fixed logic for NumPy arrays: try to detect NumPy array columns by looking at the actual series values, and use .all() for NumPy arrays.
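A rough, hypothetical sketch of option A (compare_array_column is an invented name, not datacompy's actual code): peek at the first value of the series, and if it's an ndarray, fall back to pairwise np.array_equal, which also handles the empty-array case.

```python
import numpy as np
import pandas as pd

def compare_array_column(s1: pd.Series, s2: pd.Series) -> pd.Series:
    """Hypothetical helper: element-wise equality for object columns
    holding np.ndarray values, detected by inspecting the first item."""
    if len(s1) > 0 and isinstance(s1.iloc[0], np.ndarray):
        # np.array_equal returns a single bool per pair and treats two
        # empty arrays as equal, avoiding the ambiguous truth value.
        return pd.Series(
            [np.array_equal(a, b) for a, b in zip(s1, s2)], index=s1.index
        )
    return s1 == s2

s1 = pd.Series([np.array([]), np.array([1, 2])])
s2 = pd.Series([np.array([]), np.array([1, 3])])
print(compare_array_column(s1, s2).tolist())  # [True, False]
```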
B) Add a new system for custom declaration of "comparators", i.e. give more flexibility to the user to configure how columns are compared. We would ship a default configuration that mimics the current behavior, and users would be free to change the configuration to their liking. This could be as simple as giving a list of comparators that are tried in order until one of them "understands" the data, i.e. the user could pass something like:
columns_equal(..., comparators=[
FloatComparator(rtol=1e-3),
StringComparator(case_sensitive=False),
ArrayComparator(aggregate="all") # calls .all()
])
Or it could be an explicit list of comparators for each column, or something similar.
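To make the proposal concrete, here is a hypothetical, minimal version of the "try comparators in order" idea; the class and method names are invented for illustration and are not datacompy API.

```python
import numpy as np

class ArrayComparator:
    """Understands np.ndarray values; two empty arrays compare equal."""
    def understands(self, value):
        return isinstance(value, np.ndarray)

    def equal(self, a, b):
        return np.array_equal(a, b)

class DefaultComparator:
    """Fallback mimicking the current behavior: plain == comparison."""
    def understands(self, value):
        return True

    def equal(self, a, b):
        return a == b

def values_equal(a, b, comparators):
    # Try each comparator in order until one "understands" the data.
    for comp in comparators:
        if comp.understands(a):
            return comp.equal(a, b)
    raise TypeError("no comparator understands the value")

comparators = [ArrayComparator(), DefaultComparator()]
print(values_equal(np.array([]), np.array([]), comparators))  # True
print(values_equal("a", "a", comparators))                    # True
```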
@jonashaag I'll take a look at this on Monday. Been on vacation all week. Thanks for your help with this. I do think datacompy could be ready for a major refactor, to be honest, especially aligning the Spark and Pandas APIs.
Reopening this issue. I like the idea of option B @jonashaag. But that seems like a bit of a refactor and something I've been thinking about with the package. I'd like to revisit it and see if there are opportunities to make it more flexible and also play nicer with Spark/Pandas all in one spot. I was thinking maybe koalas might be a good option here. Option A would be the quickest and would solve this direct issue immediately.
Thoughts? @jonashaag @jborchma @elzzhu @theianrobertson ?
I have little experience with Spark and I'm not sure if I'll be able to invest the learning time right now.
That is perfectly fine, that is something I can lean into.