Faisal
I'm personally leaning towards the class implementation. To me it seems a bit cleaner and more compartmentalized.
Some other things we've been thinking about and discussing:
- Dropping fugue support, and by extension: dask, duckdb, and ray. Not seeing a lot of usage or support for some of...
Documenting some of the changes I'm seeing so far:
- https://spark.apache.org/docs/4.0.0/sql-ref-ansi-compliance.html#cast (it seems `spark.sql.ansi.enabled` needs to be set to `false`; see the sketch below)
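Since Spark 4.0 turns ANSI mode on by default, a minimal sketch of opting back out when building the session (the app name is just illustrative):

```python
from pyspark.sql import SparkSession

# Spark 4.0 defaults spark.sql.ansi.enabled to true; setting it to
# "false" restores the pre-4.0 cast behaviour.
spark = (
    SparkSession.builder
    .appName("datacompy-spark4")  # illustrative name, not from the repo
    .config("spark.sql.ansi.enabled", "false")
    .getOrCreate()
)
```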
@shreya-goddu So this is a known issue (point 2), mainly because the legacy SparkCompare you are using [drops duplicates](https://github.com/capitalone/datacompy/blob/afeed8b5aa7160b1ed4b3b6aeaa73988a1af2fe8/datacompy/legacy.py#L204-L206) before it does anything else. This was one of the reasons...
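To illustrate the effect (a hedged sketch, not datacompy's exact code path): because `dropDuplicates()` runs first, duplicated rows never reach the comparison, so counts can differ from what you'd expect:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# dropDuplicates() removes repeated rows before any comparison runs,
# so the duplicated row never makes it to the compare step.
df = spark.createDataFrame([(1, "a"), (1, "a"), (2, "b")], ["id", "val"])
print(df.count())                   # 3
print(df.dropDuplicates().count())  # 2 -- one (1, "a") row is dropped
```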
For point 1: for array types, I don't know if we support those in our compare logic, so it isn't surprising it says `[1,2]` doesn't equal `[2,1]`. This...
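If order-insensitive equality is what you're after, one workaround (a sketch, not something datacompy does for you) is to sort the arrays before comparing:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([([1, 2], [2, 1])], ["a", "b"])
df.select(
    (F.col("a") == F.col("b")).alias("raw_eq"),                   # false: order differs
    (F.array_sort("a") == F.array_sort("b")).alias("sorted_eq"),  # true once sorted
).show()
```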
I'm closing this issue as the behaviour is expected. If you'd like to discuss further, please re-open and tag me.
@achrusciel thanks for the feature request. Happy to accept a PR if you have the time to contribute. I can also take a closer look at this in a bit...
FYI, we have a PR for Snowpark support: #333. Another option I was thinking about is looking into https://ibis-project.org/ (cc: @gforsyth 😄)
#333 has been merged. Next release should have support.
@rhaffar We will need to exclude test_snowflake.py from the pytest calls since it will cause everything to fail. We need to add `--ignore=path/to/file.py` when calling pytest in GitHub Actions.
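For example (assuming the test file lives under `tests/`; adjust the path to wherever it actually sits in the repo):

```
python -m pytest --ignore=tests/test_snowflake.py
```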