deepdiff
DeepHash fails on 0-d np.array
I really like DeepDiff/DeepHash!
But today I think I found a bug:
import numpy as np
from deepdiff import DeepHash
a = np.array(2)
DeepHash(a)
Traceback (most recent call last):
File "/home/volker/workspace/PYTHON5/zeiss_caching/zeiss_caching/sandbox.py", line 23, in <module>
DeepHash(a)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/deephash.py", line 190, in __init__
self._hash(obj, parent=parent, parents_ids=frozenset({get_id(obj)}))
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/deephash.py", line 483, in _hash
result, counts = self._prep_iterable(obj=obj, parent=parent, parents_ids=parents_ids)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/deephash.py", line 383, in _prep_iterable
for i, item in enumerate(obj):
TypeError: iteration over a 0-d array
Expected behavior: a hash should be created.
OS, DeepDiff version and Python version (please complete the following information):
- OS: Debian Bullseye
- Python: 3.9.2
- DeepDiff: 5.8.1
Additional context: DeepDiff also behaves oddly with 0-d arrays.
a=np.array(2)
b=np.array(2)
print(DeepDiff(a, b))
{}
a=np.array(2)
b=np.array(3)
print(DeepDiff(a, b))
Traceback (most recent call last):
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 296, in __init__
self._diff(root, parents_ids=frozenset({id(t1)}), _original_type=_original_type)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 1348, in _diff
self._diff_numpy_array(level, parents_ids)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 1223, in _diff_numpy_array
self._diff_iterable_in_order(new_level, parents_ids, _original_type=_original_type)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 669, in _diff_iterable_in_order
for (i, j), (x, y) in self._get_matching_pairs(level):
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 619, in _get_matching_pairs
return self._compare_in_order(level)
File "/home/volker/workspace/venvs/zeiss-caching-O3myIkpD-py3.9/lib/python3.9/site-packages/deepdiff/diff.py", line 603, in _compare_in_order
zip_longest(
TypeError: iteration over a 0-d array
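For context, here is my own minimal reproduction of the underlying NumPy behavior (plain NumPy, no deepdiff involved): a 0-d array reports ndim == 0 and refuses iteration, which is exactly what both tracebacks run into, while the scalar value is still reachable via .item().

```python
import numpy as np

a = np.array(2)          # 0-d array: ndim == 0, shape == ()
print(a.ndim, a.shape)   # 0 ()

# 0-d arrays are not iterable -- this is the TypeError in both tracebacks:
try:
    iter(a)
except TypeError as e:
    print(e)             # iteration over a 0-d array

# The wrapped scalar is still accessible as a plain Python value:
print(a.item())          # 2
```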
Cheers, Volker
@volkerjaenisch Thanks for reporting the bug. I will take a look at it when I have a chance. Of course PRs are very welcome if you would like to contribute a solution.
dear Sep!
Thank you for digging into this bug. I am handicapped by a mountain-bike accident and will (typing with my left hand) not come up with a PR soon.
Cheers,
Volker
oh, sorry to hear that. That sucks. I hope you will heal soon and be back on the bike.
Thanks for your wishes!
Your code is crucial for one of our really important customers. Currently this bug is not surfacing in development, but in one of our tests. And to be honest, it was not the test itself but an error in the test that triggered the 0-d array.
So this may be a corner case that never surfaces in practice. I thought it a good idea to mention it anyway, for completeness.
So keep your hair on and do not rush. And enough left-hand typing for today.
Cheers,
Volker
I am also hitting the same issue while trying to calculate the DeepHash of a model object during training.
Until a formal fix is released, is there a quick hack that I could apply on my side to solve this issue @seperman?
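Not a maintainer answer, but one workaround sketch: unwrap 0-d arrays to plain Python scalars before handing them to DeepHash/DeepDiff, which handle scalars fine. The helper name unwrap_0d below is my own invention, not deepdiff API:

```python
import numpy as np

def unwrap_0d(obj):
    """Hypothetical pre-processing helper, not part of deepdiff:
    convert a 0-d numpy array to the plain Python scalar it wraps,
    and pass everything else through unchanged."""
    if isinstance(obj, np.ndarray) and obj.ndim == 0:
        return obj.item()   # e.g. np.array(2) -> 2
    return obj

print(unwrap_0d(np.array(2)))   # 2, a plain int
print(unwrap_0d([1, 2, 3]))     # non-0-d inputs are untouched
```

You would then call, e.g., DeepHash(unwrap_0d(a)) instead of DeepHash(a). For nested structures the helper would have to be applied recursively; np.atleast_1d is an alternative if you prefer to keep an (iterable) array rather than a scalar.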