phidl
Implement a more robust rounding algorithm (no magic number)
I have checked this on large designs and found it to work more robustly.
The previous algorithm failed when creating devices on different machines: the point arrays were essentially identical, but the rounding step caused the resulting hash values to differ.
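To illustrate the failure mode described above, here is a minimal, hypothetical sketch (not phidl's actual code): two point arrays that differ only by float noise land on opposite sides of a rounding boundary, so the scale-and-cast step produces different integers and therefore different hashes.

```python
import hashlib
import numpy as np

precision = 1e-4

# Two "basically identical" arrays, e.g. produced on different machines;
# b carries a tiny amount of floating-point noise.
a = np.array([1e-4, 1.0])
b = np.array([1e-4 - 1e-12, 1.0])

def hash_points(points, precision):
    # Implicit rounding via an integer cast (truncates toward zero),
    # as in the previous algorithm.
    ints = (points / precision).astype(np.int64)
    return hashlib.sha1(ints.tobytes()).hexdigest()

# a[0]/precision is exactly 1.0, but b[0]/precision is just below 1.0,
# so truncation yields 1 vs 0 and the hashes disagree.
print(hash_points(a, precision) == hash_points(b, precision))  # False
```

This is why near-boundary coordinates can make otherwise identical layouts hash differently across environments.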
Let me know if you are interested in merging this PR, then I will update the hashes in the tests.
I like the idea of a more robust hashing algorithm, but can you explain what's different here? As far as I can tell, the only difference is that it uses np.array.round(points)
instead of implicit rounding by casting the array type to Int64. I don't think there's a meaningful difference between the two operations. Are you sure this is really more robust? Are there any tests you can run to demonstrate that it is?
This code also has the side effect that only precisions of the form 10^x (0.1, 1e-4, etc.) are possible. I don't think that's a huge deal, but it is a reduction in functionality.
I have tried this on our internal code base and found it produced more reliable hashes when the environment changed.
I will try to come up with a simple reproducible example and test (soonish).
How about rounding to 1nm precision?