Python array API support
The aim of this pull request is to make ODL multi-backend via the Python Array API.
Would it be a lot of work to put the weighting hierarchy in its own PR? I think there are a bunch of questions to be discussed there, and it should be checked thoroughly whether this really works, independent of the Python Array API generalizations.
What certainly needs clarification (if not changes) is the point about the methods being "mutually exclusive". It makes some sense from the definition side (we don't want conflicting semantics between e.g. distance and norm). But surely, from the use side we should then have all of them available, as far as mathematically possible? In the most typical use case (say, $L^2$) they can all be derived from the inner product, so it should be sufficient to provide only that, or alternatively the weights (which can then be used to implement the inner product first via the array API, and then everything else from that).
But then there are also cases (Banach spaces) where there really is no inner product, and we'd need either weights and an exponent, or a custom norm, which also gives rise to a distance. In principle we could have only a distance and nothing else – a metric space – but I'm not sure if that would ever crop up, given that the TensorSpace class already requires a vector-space structure.
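To illustrate the derivation chain discussed above, here is a minimal sketch (names are illustrative, not ODL's actual API) of how norm and distance can both be derived from a single supplied inner product, so that providing only `inner` (or only weights) would suffice:

```python
import math

def make_space_methods(inner):
    """Derive norm and dist from a user-supplied inner product."""
    def norm(x):
        # ||x|| = sqrt(<x, x>)
        return math.sqrt(inner(x, x))

    def dist(x, y):
        # d(x, y) = ||x - y||
        return norm([xi - yi for xi, yi in zip(x, y)])

    return norm, dist

# Example: a weighted L^2-style inner product on plain lists.
weights = [1.0, 2.0, 3.0]
inner = lambda x, y: sum(w * a * b for w, a, b in zip(weights, x, y))
norm, dist = make_space_methods(inner)
```

For a Banach space, the same pattern would start one level lower: a custom norm (or weights plus an exponent) would be supplied directly, and only the distance would be derived from it.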
About the fact that the weighting refactoring should be in its own PR: I chose to add it here because, for me, the Python Array API allows decoupling the backend classes (which will only add a few attributes to the Weighting) from the abstract class. As I am doing this for the TensorSpace and TensorSpaceElement classes, I thought it would fit :) As for the behaviour, I am not adding anything: the classes are refactored but the functionality remains unchanged. I'm happy to open the discussion on the Banach spaces and agree to create a separate PR for that :)
I did not mean to click "ready for review"
> the classes are refactored but the functionality remains unchanged
Well, for one thing, I don't think the current state actually works correctly for a simple inner-product space. In the inner branch of the initialization, you're only overriding the __inner attribute, but __norm stays at its default value. Thus calling norm on that weighting will go through the default unit weight and array-norm, which will in general be inconsistent with the custom inner product.
Pretty sure all that could be fixed, but the thing is, it's not so clear-cut to say the functionality remains unchanged (and whether that even makes sense) given that the decision logic is set up in a different way now, with a dict-of-methods instead of inheritance. And this would be best investigated in a separate PR.
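To make the consistency issue concrete, here is a hypothetical sketch (class and attribute names are illustrative, not the actual ODL Weighting implementation) of the difference between overriding only the inner product versus deriving the norm from it:

```python
import math

def default_norm(x):
    # Unit-weight array norm used as the fallback.
    return math.sqrt(sum(a * a for a in x))

class BrokenWeighting:
    """Overrides only inner; norm silently stays at the default."""
    def __init__(self, inner):
        self.inner = inner
        self.norm = default_norm  # inconsistent with the custom inner!

class ConsistentWeighting:
    """Derives norm from the supplied inner product."""
    def __init__(self, inner):
        self.inner = inner
        self.norm = lambda x: math.sqrt(inner(x, x))

inner = lambda x, y: sum(2.0 * a * b for a, b in zip(x, y))  # weight 2
broken = BrokenWeighting(inner)
consistent = ConsistentWeighting(inner)
# broken.norm([1.0]) gives 1.0, but sqrt(inner([1.0], [1.0])) is sqrt(2);
# consistent.norm([1.0]) agrees with the inner product.
```

The same inconsistency can arise however the dispatch is organised (dict-of-methods or inheritance); the point is that whichever method is supplied, the derived ones must be rebuilt from it rather than left at their defaults.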
Okay, got you :) I should have checked the behaviour better.
However I don't really understand the point about splitting the PRs. I looked at your PyTorch backend PR: the commits are mixed, and I think it's still readable. I am also not sure that you ran the tests that would have spotted the kind of errors we all make while coding. I told you this is a WIP; you told me to make a PR, and now you say that it's not thoroughly checked.
I guess we should align in terms of how we push changes to this repo. How about discussing live on Friday?
No critique meant. We're moving in the right direction, I just keep underestimating the amount of individual changes and possible implications. The more we separate concerns, the better we can ensure that each change is safe and won't cause more problems further down the road than it fixes.
- Deprecating the test that relied on NumPy mechanisms to perform arithmetic operations between product spaces and lists. In my opinion, this should not be supported. This behaviour does not seem to be relied on anywhere in the codebase. What does @leftaroundabout think about it?
I suppose you mean test_unary_ops and test_operators? (BTW commenting out a test case isn't really "deprecating" it)
Well, we certainly want -x and x + y and so on to work when x and y are elements of a pspace. This has nothing to do with NumPy, and it should be tested for both NumPy and PyTorch.
Sure, deprecating is not the right word; you know what I mean though.
> perform arithmetic operations between product spaces and lists

So not when x and y are elements of a pspace.
Ah, like x + [4,5,6] for x∈ℝ³×ℝ³×ℝ³? Where exactly is/was that tested?
I'm not sure if it should be supported. I tend to say no, it would be better to require it to be written x + my_pspace.element([4,5,6]).
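A minimal sketch of that proposed behaviour (class names are hypothetical, not ODL's actual implementation): arithmetic works between product-space elements, but a raw list must be wrapped via the space's element() factory first.

```python
class PSpaceElement:
    """Toy product-space element holding a list of component parts."""
    def __init__(self, space, parts):
        self.space = space
        self.parts = list(parts)

    def __add__(self, other):
        if not isinstance(other, PSpaceElement):
            # Reject raw lists: require explicit wrapping.
            raise TypeError(
                "wrap the operand first, e.g. x + my_pspace.element([4, 5, 6])"
            )
        return PSpaceElement(
            self.space, [a + b for a, b in zip(self.parts, other.parts)]
        )

class PSpace:
    """Toy product space with an element() factory."""
    def element(self, parts):
        return PSpaceElement(self, parts)

my_pspace = PSpace()
x = my_pspace.element([1, 2, 3])
y = x + my_pspace.element([4, 5, 6])   # ok: both operands are elements
# x + [4, 5, 6]                        # would raise TypeError
```

This keeps element–element arithmetic backend-independent while making the list case an explicit error rather than relying on NumPy coercion.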
See commit https://github.com/odlgroup/odl/commit/2e59ad3da7ac198f3688dee868eed38036e5b765 :)
Ah, now I get it! Yes, y_arr = [op(x_) for x_ in x_arr] is much better than the old version.
An updated, ready-to-merge version of this PR is now https://github.com/odlgroup/odl/pull/1697, which contains all of the changes originally proposed here. I am closing this PR in favour of that.