impl for ArrayBase instead of primitives?
I have noticed that the various linear algebra routines are implemented for primitive types rather than for the array types (ArrayBase, in this case). What is your reasoning behind that?
In terms of types I would think that the routines should act on ArrayBase. But since they are implemented for the primitives, one gets the impression that they act on, well, the primitives.
I've been meaning to experiment with it more, but I agree that providing methods on the matrices themselves is probably a cleaner interface.
My original thinking was that providing the traits on scalars would make it easier to write generic code, but that might have been a misconception based on my understanding of traits at the time.
In the matrix-trait branch, I'm experimenting with a LinxalMatrix trait that adds methods to ArrayBase implementing all of the functionality provided in linxal. Right now it's based on the compute-style methods. I plan to add a LinxalOwnedMatrix trait that would implement compute_into-style methods as well, but first I wanted to get a feel for how the API would look.
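To make the two styles concrete, here is a toy sketch of the method-style interface the LinxalMatrix experiment is after. The Matrix type and the method names below are hypothetical stand-ins, not linxal's actual API:

```rust
// Toy stand-in for ndarray's matrix type; not the real ArrayBase.
struct Matrix {
    data: Vec<f64>,
    rows: usize,
    cols: usize,
}

// The compute style reads roughly as `SVD::compute(&m)`, with the trait
// implemented on the scalar type. An extension trait instead puts the
// routine on the matrix itself:
trait LinxalMatrixLike {
    fn singular_values(&self) -> Vec<f64>;
}

impl LinxalMatrixLike for Matrix {
    fn singular_values(&self) -> Vec<f64> {
        // Placeholder result; a real implementation would call into LAPACK.
        vec![0.0; self.rows.min(self.cols)]
    }
}

fn main() {
    let m = Matrix { data: vec![1.0, 0.0, 0.0, 1.0], rows: 2, cols: 2 };
    let sv = m.singular_values(); // method-style call site
    assert_eq!(sv.len(), 2);
}
```

The method-style call site reads naturally and is discoverable via autocomplete, which is much of the appeal of moving the interface onto the matrices.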
Please let me know what you think.
I like that there is now a unified interface to all the matrix routines.
If I see things correctly, you still implement the various operations for the PODs. Why go this route instead of, e.g., writing impl SVD for ArrayBase<S, D> and then specifying the underlying primitives via S: Data<Elem=$ty>?
EDIT: Not necessarily the same, but similarly to the way ndarray implements the Dot trait? https://github.com/bluss/rust-ndarray/blob/master/src/linalg/impl_linalg.rs#L192
ndarray delegates to the right BLAS methods at runtime, but instead of A: LinalgScalar one could implement this for Elem=f32 and Elem=f64.
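One way to get an ArrayBase-level interface while still dispatching to per-primitive routines is to keep a scalar trait underneath a single generic impl. The sketch below uses simplified stand-ins for ndarray's storage and dimension machinery, and the LapackScalar trait is an illustrative name, not linxal's actual API:

```rust
use std::marker::PhantomData;

// Simplified stand-ins for ndarray's Data/ArrayBase; names and shapes only
// approximate the real crate.
trait Data {
    type Elem;
}
struct OwnedRepr<A>(Vec<A>);
impl<A> Data for OwnedRepr<A> {
    type Elem = A;
}
struct Ix2; // two-dimensional marker
struct ArrayBase<S: Data, D> {
    data: S,
    dim: PhantomData<D>,
}

// Per-primitive dispatch lives in a scalar trait: each element type selects
// its LAPACK backend (sgesdd for f32, dgesdd for f64).
trait LapackScalar: Copy {
    const ROUTINE: &'static str;
}
impl LapackScalar for f32 {
    const ROUTINE: &'static str = "sgesdd";
}
impl LapackScalar for f64 {
    const ROUTINE: &'static str = "dgesdd";
}

// A single impl on ArrayBase then covers every supported element type.
// Writing two separate blanket impls bounded by S: Data<Elem = f32> and
// S: Data<Elem = f64> would instead trip over Rust's coherence rules, which
// is one argument for keeping a scalar trait underneath the matrix-level
// interface.
trait Svd {
    fn svd_routine(&self) -> &'static str;
}
impl<A: LapackScalar, S: Data<Elem = A>> Svd for ArrayBase<S, Ix2> {
    fn svd_routine(&self) -> &'static str {
        A::ROUTINE
    }
}

fn main() {
    let a: ArrayBase<OwnedRepr<f32>, Ix2> =
        ArrayBase { data: OwnedRepr(vec![1.0, 0.0, 0.0, 1.0]), dim: PhantomData };
    assert_eq!(a.svd_routine(), "sgesdd");
}
```

Under this arrangement the matrix-level trait is the public surface, while the element type picks the concrete routine at compile time.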
I'm still finalizing the interface (I haven't added mutable or consumable versions yet, for instance), so even after I release it I would still anticipate having the scalar traits around for a version or two while I de-emphasize them in the docs and possibly deprecate them.
I do want to make sure that it is very easy to write application code that generalizes across scalar types. That's why a trait like LinxalScalar will still be used even after a potential shift to LinxalMatrix and its variants; defining the routines directly in terms of f32, etc., would make that harder, I believe, but it requires more experimentation on my part.
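The kind of application code that motivates keeping a scalar trait might look like the following. The LinxalScalarLike bound and frobenius_norm_sq function are illustrative inventions, not linxal's real trait or API:

```rust
use std::iter::Sum;
use std::ops::Mul;

// Illustrative stand-in for a LinxalScalar-style bound: the operations the
// generic code needs, implemented once per primitive type.
trait LinxalScalarLike: Copy + Mul<Output = Self> + Sum {}
impl LinxalScalarLike for f32 {}
impl LinxalScalarLike for f64 {}

// One generic function serves both f32 and f64 data. Had the routines been
// tied directly to f32 and f64, callers would need a copy per scalar type.
fn frobenius_norm_sq<T: LinxalScalarLike>(m: &[T]) -> T {
    m.iter().map(|&x| x * x).sum()
}

fn main() {
    let a32: Vec<f32> = vec![1.0, 2.0];
    let a64: Vec<f64> = vec![1.0, 2.0];
    assert_eq!(frobenius_norm_sq(&a32), 5.0f32);
    assert_eq!(frobenius_norm_sq(&a64), 5.0f64);
}
```

This is the generality the scalar trait preserves: downstream code bounds on one trait and works unchanged across element types.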
Version 0.6.0 provides the LinxalMatrix and LinxalMatrixInto traits, which supply the computational routines on the matrices themselves.