MulAssign on MatrixViewMut does not work.
I'm trying to run the following:
use std::ops::MulAssign;
use nalgebra::Const;
use nalgebra::DMatrix;
use nalgebra::Matrix;
fn main() {
    let identity = nalgebra::Matrix4::<f32>::identity();

    let mut matrix = Matrix::<f32, Const<8>, Const<4>, _>::zeros();
    let mut view = matrix.view_mut((0, 0), (5, 4));
    view.mul_assign(identity); // <-- doesn't work

    let mut points = DMatrix::zeros(8, 5);
    let mut view = points.view_mut((0, 0), (5, 4));
    view.mul_assign(identity); // <-- works
}
The first example fails to compile with the following error:
error[E0308]: mismatched types
--> src/main.rs:12:21
|
12 | view.mul_assign(identity); // <-- doesn't work
| ---------- ^^^^^^^^ expected `f32`, found `Matrix<f32, Const<4>, Const<4>, ArrayStorage<f32, 4, 4>>`
| |
| arguments to this method are incorrect
|
= note: expected type `f32`
found struct `Matrix<f32, Const<4>, Const<4>, ArrayStorage<f32, 4, 4>>`
note: method defined here
--> /home/hgaiser/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/arith.rs:877:8
|
877 | fn mul_assign(&mut self, rhs: Rhs);
| ^^^^^^^^^^
The DMatrix example compiles, but not in the way you expect. Printing the types of the variables makes this clear:
matrix: nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::Const<8>, nalgebra::base::dimension::Const<4>, nalgebra::base::array_storage::ArrayStorage<f32, 8, 4>>
points: nalgebra::base::matrix::Matrix<nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::Const<4>, nalgebra::base::dimension::Const<4>, nalgebra::base::array_storage::ArrayStorage<f32, 4, 4>>, nalgebra::base::dimension::Dyn, nalgebra::base::dimension::Dyn, nalgebra::base::vec_storage::VecStorage<nalgebra::base::matrix::Matrix<f32, nalgebra::base::dimension::Const<4>, nalgebra::base::dimension::Const<4>, nalgebra::base::array_storage::ArrayStorage<f32, 4, 4>>, nalgebra::base::dimension::Dyn, nalgebra::base::dimension::Dyn>>
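For reference, types like these can be printed with `std::any::type_name`. A minimal std-only sketch (no nalgebra required); `type_of` is a hypothetical helper, not part of any library:

```rust
// Hypothetical helper: returns the compiler's name for a value's type.
fn type_of<T>(_: &T) -> &'static str {
    std::any::type_name::<T>()
}

fn main() {
    let x = 1.0f32;
    let v = vec![x; 3];
    println!("{}", type_of(&x)); // "f32"
    println!("{}", type_of(&v)); // something like "alloc::vec::Vec<f32>"
}
```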
points is a matrix whose elements are Matrix4<f32>, not f32 as you might expect. So in both cases mul_assign() performs a Matrix x Scalar multiplication. If you change the second example to DMatrix::<f32>::zeros(8, 5), both examples fail in the same way.
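The inference step can be reproduced without nalgebra. Here is a toy std-only sketch (the `Mat` type and its `zeros` constructor are illustrative stand-ins, not nalgebra API) of a Matrix x Scalar `MulAssign` whose right-hand side is the element type itself:

```rust
use std::ops::MulAssign;

// Toy stand-in for a matrix type: the Rhs of mul_assign is the
// element type T itself, mirroring nalgebra's Matrix x Scalar impl.
struct Mat<T>(Vec<T>);

impl<T: Default + Clone> Mat<T> {
    fn zeros(n: usize) -> Self {
        Mat(vec![T::default(); n])
    }
}

impl<T: MulAssign<T> + Clone> MulAssign<T> for Mat<T> {
    fn mul_assign(&mut self, rhs: T) {
        for e in &mut self.0 {
            *e *= rhs.clone();
        }
    }
}

fn main() {
    // No element type is annotated, so `mul_assign(2.0f32)` forces
    // T = f32 -- just as `mul_assign(identity)` forces the DMatrix
    // element type to Matrix4<f32> in the second example above.
    let mut m = Mat::zeros(4);
    m.mul_assign(2.0f32);
    assert_eq!(m.0, vec![0.0f32; 4]);
}
```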
If my understanding of the relevant nalgebra internals is correct, the immediate problem is that mul_assign() requires the underlying storage to implement IsContiguous, but ViewStorageMut only implements it for single-column views.
There is a TODO comment about removing the IsContiguous requirement from the MulAssign implementations; I am not sure why it is still there.
The real problem, though, is the DefaultAllocator bound. It can be "fixed" like this:
diff --git a/src/base/ops.rs b/src/base/ops.rs
index 34ef59f22c6f..c71691c742e1 100644
--- a/src/base/ops.rs
+++ b/src/base/ops.rs
@@ -17,7 +17,6 @@ use crate::base::dimension::{Dim, DimMul, DimName, DimProd, Dyn};
use crate::base::storage::{Storage, StorageMut};
use crate::base::uninit::Uninit;
use crate::base::{DefaultAllocator, Matrix, MatrixSum, OMatrix, Scalar, VectorView};
-use crate::storage::IsContiguous;
use crate::uninit::{Init, InitStatus};
use crate::{RawStorage, RawStorageMut, SimdComplexField};
use std::mem::MaybeUninit;
@@ -634,13 +633,16 @@ where
R2: Dim,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C1>,
- SA: StorageMut<T, R1, C1> + IsContiguous + Clone, // TODO: get rid of the IsContiguous
+ SA: StorageMut<T, R1, C1>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
- DefaultAllocator: Allocator<R1, C1, Buffer<T> = SA>,
+ DefaultAllocator: Allocator<R1, C1>,
{
#[inline]
fn mul_assign(&mut self, rhs: Matrix<T, R2, C1, SB>) {
- *self = &*self * rhs
+ let product = &*self * rhs;
+ self.iter_mut()
+ .zip(product.iter())
+ .for_each(|(a, b)| *a = b.clone());
}
}
@@ -651,14 +653,17 @@ where
R2: Dim,
T: Scalar + Zero + One + ClosedAddAssign + ClosedMulAssign,
SB: Storage<T, R2, C1>,
- SA: StorageMut<T, R1, C1> + IsContiguous + Clone, // TODO: get rid of the IsContiguous
+ SA: StorageMut<T, R1, C1>,
ShapeConstraint: AreMultipliable<R1, C1, R2, C1>,
// TODO: this is too restrictive. See comments for the non-ref version.
- DefaultAllocator: Allocator<R1, C1, Buffer<T> = SA>,
+ DefaultAllocator: Allocator<R1, C1>,
{
#[inline]
fn mul_assign(&mut self, rhs: &'b Matrix<T, R2, C1, SB>) {
- *self = &*self * rhs
+ let product = &*self * rhs;
+ self.iter_mut()
+ .zip(product.iter())
+ .for_each(|(a, b)| *a = b.clone());
}
}
But such a fix would degrade performance for large dynamic matrices: the code would copy every element instead of simply swapping the two underlying buffers.
I think a proper fix requires introducing a trait with a function that moves/replaces all elements in the fastest possible way for a given pair of storage types.
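To sketch what such a trait might look like: a std-only toy (all names here are hypothetical, not nalgebra API) where an owned Vec-backed "storage" can take the result by an O(1) swap, while a borrowed view falls back to element-wise writes:

```rust
// Hypothetical trait: replace a destination's contents with a freshly
// computed buffer, as fast as the storage pair allows.
trait ReplaceContents<Src> {
    fn replace_contents(&mut self, src: Src);
}

// An owned Vec-backed buffer can just swap allocations: O(1), no copies.
impl<T> ReplaceContents<Vec<T>> for Vec<T> {
    fn replace_contents(&mut self, mut src: Vec<T>) {
        std::mem::swap(self, &mut src);
    }
}

// A borrowed mutable view cannot take ownership of the allocation,
// so it must fall back to element-wise writes.
impl<'a, T> ReplaceContents<Vec<T>> for &'a mut [T] {
    fn replace_contents(&mut self, src: Vec<T>) {
        for (dst, s) in self.iter_mut().zip(src) {
            *dst = s;
        }
    }
}

fn main() {
    let mut owned = vec![0u32; 3];
    owned.replace_contents(vec![1, 2, 3]); // swaps buffers

    let mut buf = [0u32; 3];
    let mut view: &mut [u32] = &mut buf;
    view.replace_contents(vec![4, 5, 6]); // writes element by element

    assert_eq!(owned, [1, 2, 3]);
    assert_eq!(buf, [4, 5, 6]);
}
```

The dispatch on the storage pair is exactly what the current `*self = &*self * rhs` gets for free from the allocator bound, and what the element-wise patch above gives up.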