nalgebra
Custom floating point type for Matrices/Vectors
Hi,
I'm trying to use custom floating-point types (essentially a wrapper that tracks some additional information), but I can't figure out which trait I should implement so that my Foo type can be used in a Matrix/Vector type. Ideally, I would like the following to be possible:
use nalgebra::Vector3;

#[derive(Debug, Clone, Copy)]
pub struct Foo<T>(T);

fn main() {
    type Vector3Foo = Vector3<Foo<f32>>;
    let x = Vector3Foo::new(Foo(2.0), Foo(2.0), Foo(2.0));
    let b = x * 2.0;
    println!("{}", b);
}
For the sake of simplicity, I focus only on the Mul trait here, implementing the following traits:
impl<T: Scalar> PartialEq for Foo<T> {
    fn eq(&self, other: &Self) -> bool {
        self.0 == other.0
    }
}

impl<T: Scalar + Mul<Output = T>> std::ops::Mul<Foo<T>> for Foo<T> {
    type Output = Foo<T>;

    fn mul(self, rhs: Foo<T>) -> Self::Output {
        Foo(self.0 * rhs.0)
    }
}

impl<T: Scalar + Mul<Output = T>> std::ops::Mul<T> for Foo<T> {
    type Output = Foo<T>;

    fn mul(self, rhs: T) -> Self::Output {
        Foo(self.0 * rhs)
    }
}

impl Mul<Foo<f32>> for f32 {
    type Output = Foo<f32>;

    fn mul(self, rhs: Foo<f32>) -> Self::Output {
        Foo(self * rhs.0)
    }
}
The above main (i.e. the multiplication by 2.0) still doesn't compile. Is there a way to achieve what I want without creating yet another wrapper around Vector3Foo and reimplementing all the Mul traits?
Best, Philippe
Hi! The best you can do is to also implement MulAssign for Foo and perform the multiplication as let b = x * Foo(2.0):
impl std::ops::MulAssign<Foo<f32>> for Foo<f32> {
    fn mul_assign(&mut self, rhs: Foo<f32>) {
        self.0 *= rhs.0;
    }
}

fn main() {
    type Vector3Foo = Vector3<Foo<f32>>;
    let x = Vector3Foo::new(Foo(2.0), Foo(2.0), Foo(2.0));
    let b = x * Foo(2.0);
    println!("{:?}", b);
}
nalgebra does not have any implementation of impl Mul<T1> for Matrix<T2, R, C> when T1 and T2 are different types (Foo<f32> and f32 in your case), even if they are multipliable. I don't think this kind of implementation can be added at all (perhaps it could be possible with specialization?) because it could lead to conflicting impls with matrix multiplication (it would conflict with the impl Mul<Matrix<T, R, C>> for Matrix<T, R, C>).
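The conflict can be reproduced with a small, std-only sketch; Mat here is a hypothetical one-element stand-in for Matrix<T, R, C>. Coherence checking ignores where-clauses, so the two impls overlap at T1 = Mat<T> even though the bounds could never both be satisfied:

```rust
use std::ops::Mul;

#[derive(Clone, Copy, Debug, PartialEq)]
struct Mat<T>(T); // hypothetical stand-in for Matrix<T, R, C>

// Scalar multiplication with a possibly different scalar type T1.
impl<T1, T2: Mul<T1, Output = T2> + Copy> Mul<T1> for Mat<T2> {
    type Output = Mat<T2>;
    fn mul(self, rhs: T1) -> Mat<T2> {
        Mat(self.0 * rhs)
    }
}

// Matrix multiplication. Uncommenting this triggers error[E0119]
// (conflicting implementations): the blanket impl above already
// covers the case T1 = Mat<T>, and coherence does not look at the
// where-clauses to rule that case out.
// impl<T: Mul<Output = T> + Copy> Mul<Mat<T>> for Mat<T> {
//     type Output = Mat<T>;
//     fn mul(self, rhs: Mat<T>) -> Mat<T> {
//         Mat(self.0 * rhs.0)
//     }
// }

fn main() {
    // The scalar impl alone works fine, including with mixed types.
    let m = Mat(3.0_f64) * 2.0;
    assert_eq!(m, Mat(6.0));
}
```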
Also note that, because of the orphan rules, it is not possible for nalgebra to support let b = Foo(2.0) * vector (with a custom scalar type on the left-hand side of the multiplication). Scalars on the left-hand side are only supported for primitive types.
I see, then I suppose I'll have to re-implement my own wrapper on the type to allow this kind of manipulation. Thanks for the quick answer!
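For reference, a minimal std-only sketch of such a wrapper. FooVec3 and the plain [Foo; 3] inside it are hypothetical stand-ins for a newtype around Vector3Foo; the point is that, since the wrapper is a local type, the orphan rules allow scalar multiplication on both sides:

```rust
use std::ops::Mul;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Foo(f32); // simplified, non-generic version of the Foo above

// Hypothetical newtype wrapper; a plain [Foo; 3] stands in for
// nalgebra's Vector3<Foo> so the sketch is self-contained.
#[derive(Debug, Clone, Copy, PartialEq)]
struct FooVec3([Foo; 3]);

// Right-hand-side scalar: allowed, because FooVec3 is a local type.
impl Mul<f32> for FooVec3 {
    type Output = FooVec3;
    fn mul(self, rhs: f32) -> FooVec3 {
        FooVec3(self.0.map(|c| Foo(c.0 * rhs)))
    }
}

// Left-hand-side scalar: also allowed, for the same reason.
impl Mul<FooVec3> for f32 {
    type Output = FooVec3;
    fn mul(self, rhs: FooVec3) -> FooVec3 {
        rhs * self
    }
}

fn main() {
    let x = FooVec3([Foo(2.0); 3]);
    assert_eq!(x * 2.0, FooVec3([Foo(4.0); 3]));
    assert_eq!(2.0 * x, FooVec3([Foo(4.0); 3]));
}
```

The downside, as noted above, is that every operator the wrapper should support has to be forwarded by hand.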
After playing around a bit, I found an approach that might not require specialisation and would still allow that kind of manipulation. The idea is similar to how Path/PathBuf are handled in public interfaces, i.e. by using a generic parameter P: Into<PathBuf> and then calling into() whenever necessary. This would require a few changes in nalgebra, but it might allow for smoother integration in other libraries/software. I think this is best shown with an example: the component-wise construction of vectors would become:
macro_rules! componentwise_constructors_impl(
    ($($R: expr, $C: expr, [$($($args: ident),*);*] $(;)*)*) => {$(
        impl<T> Matrix<T, Const<$R>, Const<$C>, ArrayStorage<T, $R, $C>> {
            /// Initializes this matrix from its components.
            #[inline]
            #[allow(clippy::too_many_arguments)]
            pub fn new<F: Into<T>>($($($args: F),*),*) -> Self {
                unsafe {
                    Self::from_data_statically_unchecked(
                        ArrayStorage(
                            transpose_array![
                                $(
                                    $($args.into()),*
                                ;)*
                            ]
                        )
                    )
                }
            }
        }
    )*}
);
While I have not seen any performance drop for primitive types, this has the downside that the function can no longer be const. It allows writing the x from above simply as let x = Vector3Foo::new(2.0, 2.0, 2.0); provided that Foo implements From<T>. The same could be done for the Mul/Add/... operators, allowing for much cleaner and more understandable code when using custom floating-point data structures.
What do you think? Is this something that could be of interest at some point for nalgebra?
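The Into-based construction described above can be sketched without nalgebra; vec3_of and its [T; 3] return type are hypothetical stand-ins for the generated Vector3::new:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Foo<T>(T);

// Wrapping conversion; this is what makes the Into-based constructor
// accept bare f32 literals. (impl<T> From<T> for Foo<T> does not
// overlap with core's reflexive impl<T> From<T> for T.)
impl<T> From<T> for Foo<T> {
    fn from(v: T) -> Self {
        Foo(v)
    }
}

// Hypothetical constructor following the P: Into<PathBuf> pattern;
// a [T; 3] stands in for Vector3<T>.
fn vec3_of<T, F: Into<T>>(x: F, y: F, z: F) -> [T; 3] {
    [x.into(), y.into(), z.into()]
}

fn main() {
    // Components are written as plain f32 literals...
    let x: [Foo<f32>; 3] = vec3_of(2.0, 2.0, 2.0);
    // ...but end up wrapped in Foo via From/Into.
    assert_eq!(x, [Foo(2.0), Foo(2.0), Foo(2.0)]);
}
```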
I'm not sure I understand how this would solve the problem with Mul/Add etc.: we can't have both impl Mul<T1> for Matrix<T2, R, C> and impl Mul<Matrix<T1, R, C>> for Matrix<T2, R, C> because they would conflict, yet we need distinct impls because multiplication by a scalar has different semantics and different requirements than multiplying two matrices.
I was thinking of this for example for left scalar multiplication:
impl<T, R: Dim, C: Dim, S: Storage<T, R, C>> Mul<Matrix<T, R, C, S>> for $T
where
    DefaultAllocator: Allocator<T, R, C>,
    T: MulAssign + Scalar + From<$T>,
{
    type Output = OMatrix<T, R, C>;

    #[inline]
    fn mul(self, rhs: Matrix<T, R, C, S>) -> Self::Output {
        let mut res = rhs.into_owned();

        // XXX: optimize our iterator!
        //
        // Using our own iterator prevents loop unrolling, which breaks some
        // optimizations (like SIMD). On the other hand, using the slice
        // iterator is 4x faster.
        // for rhs in res.iter_mut() {
        for rhs in res.as_mut_slice().iter_mut() {
            *rhs *= self.into()
        }
        res
    }
}
Here $T represents the different primitive types (f32, f64, i32, ...). It's straightforward for left multiplication, but I see that it would require quite a bit of rewriting to take a similar approach for right-hand-side multiplication (since that is implemented generically, not just on primitive types, in nalgebra).
At least this would allow writing let b = 2.0 * Vector3Foo::new(2.0, 2.0, 2.0);