Poor choice of default tolerance in SVD computation
Currently, the tolerance in the SVD defaults to a small multiple of approx::AbsDiffEq::default_epsilon(). See:
https://github.com/dimforge/nalgebra/blob/d055f22/src/linalg/svd.rs#L98
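Concretely, the convenience entry point affected is the plain svd() call, which offers no way to pass a different tolerance (a minimal sketch; the matrix is arbitrary):

```rust
use nalgebra::DMatrix;

fn main() {
    let m = DMatrix::<f64>::from_row_slice(2, 2, &[4.0, 0.0, 0.0, 0.25]);
    // svd() picks the convergence tolerance internally from
    // approx::AbsDiffEq::default_epsilon(); there is no way to
    // override it through this entry point.
    let svd = m.svd(true, true);
    println!("{}", svd.singular_values);
}
```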
The problem is that this parameter is badly and confusingly named (see the discussion in the approx crate): default_epsilon() is not related to the machine epsilon, i.e., it is not the relevant relative precision, but a default tolerance on the absolute difference. The proper method is instead the, again not terribly well-named, approx::RelativeEq::default_max_relative().
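To illustrate the difference between the two (a minimal sketch; the value 1e8 and the tolerances are arbitrary):

```rust
use approx::{AbsDiffEq, RelativeEq};

fn main() {
    // abs_diff_eq bounds the *absolute* difference: |a - b| <= epsilon.
    // relative_eq bounds the difference *relative to the magnitudes*,
    // roughly |a - b| <= max(|a|, |b|) * max_relative.
    let (a, b) = (1.0e8_f64, 1.0e8 + 1.0);

    // In absolute terms the two differ by 1.0, so any epsilon-sized
    // absolute tolerance rejects them:
    assert!(!a.abs_diff_eq(&b, 1e-6));

    // In relative terms they agree to about 8 digits:
    assert!(a.relative_eq(&b, 0.0, 1e-6));

    // For f32/f64, approx uses machine epsilon for *both* defaults,
    // which is exactly the blurring described below:
    println!("default_epsilon      = {:e}", f64::default_epsilon());
    println!("default_max_relative = {:e}", f64::default_max_relative());
}
```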
To make matters worse, there is usually no sensible default for an absolute tolerance, since it depends on the scaling of the problem, which is why most float comparisons set it to zero. However, in their default implementations for f32 and f64, approx fills in machine epsilon for both the absolute and the relative default tolerance, thereby further blurring the distinction between the two. In the xprec package we instead, properly, set it to zero, which then breaks the SVD here.
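As a workaround (a sketch; the factor 5 is illustrative and not necessarily nalgebra's internal multiple), one can sidestep the default entirely by calling try_svd with an explicit tolerance:

```rust
use nalgebra::DMatrix;

fn main() {
    let m = DMatrix::<f64>::from_row_slice(2, 3, &[
        3.0, 1.0, 2.0,
       -1.0, 4.0, 0.5,
    ]);

    // Pass the tolerance explicitly instead of relying on
    // AbsDiffEq::default_epsilon(); max_niter = 0 leaves the
    // iteration count unlimited.
    let eps = 5.0 * f64::EPSILON; // illustrative multiple
    let svd = m.try_svd(true, true, eps, 0)
        .expect("SVD did not converge");

    println!("singular values: {}", svd.singular_values);
}
```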