num-bigint
Reconsider PartialEq and PartialOrd with primitives
This afternoon I tried updating some of my numerical code (Project Euler stuff) to use the master branch of num-bigint for pre-release testing. Some of the changes needed are expected, but the one that's really killing me is the expanded PartialEq and PartialOrd (#105/#136). This broke type inference in many places, even where num-bigint wasn't used at all!
I have a common euler crate with utility stuff, which does pull in num-bigint. However, this problem p082 doesn't use anything with bigint, yet it's affected:
error[E0283]: type annotations needed for `&ndarray::ArrayBase<ndarray::data_repr::OwnedRepr<u32>, ndarray::dimension::dim::Dim<[usize; 2]>>`
--> problems/p082/src/main.rs:44:29
|
37 | let weights = &euler::square_from_iter(
| ------- consider giving `weights` the explicit type `&ndarray::ArrayBase<ndarray::data_repr::OwnedRepr<u32>, ndarray::dimension::dim::Dim<[usize; 2]>>`, where the type parameter `usize` is specified
...
44 | assert!(weights.nrows() < MAX.into());
| ^ cannot infer type for type `usize`
|
= note: cannot resolve `usize: std::cmp::PartialOrd<_>`
That's a bad error about weights -- the problem is really in the type of MAX.into(), which previously inferred correctly from u8 to usize. I may try to reduce that bad message to a test case for a rustc bug. But I'm really concerned that we've affected type inference from afar.
cc @hansihe @birkenfeld -- have you tried using master num-bigint on any big projects?
(I can't say I have big projects using num-bigint myself, so I'm not qualified to push any way forward here.)
That certainly looks nasty - makes you wish for binary operation traits to have defaults for inferring RHS == LHS types... This basically makes it impossible to implement nice interoperability between builtin and third-party numeric types.
I was trying to work out why this isn't so bad with other binary operators, like Add, and I think it's because those already can't be inferred with just core alone. The primitive integers all implement Add with value or reference RHS, which makes x + y.into() ambiguous in a way the compiler won't even try to solve. Adding bigints into that mix doesn't make it any worse.
Whereas with PartialEq and PartialOrd, core only implements the primitive integers against themselves, so type inference has an immediate solution -- until we disrupt that with bigints.
I'd like to find a compromise where we can still solve the rough goal -- comparing bigints with primitives without converting (and allocating) the latter. Maybe we could add our own BigOrd trait with distinct methods (big_cmp, big_eq, etc.)? This wouldn't be as nice as having the real comparison operators, but it would avoid the far-reaching disruption to type inference.
Interesting issue. Why are you not concerned about the .into() calls? For me, those are always something to avoid in non-generic contexts (the thinking being that they should only be used for argument conversion).
The conclusion is probably good anyway; I'm just making noise about my aversion to naked .into() calls.
"Why are you not concerned about the .into() calls?"
In comparison context? I guess I wasn't concerned because it was working fine. Maybe that was lazy of me, but still, that code broke here. I also had some comparisons with sum() that became ambiguous here -- again, it might be lazy not to use an explicit type, but it was working.
I think it's pretty idiomatic to prefer type inference over explicit types when you can get away with it.
It is idiomatic, but using AsRef and Into outside of argument conversion has always been a trap: code will break through type inference in exactly this way. It's not localized here; it's a general problem with those traits. IMO it's always been wrong to use them like this, but I can't turn the tide. For example, a failed attempt to turn the tide :) https://github.com/rust-lang/rust/issues/36443
Would it make sense to also remove the other operators (+, *, etc.) between BigInt/BigUint and primitive types? This would be consistent with how they are not allowed between different primitive types.
To avoid losing performance, the BigUint representation could be special-cased for a single BigDigit so that conversions are very cheap.
Argh, I was just about to open a PR adding those implementations when I stumbled upon this issue.
Is there anything that can be done here? I'd rather not allocate a Vec just to compare with a u32.
It sounds like a bigger Rust problem with how hard we try to preserve type inference. IMHO, breaking a generic .into() is fine; where things are ambiguous, Ty::from(...) should be used.