
Various new p-adic implementations

Open fredrik-johansson opened this issue 2 months ago • 5 comments

Three problems with the current padic type that I've discussed with @r-mb at the workshop:

  • It uses a fixed-point notion of precision, but in some applications floating-point precision is better (see https://xavier.caruso.ovh/papers/publis/course-padic.pdf for a nice overview of various precision models).
  • It doesn't do error tracking.
  • It's based on somewhat naive fmpz arithmetic, not taking advantage of all our nice code for modular arithmetic with precomputed inverses etc.

I think it would be best to add a family of new p-adic implementations, using the generics system so that there is just one interface. I have in mind at least the following:

Floating-point arithmetic: elements have the form $p^{\mathrm{exp}} \cdot \mathrm{man}$ with $\mathrm{man}$ a unit reduced mod $p^n$, where $n$ is the (relative) precision.

padic_float_nmod, padic_float_mpn, padic_float_fmpz

These would respectively use nmod, mpn_mod and fmpz_mod arithmetic. Unlike the current padic type, precision would be stored in the context object rather than in the elements, so an element would just be a tuple (man, exp) (with exp of type slong).

Then for ball arithmetic: elements have the form $p^{\mathrm{exp}} \cdot \mathrm{man} + O(p^{\mathrm{err}})$:

padic_ball_nmod, padic_ball_mpn, padic_ball_fmpz

A ball would be a tuple (man, exp, err) (with err an slong).

Question: should we have both floating-point and fixed-point p-adics, and both approximate and ball versions of each one?

Here is a very minimal draft of a padic_float_nmod implementation that allows multiplying two numbers: https://gist.github.com/fredrik-johansson/fc2b52fae116218ae37ca3567fdd503c

It does a simple multiplication 4x faster than padic_mul, so this kind of specialization clearly looks worth it from a performance point of view (I expect even greater speedups in some other cases).

fredrik-johansson avatar Oct 29 '25 14:10 fredrik-johansson

Another detail: Xavier defines p-adic floating-point systems to include NaN and Inf values. This looks nonessential since the generics system allows robust error handling. We could do Inf and NaN if it's something people strongly care about, but personally I would prefer to leave them out.

fredrik-johansson avatar Oct 29 '25 14:10 fredrik-johansson

I just did a quick-and-dirty try with addition, and the padic_float_nmod approach seems about twice as fast.

r-mb avatar Oct 29 '25 17:10 r-mb

Not much to add except to say that I (and I know also @fieker) would love to see a ball version of p-adics with proper ball semantics. The current padic implementation is a bit tricky to use for rigorous computations.

Do you plan to have "exact" values? I think currently -1 cannot be represented exactly because of the choice of a nonnegative residue system.

thofma avatar Oct 29 '25 18:10 thofma

I am not sure I agree. flint has the floating model: numbers are $p^n \cdot u$ with $u$ a unit, up to some precision. All operations in flint return the optimal, guaranteed precision; it's just that there are not a lot of operations. There are different issues:

  • the arithmetic is implemented suboptimally (not using precomputed inverses, etc.)
  • there are not many operations; in particular, linear algebra is missing
  • the model for units is bad: I'd rather use a symmetric residue system to be able to deal with negative numbers. In particular, if I want ramified extensions with "exact", "small" coefficients, they need to be signed. The basis for fast computation of exp/log in ramified extensions is to define them using polynomials whose coefficients are small integers.

The p-adics are "easy"; the q-adics and the (missing in flint) ramified ones are more fun.

For applications, it is important to be able to change precision at will, e.g. when implementing Newton/Hensel lifting.

From memory and the Hecke issues and such, questions for e.g. gcdx(a, b):

  • the gcd is easy: $p^n$ for $n$ the minimal valuation, with precision also based on the precision of $a$ and $b$
  • the cofactors are hard: any pair $(e, f)$ that transforms $(a, b)$ into $g$ works, and its precision can be arbitrary since $ea + fb$ will reduce the precision automatically. The cofactors are never unique, not even in exact arithmetic. This matters for the HNF computation: if the cofactors can be exact, then the HNF can be returned at maximal guaranteed precision; if the cofactors are low-precision, the linear algebra loses a lot of precision.

What is the scope/aim?

fieker avatar Oct 29 '25 19:10 fieker

Flint's padic_t has a fixed-point model in the sense that precision is absolute rather than relative: if you set the precision of the output variable to 10, then $p^2 u$ will have $u$ rounded to 8 digits of precision while $p^{-2} u$ will have $u$ rounded to 12 digits of precision. I think there are situations where you want either model (but I'm not the expert here). @r-mb is interested in doing lots of operations at mostly single-word precision (e.g. mod $13^{10}$), which is why we started looking at an nmod-based floating-point representation.

The plan is definitely to have ball versions with proper ball semantics.

Indeed, I had completely forgotten about the issue with negative units which is an interesting one. We definitely want to be able to have exact balls for exactly representable numbers, so I would tend to agree that a balanced representation makes most sense at least for balls.

Regarding changing precision at will: I think we will basically do as for arb or gr_series, where the target precision is stored in the context object (or passed as an argument to functions, where applicable) rather than in the output variable. At least for the fmpz-based representation, you should be able to mix operands computed at different precisions. This means some fmpz_mod-based optimizations won't be applicable in all cases, but the flexibility is probably worth it. For the nmod and mpn_mod representations it's less clear that this makes sense than for fmpz, but maybe in a context where the representation can go up to $p^N$, we might admit any precision $1 \le n \le N$.

q-adics and ramifieds I can't say anything about.

fredrik-johansson avatar Oct 29 '25 22:10 fredrik-johansson