
Monsieur, x^p = x (mod p), donc Dieu existe. Répondez! March 12, 2011

Posted by Alexandre Borovik in Uncategorized.

I took the liberty of copying it from the Foundations of Mathematics (FOM) Digest. A post by Daniel Mehkeri:

Monsieur, x^p = x (mod p), donc Dieu existe. Répondez !

In _Constructivism in Mathematics_, Troelstra and van Dalen
famously write,

…we do accept that “in principle” we can view 10^10^10 as a sequence of units (i.e. we reject the ultrafinitist objection), and the authors are not sure that this is really less serious than the platonist extrapolation. At least it needs arguments.

I would like to make such an argument. I think it is new: at least, a shallow search fails to turn anything up where it ought to have been mentioned. Ultrafinitism certainly gets discussed a fair bit
on FOM for instance, but I saw nothing in the archives about this, other than my own vague statement last May to the effect that modular exponentiation will be problematic for the ultrafinitist,
which is what I would like to expand on.

THEOREM (Fermat). If p is prime, then x^p = x (mod p) for every natural number x.
PROOF: Consider the set of all sequences of length p of symbols from an alphabet of size x. Its size is x^p. The number of distinct cyclic permutations of a given sequence divides p. But p is prime, so either there are p of them, or just one. In the latter case the sequence will consist of p repetitions of the same letter. There are x distinct cases of this. So the remaining x^p - x sequences are partitioned into orbits of size p. So p divides x^p - x, so x^p = x (mod p). QED
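
To see the counting argument in action, here is a minimal sketch in Python (mine, not part of the original post) that, for small x and p, enumerates all x^p sequences, groups them into orbits under cyclic rotation, and confirms the bookkeeping: x singleton orbits of constant sequences, and all remaining orbits of size exactly p.

from itertools import product

def orbit_count_check(x, p):
    # canonical representative of a sequence's cyclic orbit: its smallest rotation
    def canon(seq):
        return min(seq[i:] + seq[:i] for i in range(p))
    orbits = {canon(seq) for seq in product(range(x), repeat=p)}
    constant = sum(1 for rep in orbits if len(set(rep)) == 1)
    nontrivial = len(orbits) - constant
    # x constant sequences plus p per remaining orbit account for all x^p sequences
    assert constant == x
    assert x + p * nontrivial == x ** p
    return (x ** p - x) % p == 0   # hence p divides x^p - x

print(orbit_count_check(2, 7))   # True
print(orbit_count_check(3, 5))   # True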

This is a very nice proof. But can it really be valid, except for small p? Consider the case of p=1031, x=2: we have to consider the set of _all_ binary sequences of length 1031. There would be far more sequences than Planck-time-by-Planck-volumes in the entire history of the observable universe from the big bang to the estimated death of the last star. Some would call that mysticism.
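
For concreteness, here is a back-of-envelope version of that comparison (mine, with rounded constants that are only meant to be illustrative):

# rough constants (assumptions for illustration only)
planck_time   = 5.4e-44        # seconds
planck_length = 1.6e-35        # metres
radius        = 4.4e26         # metres, radius of the observable universe
horizon       = 1e14 * 3.2e7   # seconds, ~10^14 years until the last stars die

volume_cells    = (4 / 3) * 3.14159 * radius**3 / planck_length**3   # ~1e185
time_steps      = horizon / planck_time                              # ~1e65
spacetime_cells = volume_cells * time_steps                          # ~1e250

print(spacetime_cells < 2**1031)   # True: 2^1031 is about 1e310, vastly larger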

Now, it might be thought there could be an alternate ultrafinitary proof. Here is a reason to think otherwise: suppose for example Bounded Arithmetic could prove it. Then we could extract a
polynomial-time algorithm which, given x and p such that x^p =/= x (mod p), finds a non-trivial divisor of p. But no such algorithm is known. This isn’t definitive (if I could _prove_ there was no
such algorithm, I’d be rich) but it is doubtful that any exist.

This doesn’t just apply to BA and to the idea that feasibility means PTIME. It is enough to know that there is no known algorithm for factoring large numbers which is feasible in any sense, while modular exponentiation is well within current technology. We can easily code up a computer program (a sketch follows below) to check that indeed x^1031 = x (mod 1031) for all 0<=x<1031. Or even that 2^p =/= 2 (mod p) when

p = 25195908475657893494027183240048398571429282126204032027777137
83604366202070759555626401852588078440691829064124951508218929
85591491761845028084891200728449926873928072877767359714183472
70261896375014971824691165077613379859095700097330459748808428
40179742910064245869181719511874612151517265463228221686998754
91824224336372590851418654620435767984233871847744479207399342
36584823824281198163815010674810451660377306056201619676256133
84414360383390441495263443219011465754445417842402092461651572
33507787077498171257724679629263863563732899121548314381678998
85040445364023527381951378636564391212010397122822120720357,

so that, by theology, we know p is composite. But nobody knows a factor [see Wikipedia, “RSA Factoring Challenge”]. “p is composite” is a Delta_0 sentence (Sigma^b_1 in bounded quantifiers), for which we have a constructive proof, but no known ultrafinitary proof.
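
For what it is worth, here is roughly what such a program looks like (a sketch, not the original poster's code; Python's built-in pow(x, p, p) does modular exponentiation by repeated squaring, so both checks finish in well under a second):

p = 1031
print(all(pow(x, p, p) == x for x in range(p)))   # True: x^1031 = x (mod 1031) for all 0 <= x < 1031

# For the 617-digit number quoted above, paste its digits into one integer
# literal, say rsa_2048 (a name chosen here for illustration), and evaluate
#     pow(2, rsa_2048, rsa_2048) != 2
# It comes out True, so the number is composite -- with no factor in sight.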

Notice what happens. Or rather, what doesn’t.

For p < 32, everything is just fine.

For p on the order of 2^10, the proof is problematic because “the set of all sequences of length p” is too big. But we can check all cases by direct computation.

For p on the order of 2^64, the idea of even a single “sequence of length p” is now doubtful, being at the edge of current storage technology. And we can’t hope to check all cases. But it is still feasible to directly check whether any given p is prime, and to check the equation for any given x,p pair in this range.
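
The reason a single x,p pair stays checkable is square-and-multiply: the work grows with the number of binary digits of p, not with p itself. A minimal sketch (my own illustration, using the Mersenne prime 2^61 - 1 as a stand-in for a prime of roughly this size):

def modexp(x, e, m):
    # right-to-left square-and-multiply: about log2(e) squarings mod m in total
    result = 1
    x %= m
    while e > 0:
        if e & 1:
            result = (result * x) % m
        x = (x * x) % m
        e >>= 1
    return result

p = 2**61 - 1                  # a known Mersenne prime near this range
print(modexp(3, p, p) == 3)    # True, after only ~61 squarings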

For p on the order of 2^(2^10), “the set of all sequences of length p” ought to be empty. There are no such sequences in reality! Yet we can still check any given x,p pair. It is tricky to check whether a given p really is prime without circularly resorting to theological number theory. But there are still ways to go about it. In fact, there is already lots of evidence at this level, in that much of modern cryptography depends on Fermat’s little theorem for numbers of this size, and it works!
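
One concrete illustration at this scale (mine, not the original poster's): take p = 2^1279 - 1, a 386-digit Mersenne prime whose primality is established by the Lucas-Lehmer test rather than by surveying the “set of all sequences of length p”. Fermat's little theorem checks out instantly for any x we care to try:

import random

p = 2**1279 - 1                # M1279, a 386-digit Mersenne prime (Robinson, 1952)
for _ in range(5):
    x = random.randrange(2, p)
    assert pow(x, p, p) == x   # x^p = x (mod p), verified in milliseconds
print("Fermat holds for 5 random x modulo 2^1279 - 1")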

Of course none of the above is statistically significant with respect to the Pi_1 theorem, but that’s not the point. The problem for ultrafinitism is, as I say, already Delta_0: why should it be right even in most of these cases, never mind be infallible? (There is no probabilistic feasible algorithm to factor large numbers, either.) Also, why is there no sign of the difference between feasible and
infeasible?

Because from an ultrafinitist perspective, the numbers at these levels are qualitatively different. Certainly our ability to check the statement changes drastically. And yet, there is no hint of any ontological change. Nothing at all happens to Fermat’s little theorem, even up to x^p = 2^(2^(2^10)).

The constructivist simply affirms that Elementary Recursive Arithmetic is TRUE; God made the integers, as Kronecker said. The ultrafinitist has some explaining to do. If these are just our collective delusions and meditations about entities that can’t exist in reality, then how to explain the very real computations?

Daniel Mehkeri
