## Monsieur, x^p = x (mod p), donc Dieu existe. Repondez!

*March 12, 2011*

*Posted by Alexandre Borovik in Uncategorized.*


I took the liberty of copying this from the Foundations of Mathematics (FOM) Digest. A post by Daniel Mehkeri:

**Monsieur, x^p = x (mod p), donc Dieu existe. Repondez !**

In _Constructivism in Mathematics_, Troelstra and van Dalen famously write:

> …we do accept that "in principle" we can view 10^10^10 as a sequence of units (i.e. we reject the ultrafinitist objection), and the authors are not sure that this is really less serious than the platonist extrapolation. At least it needs arguments.

I would like to make such an argument. I think it is new: at least, a shallow search fails to turn anything up where it ought to have been mentioned. Ultrafinitism certainly gets discussed a fair bit

on FOM for instance, but I saw nothing in the archives about this, other than my own vague statement last May to the effect that modular exponentiation will be problematic for the ultrafinitist,

which is what I would like to expand on.

THEOREM (Fermat). If p is prime, x^p = x (mod p).

PROOF: Consider the set of all sequences of length p of symbols from an alphabet of size x. Its size is x^p. The number of distinct cyclic permutations of a given sequence divides p. But p is prime, so either there are p of them, or just one. In the latter case the sequence consists of p repetitions of the same letter. There are x distinct cases of this. So the remaining x^p - x sequences are partitioned into orbits of size p. So p divides x^p - x, and x^p = x (mod p). QED

This is a very nice proof. But can it really be valid, except for small p? Consider the case p = 1031, x = 2: we have to consider the set of _all_ binary sequences of length 1031. There would be far more sequences than Planck-time-by-Planck-volumes in the entire history of the observable universe, from the big bang to the estimated death of the last star. Some would call that mysticism.
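The counting argument in the proof can be watched in action for tiny x and p, and the theorem itself checked directly for p = 1031. A small Python sketch (my illustration, not part of the original post):

```python
from itertools import product

def rotation_orbits(x, p):
    """Mirror the proof: partition all length-p sequences over an
    alphabet of size x into orbits under cyclic rotation.
    Only feasible for tiny x and p -- which is exactly the point."""
    seqs = set(product(range(x), repeat=p))
    orbits = []
    while seqs:
        s = seqs.pop()
        orbit = {s[i:] + s[:i] for i in range(p)}
        seqs -= orbit
        orbits.append(orbit)
    return orbits

# For prime p = 5, x = 2: two singleton orbits (the constant sequences)
# and (2^5 - 2)/5 = 6 orbits of size 5, so 5 divides 2^5 - 2.
sizes = sorted(len(o) for o in rotation_orbits(2, 5))
assert sizes == [1, 1] + [5] * 6

# The theorem itself is cheap to confirm for p = 1031 and all x,
# even though the set the proof talks about is astronomically large.
p = 1031
assert all(pow(x, p, p) == x for x in range(p))
```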

Now, it might be thought there could be an alternate ultrafinitary proof. Here is a reason to think otherwise: suppose, for example, that Bounded Arithmetic could prove it. Then we could extract a polynomial-time algorithm which, given x and p such that x^p =/= x (mod p), finds a non-trivial divisor of p. But no such algorithm is known. This isn't definitive (if I could _prove_ there is no such algorithm, I'd be rich), but it is doubtful that any exists.

This doesn't just apply to BA and to the idea that feasibility means PTIME. It is enough to know that there is no known algorithm for factoring large numbers which is feasible in any sense, while modular exponentiation is well within current technology. We can easily code up a computer program to check that indeed x^1031 = x (mod 1031) for all 0 <= x < 1031. Or even that 2^p =/= 2 (mod p) when

    p = 25195908475657893494027183240048398571429282126204032027777137
        83604366202070759555626401852588078440691829064124951508218929
        85591491761845028084891200728449926873928072877767359714183472
        70261896375014971824691165077613379859095700097330459748808428
        40179742910064245869181719511874612151517265463228221686998754
        91824224336372590851418654620435767984233871847744479207399342
        36584823824281198163815010674810451660377306056201619676256133
        84414360383390441495263443219011465754445417842402092461651572
        33507787077498171257724679629263863563732899121548314381678998
        85040445364023527381951378636564391212010397122822120720357,

so that, by theology, we know p is composite. But nobody knows a factor [see Wikipedia, "RSA Factoring Challenge"]. "p is composite" is a Delta_0 sentence (Sigma^b_1 in bounded quantifiers), for which we have a constructive proof, but no known ultrafinitary proof.
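The "theological" check on the RSA-2048 challenge number is indeed a one-liner with fast modular exponentiation. A Python sketch (mine, for illustration; the digits are the number quoted above):

```python
# The RSA-2048 challenge number (617 decimal digits), as quoted above.
p = int(
    "25195908475657893494027183240048398571429282126204032027777137"
    "83604366202070759555626401852588078440691829064124951508218929"
    "85591491761845028084891200728449926873928072877767359714183472"
    "70261896375014971824691165077613379859095700097330459748808428"
    "40179742910064245869181719511874612151517265463228221686998754"
    "91824224336372590851418654620435767984233871847744479207399342"
    "36584823824281198163815010674810451660377306056201619676256133"
    "84414360383390441495263443219011465754445417842402092461651572"
    "33507787077498171257724679629263863563732899121548314381678998"
    "85040445364023527381951378636564391212010397122822120720357"
)
# Three-argument pow does modular exponentiation by repeated squaring,
# so this runs in a fraction of a second despite the size of p.
# By Fermat's little theorem, 2^p =/= 2 (mod p) proves p is composite --
# without exhibiting a factor.
assert pow(2, p, p) != 2
```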

Notice what happens. Or rather, what doesn’t.

For p < 32, everything is just fine.

For p on the order of 2^10, the proof is problematic because "the set of all sequences of length p" is too big. But we can check all cases by direct computation.

For p on the order of 2^64, the idea of even a single "sequence of length p" is now doubtful, being at the edge of current storage technology. And we can't hope to check all cases. But it is still feasible to directly check whether any given p is prime, and to check the equation for any given x, p pair in this range.

For p on the order of 2^2^10, "the set of all sequences of length p" ought to be empty. There are no such sequences in reality! Yet we can still check any given x, p pair. It is tricky to check whether a given p really is prime without circularly resorting to theological number theory. But there are still ways to go about it. In fact, there is already lots of evidence at this level, in that much of modern cryptography depends on Fermat's little theorem for numbers of this size, and it works!

Of course, none of the above is statistically significant with respect to the Pi_1 theorem, but that's not the point. The problem for ultrafinitism is, as I say, already Delta_0: why should the theorem be right even in most of these cases, never mind be infallible? (There is no probabilistic feasible algorithm to factor large numbers, either.) Also, why is there no sign of the difference between feasible and infeasible? Because, from an ultrafinitist perspective, the numbers in these levels are qualitatively different. Certainly our ability to check the statement changes drastically. And yet, there is no hint of any ontological change. Nothing at all happens to Fermat's little theorem, even up to x^p = 2^2^2^10.

The constructivist simply affirms that Elementary Recursive Arithmetic is TRUE; God made the integers, as Kronecker said. The ultrafinitist has some explaining to do. If these are just our collective delusions and meditations about entities that can't exist in reality, then how to explain the very real computations?

Daniel Mehkeri

Zeilberger has a way (although not a nice one, IMO). He distinguishes finite integers like 53 from symbolic integers like n. See end of http://users.uoa.gr/~apgiannop/zeilberger.pdf

@Veky: yours is a very good point. Zeilberger's approach is actually quite applicable here, and becomes even more obvious if one replaces arithmetic modulo a large prime by calculations in the group of points of an elliptic curve over a large finite field.

As I say, the problem is not just the Pi_1 theorem but is already Delta_0. In Zeilberger's terms, I guess I would say the problem isn't just the theorem, but what happens when you substitute specific values into the theorem, like x = 2 and p = [RSA-2048 challenge, quoted above].

So I don't see how Zeilberger's approach works, and it's not clear to me that Zeilberger thinks it's applicable here. Though this may just be that I misunderstand!

(Ideally I'd like to have this debate on FOM so more people can respond.)

I mean that what matters in practical applications of modular arithmetic is that we are dealing with sufficiently long strings of symbols which can be manipulated by certain efficient algorithms and provide for a variety of one-way (or trap-door) functions. When you are dealing with a prime number which is several hundred digits long, it is irrelevant whether you can reach it by adding 1: what matters is that we can compute in a very large cyclic group. The same group can be provided within a different ambient structure, the group of points of an elliptic curve, where counting by 1 is more or less meaningless because we do not have a distinguished starting point.
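The efficient symbol manipulation in question is essentially square-and-multiply exponentiation, which touches only about log2(p) residues rather than counting anywhere near p. A minimal sketch (Python's built-in three-argument `pow` already does this; the function below is purely illustrative):

```python
def modexp(base, exp, mod):
    """Right-to-left binary (square-and-multiply) modular exponentiation:
    an efficient manipulation of strings of symbols that never needs to
    'reach' the exponent by counting up from 0."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # current binary digit of exp is 1
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next digit
        exp >>= 1
    return result

# An instance of Fermat's little theorem: 1031 is prime, so 2^1031 = 2 (mod 1031).
assert modexp(2, 1031, 1031) == 2
assert modexp(2, 1031, 1031) == pow(2, 1031, 1031)
```

The loop runs about log2(exp) times, which is why a 2048-bit modular power is instantaneous while counting to a 2048-bit number is not.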

Sure, you can do computations on large numbers without being able to construct them from 0 by repeatedly adding 1.

My point is that you cannot explain the results of the computations.

If they are just strings of symbols, then how do we justify induction? E.g. if f is an efficient algorithm and f(x) = f(x+1) for all x, how can I justify concluding f(0) = f(2^2^10)?

And if I can’t justify induction, then how do I prove Fermat’s little theorem?

I need to tread warily here: Fermat's Little Theorem is a special case of Lagrange's Theorem, which is accepted to be true for all finite groups, including non-commutative groups. I have no idea whether Lagrange's Theorem is based on induction or not; at first glance, it is not.

But: my previous assertion is true modulo the description of units (invertible elements) in the ring Z/pZ, which uses division with remainder, and this could be more open to ultrafinitist scepticism. Is that what you mean? At a symbolic level, division with remainder is a remarkably easy real-time operation.

Well, I think I am being generous by ignoring group theory. Going to group theory should make matters even worse for an ultrafinitist.

What is a cyclic group?

Isn’t it a group in which every element can be reached by repeated multiplication from a single generator?

How can an ultrafinitist believe that a large finite group is cyclic, but disbelieve that a large number can be reached by repeatedly adding 1?

It depends on how this finite group is presented. In this context, the concept of an "abstract" cyclic group is meaningless; what matters is the concrete representation.

Take a certain concrete representation of a large finite group. Consider what it means to constructively prove that g generates G:

*FOR ALL* h in G *THERE EXISTS* n < |G| such that g^n = h.

For a constructivist there is no problem. There is always an algorithm to find n; even if it's not practical, given enough time and space, it will work.

"Given enough time and space" is not an excuse available to the ultrafinitist. They don't believe an infeasible computation "really" terminates at all, so it can't be a proof of anything.

So the ultrafinitist runs into the discrete log problem, which for many concrete group representations is believed to be hard.

Similarly, it seems an ultrafinitary proof of Fermat's little theorem involves efficient factoring, as I originally said.

What is really remarkable is that computations in very big finite cyclic groups are routinely carried out, every second, in millions of computers and cryptographic microchips all over the world — and some of these mathematical objects do not exist for an ultrafinitist.

Returning to your example: if we know the prime factors of the order of G, then checking that a specific element g generates G is easy: you need to check that g^k =/= 1 for all proper divisors k of |G| (in fact it suffices to check k = |G|/q for each prime q dividing |G|). Things are trickier when you do not know the prime factors of |G| — but in many applications we do not care about this; we care only that G *behaves* like a cyclic group, up to some astronomically small probability of error.
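This check can be made concrete for the multiplicative group (Z/pZ)*, whose order is p - 1. A Python sketch (my illustration; the small numbers 23, 5 and 2 below are arbitrary examples, not from the thread):

```python
def is_generator(g, p, prime_factors):
    """Check that g generates (Z/pZ)* for prime p, given the prime
    factors of the group order p - 1: g is a generator if and only if
    g^((p-1)/q) =/= 1 (mod p) for every prime q dividing p - 1."""
    return all(pow(g, (p - 1) // q, p) != 1 for q in prime_factors)

# p = 23 is prime; the group order is 22 = 2 * 11.
assert is_generator(5, 23, [2, 11])       # 5 is a primitive root mod 23
assert not is_generator(2, 23, [2, 11])   # 2^11 = 1 (mod 23), order 11
```

Note that the check uses only a handful of modular exponentiations; the hard part, as the comment says, is knowing the prime factors of |G| in the first place.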

How would an ultrafinitist assess probabilistic proofs like the Miller-Rabin primality test? If you run it 100 times and the test does not refute that the tested number p is prime, then the probability that p is not prime becomes (provably?) smaller than 1/2^100. What are numbers like 1/2^100 for an ultrafinitist?
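For concreteness, here is a standard Miller-Rabin sketch in Python (my illustration, not from the thread); note that each round is itself a modular exponentiation, which is exactly the operation under discussion:

```python
import random

def miller_rabin(n, rounds=100):
    """Probabilistic primality test: a composite n survives one round
    with probability at most 1/4, so 100 rounds leave an error
    probability below 4^-100."""
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # probably prime

assert miller_rabin(1031)               # 1031, as used above, is prime
assert not miller_rabin(1031 * 1033)    # a composite is caught
```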

Yes, that’s why I find it so interesting. Modular exponentiation with 1024 bits is NOTHING these days.

Before the ultrafinitist worries about 2^-100 (nitpick: isn't it 4^-100?), why does the M-R test (or even a deterministic Pratt certificate) have anything to do with primality at all? Back to Fermat's little theorem!

It’s funny, I’ve heard a lot of debate about how, if p passed the M-R test so many times, do we REALLY know it’s prime? Sometimes they factor in the probability that the random number generator is bad, or the hardware is bad, or whatever. I’ve never heard anyone factor in the probability that g^p might not really exist …

@Daniel Mehkeri:

“I’ve never heard anyone factor in the probability that g^p might not really exist …”

This is brilliant!

Yes, of course, you are right, it is 4^-100. But what is the difference if 2^-100 and 4^-100 both do not exist?