This is a nearly complete, previously unpublished manuscript by Boris Weisfeiler. The results were announced by him in August 1984. Soon after, in early January 1985, he disappeared during a hiking trip in Chile.
The investigation into Boris Weisfeiler's disappearance is still ongoing in Chile; see http://www.weisfeiler.com/boris
Sir, x^p = x (mod p), therefore God exists. Reply!
In _Constructivism in Mathematics_, Troelstra and van Dalen
famously write, “…we do accept that ‘in principle’ we can view 10^{10^{10}} as a sequence of units (i.e. we reject the ultrafinitist objection)”, and the authors are not sure that this is really less serious than the platonist extrapolation. At least it needs arguments.
I would like to make such an argument. I think it is new: at least, a shallow search fails to turn anything up where it ought to have been mentioned. Ultrafinitism certainly gets discussed a fair bit
on FOM for instance, but I saw nothing in the archives about this, other than my own vague statement last May to the effect that modular exponentiation will be problematic for the ultrafinitist,
which is what I would like to expand on.

THEOREM (Fermat). If p is prime, x^p = x (mod p).
PROOF: Consider the set of all sequences of length p of symbols from an alphabet of size x. Its size is x^p. The number of distinct cyclic permutations of a given sequence divides p. But p is prime, so either there are p of them, or just one. In the latter case the sequence will consist of p repetitions of the same letter. There are x distinct cases of this. So the remaining x^p - x sequences are partitioned into orbits of size p. So p divides x^p - x, so x^p = x (mod p). QED

This is a very nice proof. But can it really be valid, except for small p? Consider the case of p=1031, x=2: we have to consider the set of _all_ binary sequences of length 1031. There would be far more sequences than Planck-time-by-Planck-volumes in the entire history of the observable universe from the big bang to the estimated death of the last star. Some would call that mysticism.
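For small p and x, the counting in this proof can be machine-checked directly. Here is a sketch (my own illustration; the function name is invented) that enumerates all x^p sequences, partitions them into cyclic orbits, and verifies the count:

```python
from itertools import product

def orbit(seq):
    # All cyclic rotations of a sequence (as a set, so duplicates collapse).
    return {seq[i:] + seq[:i] for i in range(len(seq))}

def check_fermat_by_orbits(x, p):
    # Partition the x^p sequences of length p into cyclic orbits.
    seen, orbit_sizes = set(), []
    for seq in product(range(x), repeat=p):
        if seq not in seen:
            o = orbit(seq)
            seen |= o
            orbit_sizes.append(len(o))
    # Each orbit has size 1 (a constant sequence) or size p (p prime).
    assert all(s in (1, p) for s in orbit_sizes)
    assert orbit_sizes.count(1) == x          # the x constant sequences
    assert sum(orbit_sizes) == x**p           # orbits partition everything
    # Hence p divides x^p - x.
    assert (x**p - x) % p == 0

check_fermat_by_orbits(2, 5)
check_fermat_by_orbits(3, 5)
```

The exhaustive enumeration is of course exactly the step that becomes unthinkable for p = 1031.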
Now, it might be thought there could be an alternate ultrafinitary proof. Here is a reason to think otherwise: suppose for example Bounded Arithmetic could prove it. Then we could extract a
polynomial-time algorithm which, given x and p such that x^p =/= x (mod p), finds a non-trivial divisor of p. But no such algorithm is known. This isn’t definitive (if I could _prove_ there was no
such algorithm, I’d be rich) but it is doubtful that any exists.

This doesn’t just apply to BA and to the idea that feasibility means PTIME. It is enough to know that there is no known algorithm for factoring large numbers which is feasible in any sense, while modular exponentiation is well within current technology. We can easily code up a computer program to check that indeed x^{1031} = x (mod 1031) for all 0<=x<1031. Or even that 2^p =/= 2 (mod p) when
p = 25195908475657893494027183240048398571429282126204032027777137
83604366202070759555626401852588078440691829064124951508218929
85591491761845028084891200728449926873928072877767359714183472
70261896375014971824691165077613379859095700097330459748808428
40179742910064245869181719511874612151517265463228221686998754
91824224336372590851418654620435767984233871847744479207399342
36584823824281198163815010674810451660377306056201619676256133
84414360383390441495263443219011465754445417842402092461651572
33507787077498171257724679629263863563732899121548314381678998
85040445364023527381951378636564391212010397122822120720357, so that, by theology, we know p is composite. But nobody knows a factor [see Wikipedia, “RSA Factoring Challenge”]. “p is composite” is a Delta_0 sentence (Sigma^b_1 in bounded quantifiers), for which we have a constructive proof, but no known ultrafinitary proof.
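The feasible direction really is easy. Here is a sketch in Python (the function name `fermat_witness` is my own), using the built-in fast modular exponentiation:

```python
# Check that x^1031 = x (mod 1031) for every x with 0 <= x < 1031,
# using fast modular exponentiation (three-argument pow).
assert all(pow(x, 1031, 1031) == x for x in range(1031))

def fermat_witness(x, p):
    # If x^p =/= x (mod p), then by the theorem p must be composite,
    # even though no factor of p has been exhibited.
    return pow(x, p, p) != x

assert not fermat_witness(2, 1031)  # 1031 is prime: no witness here
assert fermat_witness(2, 1035)      # 1035 = 3*3*5*23 is composite; 2 witnesses this
```

A failed witness does not by itself prove primality; the point is only that each individual check is cheap, at sizes where the set-theoretic proof is already unthinkable.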
Notice what happens. Or rather, what doesn’t.
For p < 32, everything is just fine.
For p on the order of 2^{10}, the proof is problematic because “the set of all sequences of length p” is too big. But we can check all cases by direct computation.
For p on the order of 2^{64}, the idea of even a single “sequence of length p” is now doubtful, being at the edge of current storage technology. And we can’t hope to check all cases. But it is still feasible to directly check whether a given p is prime, and to check the equation for any given x,p pair in this range.
For p on the order of 2^{2^{10}}, “the set of all sequences of length p” ought to be empty. There are no such sequences in reality! Yet we can still check any given x,p pair. It is tricky to check whether a given p really is prime without circularly resorting to theological number theory. But there are still ways to go about it. In fact, there is already lots of evidence at this level, in that much of modern cryptography depends on Fermat’s little theorem for numbers of this size, and it works!
Of course none of the above is statistically significant with respect to the Pi_1 theorem, but that’s not the point. The problem for ultrafinitism is, as I say, already Delta_0: why should it be right even in most of these cases, never mind be infallible? (There is no probabilistic feasible algorithm to factor large numbers, either.) Also, why is there no sign of the difference between feasible and
infeasible?

Because from an ultrafinitist perspective, the numbers in these levels are qualitatively different. Certainly our ability to check the statement changes drastically. And yet, there is no hint of any ontological change. Nothing at all happens to Fermat’s little theorem, even up to x^p = 2^{2^{2^{10}}}.

The constructivist simply affirms that Elementary Recursive Arithmetic is TRUE; God made the integers, as Kronecker said. The ultrafinitist has some explaining to do. If these are just our collective delusions and meditations about entities that can’t exist in reality, then how to explain the very real computations?
Daniel Mehkeri
In his Systematic Theology, Vol. III, Wolfhart Pannenberg argues that God as eternal comprehends the different moments of time simultaneously and orders them to constitute a whole or totality. The author contends that this approach to time and eternity might solve the logical tension between the classical notion of divine sovereignty and the common sense belief in creaturely spontaneity/human freedom. For, if the existence of the events constituting a temporal sequence is primarily due to the spontaneous decisions of creatures, and if their being ordered into a totality or meaningful whole is primarily due to the superordinate activity of God, then both God and creatures play indispensable but nevertheless distinct roles in the cosmic process.
Alexandre was asking there why the results of such amalgamation should be the kinds of entity we encounter through different routes. I should imagine that the answer to this has much in common with answers to the questions Michiel Hazewinkel is posing in Niceness Theorems:
Many things in mathematics seem almost unreasonably nice. This includes objects, counterexamples, proofs. In this preprint I discuss many examples of this phenomenon with emphasis on the ring of polynomials in a countably infinite number of variables in its many incarnations such as the representing object of the Witt vectors, the direct sum of the rings of representations of the symmetric groups, the free lambda ring on one generator, the homology and cohomology of the classifying space BU, … . In addition attention is paid to the phenomenon that solutions to universal problems (adjoint functors) tend to pick up extra structure.
Evidently Hazewinkel sees category theory as the right tool for the problem. So might Fraïssé amalgamation be given a category theoretic gloss? Here are a few attempts.
Trevor Irwin, Fraisse limits and colimits with applications to continua:
The classical Fraïssé construction is a method of taking a direct limit of a family of finite models of a language provided the family fulfills certain amalgamation conditions. The limit is a countable model of the same language which can be characterized by its (injective) homogeneity and universality with respect to the initial family of models. A standard example is the family of finite linear orders for which the Fraïssé limit is the rational numbers with the usual ordering.
We present this classical construction via category theory, and within this context we introduce the dual construction. This respectively constitutes the Fraïssé colimits and limits indicated in the title. We provide several examples.
We then present the projective Fraïssé limit as a special case of the dual construction, and as such it is the categorical dual to the classical (injective) Fraïssé limit. In this dualization we use a notion of model theoretic structure which has a topological ingredient. This results in the countable limit structures being replaced by structures which are zero-dimensional, compact, second countable spaces with the property that the relations are closed and the functions are continuous.
We apply the theory of projective Fraïssé limits to the pseudo-arc by first representing the pseudo-arc as a natural quotient of a projective Fraïssé limit. Using this representation we derive topological properties of the pseudo-arc as consequences of the properties of projective Fraïssé limits. We thereby obtain a new proof of Mioduszewski’s result that the pseudo-arc is surjectively universal among chainable continua, and also a homogeneity theorem for the pseudo-arc which is a strengthening of a result due to Lewis and Smith. We also find a new characterization of the pseudo-arc via the homogeneity property.
We continue with further applications of these methods to a class of continua known as pseudo-solenoids, and achieve analogous results for the universal pseudo-solenoid.
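To make the “amalgamation conditions” concrete in the standard example: two finite linear orders B and C sharing a common suborder A can always be merged into a single finite linear order into which both embed. Here is a sketch (the function and the tagging scheme are my own, not from any of the papers cited):

```python
def amalgamate(A, B, C):
    # B and C are lists (finite linear orders) containing the elements
    # of A in the same relative order; non-shared elements are tagged
    # so the two embeddings into the result D stay injective.
    D, i, j = [], 0, 0
    for a in A:
        while B[i] != a:
            D.append(('B', B[i])); i += 1
        while C[j] != a:
            D.append(('C', C[j])); j += 1
        D.append(('A', a)); i += 1; j += 1
    D.extend(('B', b) for b in B[i:])
    D.extend(('C', c) for c in C[j:])
    return D

# Amalgamate B = 1 < 2 < 3 and C = 0 < 2 < 5 over the suborder A = {2}:
D = amalgamate([2], [1, 2, 3], [0, 2, 5])
positions = {v: k for k, v in enumerate(D)}
# Both embeddings are order-preserving into D.
assert positions[('B', 1)] < positions[('A', 2)] < positions[('B', 3)]
assert positions[('C', 0)] < positions[('A', 2)] < positions[('C', 5)]
```

Iterating such amalgamations over all finite linear orders, and closing off, is what produces the Fraïssé limit (Q, <).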
Wieslaw Kubiś, Fraisse sequences – a category-theoretic approach to universal homogeneous structures:
We present a category-theoretic approach to universal homogeneous objects, with applications in the theory of Banach spaces and in set-theoretic topology.
Olivia Caramello, Fraïssé’s construction from a topos-theoretic perspective:
We present a topos-theoretic interpretation of (a categorical generalization of) Fraisse’s construction in model theory, with applications to countably categorical theories.
What appears to be missing from the increasingly intensive discussion is that the REF proposal provides not just the poison to kill independent academic research, it offers a syringe for injection, too. The latter is described in a few innocuous lines about the aims of the exercise:
“We will be able to use the REF to encourage desirable behaviours at three levels:
- THE BEHAVIOUR OF INDIVIDUAL RESEARCHERS WITHIN A SUBMITTED UNIT […]“
[http://www.hefce.ac.uk/pubs/hefce/2009/09_38/09_38.pdf , page 8]
The emphasis on inducing change in the behaviour of “individual researchers” is the result of a slow evolution of the RAE/REF. In 1996 and in 2001, the RAE went to great lengths to ensure that individual researchers could not be identified in the panels’ responses. This changed in 2008, when the percentages of the submission with each number of stars were published. So it was possible, in the case of a small unit, to work out exactly how many papers were internationally excellent, etc., and make a fairly good guess which papers they were.
The passage in the REF proposal concerned with “individual researchers” is much more worrying, especially since this time “the overall excellence profile will combine three sub-profiles – one for each of output quality, impact and environment – which will also be published.”
If “behaviour” just meant “doing good/bad/no research”, it would not be so terrible, but since extraneous things like “impact” now loom large, HoDs will be able to use this to warn staff off their preferred research and onto more “impactful” projects. There is a danger that, at department level, the REF might be translated into unheard-of levels of bullying and harassment.
Please sign the Number 10 Petition:
http://petitions.number10.gov.uk/REFandimpact/
Please also sign the UCU petition STAND UP FOR RESEARCH (even if you are not a UCU member; signing is open to everyone):
http://www.ucu.org.uk/standupforresearch
Non-Archimedean ordered Abelian groups exist: for example, the group of ordered pairs (x,y) of integers, with the left-lexicographic ordering, so that (x,y)<(a,b) if and only if either x<a, or else x=a and y<b.
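This example can be checked mechanically (a sketch of my own): no integer multiple of a = (0,1) ever reaches b = (1,0) in the lexicographic order.

```python
def lex_less(p, q):
    # Left-lexicographic order on pairs of integers.
    return p[0] < q[0] or (p[0] == q[0] and p[1] < q[1])

a, b = (0, 1), (1, 0)
# n * a = (0, n) for every n, and (0, n) < (1, 0) always:
assert all(lex_less((0, n), b) for n in range(1, 10**5))
# Moreover b - a = (1, -1) is again an upper bound of all the multiples
# of a, which is the argument below that such a group cannot be complete.
assert all(lex_less((0, n), (1, -1)) for n in range(1, 10**5))
```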
The main point of this article is to observe that an ordered
Abelian group is Archimedean if and only if it has a completion.
I do not know of a reference for this result, though I can well imagine that a reference exists.
The observation about completeness arises from considering that the field of real numbers is complete in two ways:

(1) as an ordered field, in that every nonempty set of reals with an upper bound has a least upper bound;

(2) as a valued field, in that every Cauchy sequence (with respect to the absolute value) converges.
In sense (2), the field of real numbers is just one example of a
complete field. The complex numbers compose another such field,
as do the p-adic completions of the field of rational numbers.
Every valued field has a completion.
In sense (1) however, the field of real numbers is unique.
I have encountered at least one mathematician who seemed not to
be aware of this, or to have forgotten it, having apparently
confused completeness of valued fields with completeness of
ordered fields.
Possibly the distinction between ordered fields and valued fields
is like that between induction and completion: a distinction that
may be overlooked in one’s early education and then never
returned to.
At the end of his book Calculus, Michael Spivak constructs
the field of real numbers and proves its uniqueness (up to
isomorphism). It was from Spivak’s book that, as a student, I
first learned of the uniqueness of the real field. Spivak
praises the “one truly first-rate idea” in its construction:
Dedekind’s notion of a cut. Yet Spivak is disparaging of
the “drudgery” of going through the details of the construction.
I revisited the construction recently, in a course on
non-standard analysis at the Nesin Mathematics Village. If the
construction of the real numbers was going to be drudgery, I
wanted to see what more general results could be found in the
process.
The Dedekind cut construction gives a completion to every
ordered set (that is, totally ordered set). Indeed, let A be
such a set, and if x is in A, let (x) be the set of elements of A
that are less than or equal to x. Such sets compose a basis for
a topology on A. A cut of A can be defined as a nonempty closed
set in this topology, except A itself, unless this has a greatest
element. Let c(A) be the set of cuts of A. Then A embeds in c(A), and c(A) is a complete ordered set: it is a completion of A.
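For a finite chain, the construction can be enumerated outright. The following sketch (my own, following the definitions above literally) computes the cuts of the three-element chain and finds exactly three of them, so the completion of a finite chain is, as expected, the chain itself:

```python
from itertools import combinations

A = [1, 2, 3]
# The basic open sets (x) = {y in A : y <= x}.
basis = [frozenset(y for y in A if y <= x) for x in A]

# The open sets are the unions of basic open sets (including the empty union).
opens = {frozenset()}
for r in range(1, len(basis) + 1):
    for combo in combinations(basis, r):
        opens.add(frozenset().union(*combo))

# The closed sets are the complements of the open sets.
closed = {frozenset(A) - u for u in opens}

# Cuts: the nonempty closed sets; A itself is kept here, since A has a
# greatest element.
cuts = [c for c in closed if c]
assert len(cuts) == len(A)  # one cut per element of the chain
```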
Again, an ordered Abelian group is Archimedean if, for any two
positive elements a and b, some multiple of a exceeds b. In
other words, a and b have a ratio in the sense of
Definition 4 of Book V of Euclid’s Elements. Indeed, the
positive part of an Archimedean ordered Abelian group would seem
to be just the set of magnitudes that have a ratio to some given
magnitude: the set is closed under addition, and under
subtraction of a lesser magnitude from a greater.
For any ordered Abelian group A, Archimedean or not, one can take
the completion of the underlying ordered set, and then extend the
definition of addition to the completion. One way to do this is
to define the sum of non-empty proper closed subsets X and Y of A
as the closure of the set of sums x+y, where x is in X and y is
in Y. Then c(A) becomes an Abelian monoid.
However, if A is not Archimedean, then A cannot be complete,
since if the set of integral multiples of a positive element a is
bounded above by b, then b-a is also an upper bound. In c(A),
the set of multiples of a does have a supremum, c; but then c+a =
c.
Among Archimedean ordered Abelian groups, the group of integers
is the only discrete example, and this is complete. If A is a dense Archimedean ordered Abelian group, then c(A) is again such a group, and it is complete.
If A and B are complete dense ordered Abelian groups, with
positive elements a and b respectively, then there is a unique
isomorphism from A to B taking a to b. The idea is that the
additive group of the ordered field of rational numbers embeds in A under the map taking 1 to a; then this map extends uniquely to the completion of the rationals, which is the additive group of the real field.
Consequently, for every real number a greater than 1, the
additive group of real numbers is isomorphic to the
multiplicative group of positive real numbers under a map taking
1 to a: this is the map commonly denoted by x |—> a^{x}. One
shows that multiplication distributes over addition on the
positive reals, then on all reals, and so the reals compose a
complete ordered field, which must be unique, because the
underlying group is unique as a dense complete ordered Abelian
group.
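Numerically, the claimed isomorphism can be sampled (a sketch, not a proof): the map x |—> a^{x} takes 1 to a, carries sums to products, and preserves order when a > 1.

```python
import math

def f(a, x):
    # The isomorphism x |--> a^x from (R, +) to the positive reals
    # under multiplication, taking 1 to a.
    return a ** x

a = 2.5
assert math.isclose(f(a, 1.0), a)            # 1 goes to a
for x, y in [(0.3, 1.7), (-2.0, 0.5), (4.2, -1.1)]:
    # Addition is carried to multiplication (up to floating-point error).
    assert math.isclose(f(a, x + y), f(a, x) * f(a, y))
# The map is order-preserving, since a > 1.
assert f(a, 0.3) < f(a, 1.7)
```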