
Model Theory and Category Theory July 29, 2008

Posted by dcorfield in Uncategorized.

David Kazhdan has some interesting things to say about model theory, and in particular its relationship to category theory, in his Lecture notes in Motivic Integration.

In spite of it successes, the Model theory did not enter into a “tool box” of mathematicians and even many of mathematicians working on “Motivic integrations” are content to use the results of logicians without understanding the details of the proofs.

I don’t know any mathematician who did not start as a logician and for whom it was “easy and natural” to learn the Model theory. Often the experience of learning of the Model theory is similar to the one of learning of Physics: for a [short] while everything is so simple and so easily reformulated in familiar terms that “there is nothing to learn” but suddenly one find himself in a place when Model theoreticians “jump from a tussock to a hummock” while we mathematicians don’t see where to “put a foot” and are at a complete loss.



Same but multifaceted July 11, 2008

Posted by Alexandre Borovik in Uncategorized.

Continuing the topic of “sameness”, it is interesting to compare the behaviour of two familiar objects: the field of real numbers \mathbb{R} and the field of complex numbers \mathbb{C}.

\mathbb{C} is uncountably categorical, that is, it is uniquely described, in the language of first order logic, among the fields of the same uncountable cardinality.

In the case of \mathbb{R}, its elementary theory, that is, the set of all closed first order formulae that are true in \mathbb{R}, has infinitely many models of cardinality continuum 2^{\aleph_0}.

In naive terms, \mathbb{C} is rigid, while \mathbb{R} is soft, spongy and shape-shifting. However, \mathbb{R} has only the trivial automorphism (an easy exercise), while \mathbb{C} has a huge automorphism group, of cardinality 2^{2^{\aleph_0}} (this also follows with relative ease from basic properties of algebraically closed fields). In naive terms, this means that there is only one way to look at \mathbb{R}, while \mathbb{C} can be viewed from an incomprehensible variety of different points of view, most of them absolutely transcendental. Actually, there are just two comprehensible automorphisms of \mathbb{C}: the identity automorphism and complex conjugation. It appears that the construction of all other automorphisms involves the Axiom of Choice. When one looks at what happens at the model-theoretic level, it appears that the “uniqueness” and “canonicity” of an uncountable structure are directly linked to its multifacetedness. I am still hunting for appropriate references for this fact. Meanwhile, I got the following e-mail from a model theorist colleague, Zoe Chatzidakis:

Models of uncountably categorical theories behave really like vector spaces: if inside a model M you take a maximal independent set X of elements realizing the generic type, and take any permutation of X, it extends to an automorphism of the model. So, if M is of size \kappa > \aleph_0, then any basis has size \kappa, and its automorphism group has size 2^\kappa.

I don’t know a reference, but it should be in any model theory book which talks about strongly minimal sets. Or maybe in the paper by ??? Morley ??? which shows that you have a notion of dimension and so on? I.e., that \aleph_1 categorical theories and strongly minimal sets are the same.

It is really a well-known result, so you probably don’t need a reference if you cite it in a paper.

Facing Eternity July 4, 2008

Posted by Alexandre Borovik in Uncategorized.

Mikhail Zlatkovsky. Facing Eternity.

Under the changed title “Coat Star”, the cartoon won first prize at the Ken Sprague Fund competition at Earthworks 2008.

Ultraproducts of fields, II June 28, 2008

Posted by Alexandre Borovik in Uncategorized.

I continue my post on ultraproducts. So, we want to understand in what sense an ultraproduct of finite fields F_i of unbounded order is a limit at infinity of finite fields. The answer now should be obvious: since ultraproducts are residue fields for maximal ideals in the Cartesian product

R = \prod F_i,

the topology in question should be the canonical (Zariski) topology on the spectrum \mbox{Spec}(R) of the ring R. It instantly follows from the description of ideals and maximal ideals in R that this is the Stone topology on the set of ultrafilters on \mathbb{N}, or, what is the same, the Čech-Stone compactification \beta\mathbb{N} of the set \mathbb{N} with the discrete topology. Therefore the answer is: an ultraproduct is a limit in the Čech-Stone compactification of a discrete countable set.

I have to admit that at this point I reached the limits of my knowledge of set-theoretic topology and had to dip into Wikipedia. It turns out that \beta\mathbb{N}, and even more so its non-principal part {\mathbb{N}}^* = \beta{\mathbb{N}} \setminus {\mathbb{N}}, is characterised by some unique properties: if the continuum hypothesis holds, then {\mathbb{N}}^* is the unique Parovicenko space, up to isomorphism.

According to Wikipedia, a Parovicenko space is a topological space X satisfying the following conditions:

  • X is compact Hausdorff;
  • X has no isolated points;
  • X has weight c, the cardinality of the continuum (this is the smallest cardinality of a base for the topology);
  • every two disjoint open Fσ subsets of X have disjoint closures;
  • every nonempty Gδ subset of X has non-empty interior.

As you can see, {\mathbb{N}}^* is uniquely characterised by very natural properties.

It is yet another manifestation of one of the most paradoxical properties of mathematical infinity: the canonicity of workable constructions in the infinite domain.

NHS invented a new type of infinity… June 21, 2008

Posted by Alexandre Borovik in Uncategorized.

My posts are likely to become shorter and sparser — as a result of a work trauma, I have developed a medical condition (see a photo) which makes typing very difficult, and I depend on the kind help of my wife with all my typing needs. On the bright side, my experience gave me a new understanding of infinity. From a mathematical viewpoint, indefinitely long waiting lists for treatment on the NHS (National Health Service) were not something new; they were just a special case of the potentially infinite sequence

Week 1, Week 2, Week 3, … etc.

going on and on in a very old-fashioned way. The real contribution of the NHS to the mathematics of infinity is that they make patients wait (indefinitely, again) to be included on a waiting list. It is the infinity of the natural numbers enhanced by an additional constraint: you are not allowed to start counting.

A letter from a student June 17, 2008

Posted by Alexandre Borovik in Uncategorized.

Hello Professor,

I hope everything is good and that you recovered from the accident in your

I just wanted to share some personal thoughts. Suppose we have a system that is discrete and finite. We have the natural numbers {1,2,3,…,T}, where T is the symbol for the “biggest natural number”. We will never have a value for T, but we accept that it is a fixed natural number. We can also include 0 in our system. How much mathematics can we do?

We could define the usual addition and multiplication. But we will have a problem when the result is “greater than T”, since nothing “greater than T” exists…

Suppose for example that T=10. Then we have 2+3=5, 4+5=9, 5+5=10. But what about 5+7? We could just define 5+7=err (error), just like small calculators do when they reach their limit. But err is not in our system… We could work modulo T, OR we could just say that 5+7=10=T, and 8+9=T, and 7+8=10=T, etc.

(If we choose the last option then obviously 1+T=T and T+T=T. So T behaves much like the infinity that we learned about at school. But the big difference is that in our system T is a fixed natural number!)
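The last option the letter describes is what hardware designers call saturating arithmetic. A minimal sketch, assuming T = 10 as in the example (the function names are chosen here for illustration):

```python
T = 10  # the "biggest natural number" of the example

def sat_add(a, b):
    """Addition in {0, 1, ..., T}: any result above T saturates to T."""
    return min(a + b, T)

def sat_mul(a, b):
    """Multiplication in {0, 1, ..., T}, saturating at T."""
    return min(a * b, T)

# The examples from the letter:
print(sat_add(2, 3))   # 5
print(sat_add(5, 7))   # 10, i.e. T
print(sat_add(1, T))   # T behaves like the school "infinity": 1 + T = T
print(sat_add(T, T))   # T + T = T
```

Note that saturation, unlike working modulo T, destroys cancellation: from 5+7 = 6+7 = T one cannot conclude 5 = 6.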

I just cannot see why we NEED the infinite to do mathematics. Is it a matter of convenience? Is it just to make things simpler? If this is the case, then we should accept that infinite and continuous entities are just tools, ideas, symbols that make our life easier. But we should remember that IDEAS themselves are FINITE, since they live in the finite world of our brains!

I think the whole problem is more about aesthetics. I think that someone can accept that only discrete and finite things exist and, at the same time, that this belief does not destroy all the beautiful mathematics that has been created until now.

While I was searching the Internet I found this PDF document, which I attached and found amusing. I also found other people expressing similar views (against the infinite), for example Alexander Yessenin-Volpin.

Ultraproducts of fields, I June 16, 2008

Posted by Alexandre Borovik in Uncategorized.

My immediate research interests focus more and more on the interplay between the finite and the infinite in algebra (at least this is where my chats with my PhD student drift). In particular, I frequently have to use a specific construction, the ultraproduct of fields. It is pretty sublime in the sense of David Corfield and leads to the appearance of very interesting canonical objects.

We start with a family of fields F_i, i \in I. For simplicity, assume that all the fields are finite of unbounded order and that the index set I is just the set of natural numbers \mathbb{N} (actually, this is the case most interesting to me). We form the Cartesian product R of the F_i: this is just the set of infinite strings

\{ (f_1, f_2, \dots ) \mid f_i \in F_i \}

with componentwise operations of addition and multiplication. In what follows, we shall make frequent use of the zero set of a string f = (f_1, f_2, \dots), that is, the set of indices where the string's components are zeroes:

zero(f) = \{ i \in {\mathbb{N}} \mid f_i =0\}.
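A finite-prefix sketch of these definitions (an illustration only: a genuine element of R is an infinite string; here the first few F_i are taken to be the prime fields GF(2), GF(3), GF(5), GF(7), represented as integers modulo p):

```python
primes = [2, 3, 5, 7]  # F_i = GF(p_i), represented as integers mod p_i

def add(f, g):
    """Componentwise addition in the product ring."""
    return tuple((a + b) % p for a, b, p in zip(f, g, primes))

def mul(f, g):
    """Componentwise multiplication in the product ring."""
    return tuple((a * b) % p for a, b, p in zip(f, g, primes))

def zero(f):
    """zero(f) = the set of indices where the component is 0."""
    return {i for i, a in enumerate(f) if a == 0}

f = (0, 1, 2, 0)
g = (1, 0, 3, 6)
# Since each F_i is a field (so has no zero divisors), a product of
# components vanishes exactly where one of the factors does:
# zero(f * g) = zero(f) ∪ zero(g).
print(zero(mul(f, g)))  # {0, 1, 3}
```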

Obviously, R is a commutative ring with unity. Let us look at its ideals. One can easily see that an ideal I in R is uniquely determined by the set

zero(I) = \{zero(f) \mid f \in I\};

indeed, the non-zero components of a string f \in I can be changed arbitrarily, without moving f outside of I, by multiplying by an appropriate string of invertible elements. One can also instantly see that zero(I) is a filter on \mathbb{N}, that is, a collection of non-empty subsets of \mathbb{N} closed under taking finite intersections and supersets, and, moreover, that the correspondence

I \mapsto zero(I)

is a one-to-one correspondence between proper ideals in R and filters on \mathbb{N} which preserves inclusion of ideals and inclusion of filters. Therefore, maximal ideals in R correspond to maximal filters on \mathbb{N}; the former and the latter exist by Zorn's Lemma, one of the equivalent formulations of the Axiom of Choice. If now I is a maximal ideal in R, the fact that the factor ring R/I is a field and, in particular, has no zero divisors, translates into the fact that the corresponding maximal filter \mathcal{F} is an ultrafilter: it has the property that, for any subset X \subseteq \mathbb{N}, either X or its complement {\mathbb{N}} \smallsetminus X belongs to \mathcal{F}.

Given an ultrafilter \mathcal{F} on \mathbb{N}, the ultraproduct F = \prod F_i/\mathcal{F} is nothing more than the corresponding residue field R/I. There are obvious principal (ultra)filters on \mathbb{N}: they consist of all subsets containing a given element i \in \mathbb{N}; obviously, the corresponding ultraproduct is just the original field F_i.
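For a principal ultrafilter this can be checked very concretely: the maximal ideal consists of the strings vanishing at the chosen index i, and the quotient map R \to R/I is just projection to the i-th coordinate. A finite-prefix sketch, with the first few F_i taken to be GF(2), GF(3), GF(5) (represented as integers mod p):

```python
primes = [2, 3, 5]  # F_0 = GF(2), F_1 = GF(3), F_2 = GF(5)
i = 1               # the chosen index; the principal ultrafilter at i

def add(f, g):
    """Componentwise addition in the product ring."""
    return tuple((a + b) % p for a, b, p in zip(f, g, primes))

def proj(f):
    """The quotient map R -> R/I: projection to the i-th coordinate."""
    return f[i]

f = (1, 2, 4)
g = (0, 2, 3)
# proj is a ring homomorphism onto F_i = GF(3); its kernel is exactly
# the maximal ideal I = {f in R : f_i = 0}:
print(proj(add(f, g)) == (proj(f) + proj(g)) % 3)  # True
print(proj((1, 0, 2)))  # 0 -- the string (1, 0, 2) lies in the ideal
```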

Non-principal filters do exist. One very interesting non-principal filter on \mathbb{N} is the Fréchet filter, consisting of all subsets with finite complements.

But if we take an ultrafilter containing a non-principal filter (one exists by Zorn's Lemma), the corresponding ultraproduct F has many marvellous properties. In particular, if all the F_i have different characteristics, F turns out to be a field of characteristic zero (I leave the proof of this fact as an exercise for the reader).
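A sketch of the idea behind the exercise, with F_i taken to be the prime fields GF(p_i) over the increasing sequence of primes: the string n \cdot 1 = (n mod p_1, n mod p_2, \dots) vanishes exactly at the indices i with p_i dividing n, which is a finite set; so n \cdot 1 avoids the maximal ideal of any ultrafilter containing the Fréchet filter of cofinite sets, and no multiple of 1 is zero in the ultraproduct.

```python
def first_primes(k):
    """Return the first k primes by trial division (fine for small k)."""
    primes = []
    n = 2
    while len(primes) < k:
        if all(n % p for p in primes):
            primes.append(n)
        n += 1
    return primes

primes = first_primes(100)  # F_i = GF(primes[i])

def zero_set_of_n(n):
    """zero(n·1): indices i with n ≡ 0 (mod p_i)."""
    return {i for i, p in enumerate(primes) if n % p == 0}

# For n = 60 = 2·2·3·5, only the coordinates of 2, 3, 5 vanish:
print(zero_set_of_n(60))  # {0, 1, 2}
# However long the prefix, n·1 vanishes on at most log2(n) coordinates,
# never on a cofinite set -- hence characteristic zero.
```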

In the next instalment of this post, I will discuss the meaning of a frequently used assertion that an ultraproduct F is a limit at infinity of finite fields F_i.

The Sublime June 11, 2008

Posted by dcorfield in Uncategorized.

There’s an interesting post – Whatever Happened to Sublimity? – at the blog Siris. It includes a quotation from Edmund Burke

But let it be considered that hardly any thing can strike the mind with its greatness which does not make some sort of approach toward infinity; which nothing can do while we are able to perceive its bounds; but to see an object distinctly, and to perceive its bounds, are one and the same thing. A clear idea is, therefore, another name for a little idea. (A Philosophical Inquiry into the Origin of Our Ideas of the Sublime and the Beautiful, Part II, Section V.)

A natural question to ask, then, is where do we encounter the sublime in mathematics? And an obvious answer, you might think, would be the mathematical infinite.

Joseph Dauben has an interesting section in his book – Georg Cantor: his mathematics and philosophy of the infinite, Harvard University Press, 1979 – on how Cantor, receiving such discouragement from his mathematical colleagues, found an audience in certain thinkers within the Catholic church. Where earlier in the nineteenth century any attempt to describe a completed infinity was viewed as a sacrilegious attempt to circumscribe God, some theologians were open to Cantor’s new hierarchy of infinities, with its unreachable Absolute Infinite leaving room for the divine.

Personally, I find that set theory has rarely evoked in me a sense of the sublime. On the other hand, the following comment by Daniel Davis does:

Behrens and Lawson use stacks, the theory of buildings, homotopy fixed points, the above model category, and other tools to make it possible to use the arithmetic of Shimura varieties to help with understanding the stable homotopy groups of spheres.

There’s plenty about the infinite in that statement, it’s true, but there’s so much more to it than that.

Brandon Watson, the blogger at Siris, writes

…what often strikes me when I look around at the philosophical scene today is how foreign this has all become. There are a few exceptions, but sublimity has vanished as a serious concern.

With regard to the philosophy of mathematics in particular I couldn’t agree more.

Axiom of Choice, quotes June 1, 2008

Posted by Alexandre Borovik in Uncategorized.
  • “The Axiom of Choice is obviously true, the Well-Ordering Theorem obviously false, and who can tell about Zorn’s Lemma?” — Jerry Bona. (This is a joke: although the Axiom of Choice, the well-ordering principle, and Zorn’s Lemma are all mathematically equivalent, most mathematicians find the Axiom of Choice intuitive, the well-ordering principle counterintuitive, and Zorn’s Lemma too complex for any intuition.)
  • “The Axiom of Choice is necessary to select a set from an infinite number of socks, but not an infinite number of shoes.” — Bertrand Russell. (The observation here is that one can define a function to select from an infinite number of pairs of shoes by stating, for example, “choose the left shoe”. Without the Axiom of Choice, one cannot assert that such a function exists for pairs of socks, because left and right socks are (presumably) identical to each other.)
  • “The axiom gets its name not because mathematicians prefer it to other axioms.” — A. K. Dewdney. (This quote comes from the famous April Fools’ Day article in the “Computer Recreations” column of Scientific American, April 1989.)


Induction and recursion II May 28, 2008

Posted by David Pierce in Uncategorized.

Formal logic provides examples of structures that allow proof by induction, but not necessarily definition by recursion. Let us consider, for example, the propositional logic that is apparently due to Łukasiewicz. We first define the (well-formed) formulas of the logic. We start with some set of so-called propositional variables. Then the set of formulas is the smallest set of strings that contains these variables and is closed under certain formation rules, namely:

  1. the singulary operation converting a string S to its negation, ~S;
  2. the binary operation converting a pair (S, T) of strings to the implication, (S → T).

The set of theorems of the logic is the smallest set of formulas that contains all of the axioms and is closed under a certain rule of inference. The axioms are the formulas of any of the three forms:

  1. (F → (G → F))
  2. ((F → (G → H)) → ((F → G) → (F → H)))
  3. ((~F → ~G) → (G → F))

Here F and G stand for formulas. The rule of inference is detachment (or modus ponens), the binary operation that converts a pair (F, (F → G)) of formulas into the formula G. As stated, this is only a partial operation; we can make it total by having it convert (F, H) to F whenever H does not have the form (F → G) for some formula G.
The “inductive” definition of theorems is thus similar to the inductive definition of formulas. In each case, we start with a set and close under one or more operations. Immediately, proof by induction is possible on the set so defined. For example, we can prove inductively that each formula features as many left brackets as right brackets. A feature of theorems that is proved by induction will be given in a moment.
However, unlike the formation rules for formulas, our rule of inference for theorems is not obviously well-defined. To see that it is well-defined, we must establish “unique readability” of formulas, namely:

  1. each formula is exactly one of the following: a variable, a negation, or an implication;
  2. the formation rules are injective.

Here injectivity of the formation rules corresponds to injectivity of the successor-operation on the set of natural numbers. The partition of the set of formulas into variables, negations, and implications corresponds to the partition of the set of natural numbers into the successors and the initial number (zero or one, as one prefers).
Unique readability of formulas allows recursive definitions. Indeed, the definition of the rule of detachment can be understood as recursive: If we denote this operation by D, then we have

  1. D(F, P) = F for all propositional variables P;
  2. D(F, ~G) = F;
  3. D(F, (F → G)) = G;
  4. D(F, (H → G)) = F if H is not F.
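The totalized rule of detachment can be sketched directly, with formulas represented as nested tuples (a hypothetical encoding chosen for the illustration: a propositional variable is a string, a negation is ('~', F), and an implication is ('->', F, G)):

```python
def D(F, H):
    """Detachment made total: D(F, (F -> G)) = G, and D(F, H) = F
    whenever H is not an implication with antecedent F."""
    if isinstance(H, tuple) and H[0] == '->' and H[1] == F:
        return H[2]          # genuine modus ponens
    return F                 # the default case making D total

p, q = 'p', 'q'
print(D(p, ('->', p, q)))    # 'q'  (modus ponens)
print(D(p, q))               # 'p'  (second argument a variable)
print(D(p, ('~', q)))        # 'p'  (second argument a negation)
print(D(p, ('->', q, p)))    # 'p'  (antecedent is not F)
```

Note that D(p, q) and D(p, ('~', q)) both return 'p': this is the non-injectivity of detachment discussed below.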

Another example of a recursively defined function on formulas is a truth-assignment. Consider the set {true, false} as the two-element field {1, 0}. Given a function φ from the set of propositional variables into this field, we have a unique function Φ on the set of all formulas that agrees with φ on the variables, that takes ~F to 1 + Φ(F), and that takes (F → G) to 1 + Φ(F) + Φ(F)Φ(G).
By induction, every theorem takes the value 1 under every truth-assignment. But the truth-assignment is not recursively defined on the set of theorems: it is recursively defined on the set of formulas. Since the rule of detachment is not injective, we do not automatically have recursively defined functions on the set of theorems.
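The recursively defined truth-assignment Φ can also be sketched, with formulas represented as nested tuples (a hypothetical encoding for the illustration: a variable is a string, ('~', F) a negation, ('->', F, G) an implication), together with a check that an instance of the first axiom scheme takes the value 1 under every truth-assignment:

```python
def Phi(formula, phi):
    """Extend phi (a dict of variable values in {0, 1}) to all formulas:
    Φ(~F) = 1 + Φ(F) and Φ(F -> G) = 1 + Φ(F) + Φ(F)Φ(G), mod 2."""
    if isinstance(formula, str):          # a propositional variable
        return phi[formula]
    if formula[0] == '~':                 # negation
        return (1 + Phi(formula[1], phi)) % 2
    _, F, G = formula                     # implication (F -> G)
    f, g = Phi(F, phi), Phi(G, phi)
    return (1 + f + f * g) % 2

# Axiom scheme 1, (F -> (G -> F)), evaluates to 1 under every
# truth-assignment, as the induction on theorems requires:
axiom1 = ('->', 'p', ('->', 'q', 'p'))
print(all(Phi(axiom1, {'p': a, 'q': b}) == 1
          for a in (0, 1) for b in (0, 1)))  # True
```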
As far as I know, we do not have an efficient algorithm to determine whether a given formula is indeed a theorem: this lack of an algorithm would appear to be connected to the non-injectivity of the rule of detachment. A formula carries within itself the method of its construction; but a theorem does not carry within itself its proof, and indeed it will have infinitely many proofs.