## Commented out

*December 17, 2007*

*Posted by Alexandre Borovik in Uncategorized.*


I want to add that I plan to touch on, in our blog, a theme which was commented out from our grant proposal as excessively controversial:

% \subsection*{And the last, but not least\dots}
%
% We shall try to sort out the mess of misunderstanding surrounding
% the concept of infinity in literature on mathematical education.

A language question: is the expression “to comment out” used outside of TeX and programming communities?

The issue of infinity in education is interesting in view of Peter McBurney’s comment on my post “Case Study III: Computer science: The bestiary of potential infinities”:

Your example from Computer Science reminds me of something often forgotten or overlooked in present-day discussions of infinite structures: that some of the motivation for the study of the infinite in mathematics in the 19th and early 20th centuries came from physics where (strange as it may seem to a modern mathematician or a computer scientist) the infinite was used as an approximation to the very-large-finite.

I had a personal learning experience related to that issue. It so happened that I was a guinea pig in a bold educational experiment: at my boarding school, my lecturer in mathematics attempted to build the entire calculus in terms of finite elements. It sounded like a good idea at the time: physicists formulate their equations in terms of finite differences, working with finite elements of volume, mass, etc.; then they take the limit as Δx → 0 and replace Δx by the differential dx, etc., getting a differential equation instead of the original finite difference equation. After that, numerical analysts solve this equation by replacing it with an equation in finite differences. The question “Why bother with the differential equations?” is quite natural. Hence my lecturer bravely started to re-build, from scratch, calculus in terms of finite differences. Even braver was his decision to test it on schoolchildren.

Have you ever tried to prove, within the ε–δ language for limits, the continuity of a function at an arbitrary point by a *direct explicit computation of* δ *in terms of* ε?

The scale of the disaster became apparent only when my friends and I, revising for exams, started to actually read the mimeographed lecture notes. We realized very soon that we had stronger feelings about mathematical rigor than our lecturer possibly had (or was prepared to admit, being a very good and practically minded numerical analyst); perhaps my teacher could be excused because it was not possible to squeeze the material into a short lecture course without sacrificing rigor. So we started to recover missing links, search through books for proofs, etc. The ambitious project deflated, like a pricked balloon, and started to converge to a good traditional calculus course. The sheer excitement of the hunt for another error in the lecture notes still stays with me.
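To give a sense of what such a direct computation involves, here is the standard ε–δ argument for one simple case (the function f(x) = x² is my choice of illustration; the lecture notes may well have used a different one):

```latex
% Continuity of f(x) = x^2 at an arbitrary point a:
% given \varepsilon > 0, produce \delta explicitly in terms of \varepsilon.
\[
  |x^2 - a^2| = |x - a|\,|x + a| \le |x - a|\bigl(|x - a| + 2|a|\bigr),
\]
so the explicit choice
\[
  \delta = \min\!\left(1,\ \frac{\varepsilon}{1 + 2|a|}\right)
\]
gives, whenever $|x - a| < \delta$,
\[
  |x^2 - a^2| < \delta\,(1 + 2|a|) \le \varepsilon.
\]
```

Already for x² the δ depends on the point a; for anything less tame, the explicit bookkeeping grows quickly.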

And I learned to love actual infinity — it makes life so much easier.

My story, however, has a deeper methodological aspect. Vladimir Arnold forcefully stated in one of his books that it is wrong to think of finite difference equations as approximations of differential equations. It is the differential equation which approximates the finite difference laws of physics; it is the result of taking an asymptotic limit as the step goes to zero. Being an approximation, it is easier to solve and study.

In support of his thesis, Arnold refers to a scene almost everyone has seen: old tires hanging on sea piers to protect boats from bumps. If you control a boat by measuring its speed and distance from the pier, and select the acceleration of the boat as a continuous function of the speed and distance, you can come to a complete stop precisely at the wall of the pier, but only after *infinite time*: this is an immediate consequence of the uniqueness theorem for solutions of differential equations. To complete the task in sensible time, you have to allow your boat to gently bump into the pier. The asymptotic limit at zero is not always an ideal solution in the real world. But it is easier to analyze!
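A minimal one-dimensional version of that uniqueness argument (the specific control law below is my own illustrative choice, not Arnold’s):

```latex
\[
  \dot{x} = -kx, \qquad x(0) = x_0 > 0, \qquad k > 0,
\]
has the unique solution
\[
  x(t) = x_0 e^{-kt} > 0 \quad \text{for every finite } t,
\]
so the boat approaches the pier asymptotically but never reaches it.
If it arrived at $x = 0$ at some finite time $t_0$, its trajectory and the
constant solution $x \equiv 0$ would both pass through the point
$(t_0, 0)$ while differing for $t < t_0$, contradicting uniqueness.
```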

[Here, I cannibalise a fragment from my book.]

I love the idea that the “infinite was used as an approximation to the very-large-finite.” That’s how I view it!

I knew there was a reason why I really dislike Numerical Analysis!

“We realized very soon that we had stronger feelings about mathematical rigor…” I think this is due to the course content too, and to how much of it nowadays really depends on computers. When it comes to giving a theorem about convergence, it is left to the appendix because “it was too frightening and might scare us”. Pfft. Then I wonder why I hate this course.

I risk offending applied mathematicians, but they seem to say: “I will leave the proof of such and such theorem to the pure mathematicians”. \end{aside!}

[…] of Infinity According to Vladimir Arnold (see my previous post), these are manifestations of infinity in the real […]

I never knew Arnold said that. Thanks a lot! It’s very much how I feel about physics.

“commented out of” – 308,000 google hits (mostly programmers but not much tex)

“commented out from” – 18,100 google hits (top hit, your site)

[…] post from the blog A Dialogue on Infinity, Alexandre Borovik writes about an experience that one of his […]

“We realized very soon that we had stronger feelings about mathematical rigor than our lecturer possibly had (or was prepared to admit, being a very good and practically minded numerical analyst); perhaps my teacher could be excused because it was not possible to squeeze the material into a short lecture course without sacrificing rigor.”

Sacrificing the rigour might, or should, have been part of the point, in an attempt to develop the intuition first; cf. an essay of Poincaré on intuition and logic. Some physicists do start teaching basic analysis using finite approximations; it is only later that rigorous definitions involving infinity are given.

I’ve never met a physicist with a sense of mathematical rigour. The whole discipline of physics is premised on the making of grand, sweeping claims (called “laws of nature”) which are invariably contradicted by real-life details in every particular case. Physics is a theory based on abstraction away from these confounding details. The same problem bedevils economics.

On the question (comment #7) about development of intuition: Surely the point of rigour in analysis is to demonstrate to us that our (raw) intuition is often dead wrong. Nothing could be more intuitive, for instance, than that the infinite limit of a converging sequence of continuous functions is also continuous, or that a function cannot be both everywhere continuous and nowhere differentiable, or that it is impossible to fill a 2-dimensional space with a 1-dimensional curve.
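For the first of these, the standard counterexample (my choice of sequence, not the commenter’s) pins the failure down:

```latex
\[
  f_n(x) = x^n \ \text{on } [0,1]:
  \qquad
  \lim_{n \to \infty} f_n(x) =
  \begin{cases}
    0, & 0 \le x < 1,\\
    1, & x = 1.
  \end{cases}
\]
```

Each $f_n$ is continuous, yet the pointwise limit is not; uniform convergence is precisely the extra hypothesis that restores continuity of the limit.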

Rigour and intuition are polar opposites, not complementary, at least in any math involving the infinite.

Peter wrote:

“Rigour and intuition are polar opposites, not complementary, at least in any math involving the infinite.”

I see it differently: rigour is a foundation of intuition.

Peter wrote:

Surely the point of rigour in analysis is to demonstrate to us that our (raw) intuition is often dead wrong. Nothing could be more intuitive, for instance, than that the infinite limit of a converging sequence of continuous functions is also continuous, or that a function cannot be both everywhere continuous and nowhere differentiable, or that it is impossible to fill a 2-dimensional space with a 1-dimensional curve.

I would say those are examples of perverted intuition, which has lost touch with physics or geometry. It’s pretty intuitive that you can stuff a box full of very thin threads, and that if you bend a wire a lot you will get a scratchy, saw-like shape. Of course you have examples of bad intuition, like some probability tricks, but they mostly depend not on lack of rigor but on bad understanding of the underlying concepts. Of course rigor by itself often helps, kind of like a syntactic check.

I think that an infinite sequence of continuous functions having a discontinuous limit is very intuitive. A nowhere differentiable continuous function is much less intuitive, as are space filling curves.

I had one of my students (a CS major) ask me this semester why we bother with Turing machines, since real computers have only a finite amount of memory. I tried to explain how we frequently approximate finite things with infinite things, but I don’t think he really got it.
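The point can be made concrete in a few lines: a Turing machine’s tape is “infinite” only in the sense that the simulation never runs out of cells. A dictionary-backed tape (a sketch of my own, not anything from the thread) allocates exactly the finitely many cells a run actually visits:

```python
from collections import defaultdict

def run_turing_machine(rules, tape_input, state="start", steps=1000):
    """Simulate a Turing machine whose 'infinite' tape is a dict:
    only the finitely many visited cells ever exist in memory."""
    tape = defaultdict(lambda: "_")  # blank symbol everywhere else
    for i, sym in enumerate(tape_input):
        tape[i] = sym
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        # rules: (state, symbol) -> (new_state, new_symbol, move)
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == "R" else -1
    return state, "".join(tape[i] for i in sorted(tape))

# A toy machine that flips 0s and 1s, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
state, result = run_turing_machine(flip, "0110")
# state == "halt", result == "1001_"
```

The unbounded tape is the mathematical idealization; the dict is the very-large-finite standing in for it, which is the approximation running in the other direction from the one in the original post.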