# Fuzziness and falsehood in mathematics

Mathematics is the most precise and exact of the sciences, yet its history shows many examples of serious errors. How can we limit the risk of producing more?

By wanting to go fast and as far as possible, mathematicians take risks that they do not always control, and which sometimes lead them to state false theorems, or true theorems supported by false proofs. With the increasing complexity of the areas covered, new methods are becoming necessary to track down errors.

# A reproducibility crisis

For the past fifteen years, various leading scientific journals have been sounding the alarm about the problem of reproducibility in science. In May 2016, the British journal Nature published a study in which 70% of the researchers interviewed said they had at least once failed to reproduce a published result that interested them.

Mathematics did not seem concerned; yet verifying a published proof is a form of “reproduction”. When a researcher proposes a result and gives his proof, he claims to have an argument leading with certainty to the conclusion; to read and check his assertions is to reproduce this mental experiment. The problem of reproducibility therefore concerns mathematics, and we will see that it is particularly delicate there.

One might think that any competent mathematician could read any mathematical article and verify the proofs presented there. This is not so, for several reasons.

First of all, reading a mathematical article often requires a good knowledge of the concepts, definitions, previous results and methods of the subject it deals with. Since the breadth of mathematical fields grew dramatically in the twentieth century, each researcher is proficient in only a small proportion of the thousands of mathematical papers published each year.

There is, however, a deeper reason that makes verification difficult even for a specialist: a proof is only a more or less detailed indication of an argumentative path, sometimes with important passages omitted or dangerous shortcuts taken.

To check a proof step by step, you must: be very well informed about the field; be ready to work at length on certain steps of the reasoning that require unexplained calculations or the treatment of cases left undetailed; sometimes also plan computer checks, because doing without them is inconceivable; and, in some cases, be as brilliant as the author, for in mathematics there are exceptional researchers with above-average abilities who are unaware that it is necessary to explain points which seem obvious to them … and which are obvious only to them.

From this situation it results that many errors are made, published and sometimes overlooked for quite a long time. Several books have been devoted to errors in mathematics, but significant progress is also being made in checking and correcting the most important results.

# Mechanically verifiable: formal proofs

There are several notions of “formal proof” which, once written down, can be verified mechanically, that is to say by a computer program. Unfortunately, these formal proofs are not what mathematicians write in their papers, as doing so would take too long and would bury the important ideas in a jumble of logical steps and micro-calculations.

For David Ruelle, a mathematical physicist at the IHES, near Paris, “human mathematics consists in talking about formal proofs, not actually writing them down. The mathematician asserts quite convincingly that there is a certain formal text leading to the results he proposes, and that it would be possible to make it fully explicit, but he does not do so because it would be difficult work ill-suited to the human brain, which is not good at checking that a long formal text is free from errors. Human mathematics is a kind of dance around an unwritten formal text, which, if written, would be unreadable.”

The problem of mathematical errors is not new, especially since the notions of formal proof on which everyone can agree (we will come back to this) have only existed since the beginning of the twentieth century. Throughout its history, mathematics has been confronted with beliefs in inaccurate results, or with the illusion of holding a statement to be proved which in reality is not, owing to missing or false elements in the reasoning, or sometimes to poorly understood concepts such as number or function.

Here are a few quick examples from the greatest mathematicians that show that no one achieves perfection in mathematics.

In Greece, in the 6th century BC, the mysterious Pythagoras maintained that the ratio of two magnitudes is always equal to a ratio of two whole numbers. The proof discovered in his time of the incommensurability of the diagonal of a square with its side, or in modern parlance of the irrationality of √2, seems to have troubled the members of his school. Note, however, that historians are unanimous in considering that correcting the Pythagorean error was one of the most important steps in the progress of mathematics.
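The classic parity argument behind this discovery can be sketched in modern notation (a standard reconstruction, of course, not the Pythagoreans' own wording):

```latex
% Suppose \sqrt{2} = p/q with the fraction p/q in lowest terms. Then
p^2 = 2q^2 \;\Rightarrow\; p \text{ is even, say } p = 2r
\;\Rightarrow\; 4r^2 = 2q^2 \;\Rightarrow\; q^2 = 2r^2
\;\Rightarrow\; q \text{ is even,}
% contradicting the assumption that p/q was in lowest terms.
```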

# Everything is almost perfect in Euclid

About two centuries later, Euclid left us a treatise, the Elements, of an astonishing maturity and rigor: even today we can read it, understand it and approve the gist of its proofs. Yet careful readers of Euclid have spotted a series of omissions, for example in his formulation of the axioms of geometry, and imprecisions that render some of his proofs false or incomplete. The use of figures leads him to arguments that are valid only for a particular arrangement of the elements involved. Sometimes, too, he takes as obvious properties that today we take care to prove, such as the existence of a common point between two intersecting curves (lines, circles, etc.). This problem, linked to continuity, arises as early as Proposition 1 of Book 1 of the Elements, where Euclid constructs with a compass an equilateral triangle on a given segment AB.

# Euler, “the master of us all”

Let’s skip two millennia to talk about some of the geniuses of the modern age. The Swiss mathematician and physicist Leonhard Euler (1707–1783) is perhaps the greatest: he wrote a considerable number of mathematical memoirs, gathered today in the seventy volumes of his complete works (http://eulerarchive.maa.org). He made considerable progress in analysis, and Pierre-Simon de Laplace advised: “Read Euler, read Euler, he is the master of us all.” Yet this prodigious mathematician was sometimes wrong, stating for example that there is never a closed knight’s path on a 3 × n rectangular chessboard, whereas one exists, for example, on the 3 × 10 board.

## The knight’s tour

Determining whether a chess knight can visit all the squares of an m × n board without passing through the same square twice and return to its starting point (a closed tour) may seem a minor problem. Yet it has been studied since the tenth century in India, and Leonhard Euler devoted a rather long article to it (“Solution of a curious question which does not appear subject to any analysis”, Mémoires de l’Académie Royale des Sciences et Belles Lettres, vol. 15, pp. 310–337, year 1759, Berlin 1766).

Among the simplest questions that arise is that of chessboards of size 3 × n. When n is odd, there is no solution because, more generally, there is no solution when both m and n are odd (can you see why?). On the other hand, for every even n from 10 onward there are solutions (see the figure above), contrary to what Euler asserted.
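To give an idea of how such questions can be settled by machine today, here is a minimal backtracking search for a closed knight’s tour, our own illustrative sketch rather than Euler’s or Bergholt’s method; the move ordering is the standard Warnsdorff heuristic:

```python
def closed_tour_exists(rows, cols):
    """Backtracking search for a closed knight's tour on a rows x cols board."""
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1),
             (1, -2), (2, -1), (-1, -2), (-2, -1)]
    total = rows * cols

    def neighbors(r, c):
        return [(r + dr, c + dc) for dr, dc in moves
                if 0 <= r + dr < rows and 0 <= c + dc < cols]

    visited = [[False] * cols for _ in range(rows)]
    start = (0, 0)  # a closed tour passes through every square, so any start works

    def search(r, c, count):
        if count == total:
            # the tour is closed if we can jump back to the starting square
            return start in neighbors(r, c)
        # Warnsdorff ordering: try squares with the fewest onward moves first
        nxt = [p for p in neighbors(r, c) if not visited[p[0]][p[1]]]
        nxt.sort(key=lambda p: sum(not visited[a][b] for a, b in neighbors(*p)))
        for nr, nc in nxt:
            visited[nr][nc] = True
            if search(nr, nc, count + 1):
                return True
            visited[nr][nc] = False
        return False

    visited[0][0] = True
    return search(0, 0, 1)
```

On 3 × n boards the knight graph is so sparse (each square has at most four moves) that the search exhausts quickly, confirming that 3 × 4 has no closed tour while 3 × 10 does.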

His mistake was not corrected until a century and a half later, in 1917, by Ernest Bergholt. For those interested in this type of problem, there is a very complete website to consult.

It is remarkable that the problem still gives rise to mathematical work, for example Alfred Brown’s master’s thesis, “Knight’s tour and Zeta function”, defended in 2017 at San José State University, in the United States.

The rigor of Euler’s reasoning is sometimes insufficient, and his 1749 proof of the “fundamental theorem of algebra” on the roots of a polynomial is today considered incomplete. The way he uses the notion of function, which he more or less assimilates to polynomials of finite or infinite degree, lacks rigor and even clarity; only in the 19th century was his precious work on these subjects taken up again and put perfectly in order.

The notion of function in Euler. More troubling than the knight question, where Euler made a one-off error, the way he handled the concept of function was the site of an ambiguity and even a contradiction: he wanted both to have a concept close to that of a polynomial and to state results valid for a large class of functions, without succeeding in defining them precisely. Or rather, he proposed a definition lacking in clarity: “A function is an analytical expression composed in some way of this variable quantity and of numbers or of constant quantities” (Introductio in analysin infinitorum, Marcum-Michaelem Bousquet & socios, 1748).

Today, a function is defined as a relation (a precise set-theoretic notion) which, to any point of a starting set, associates one point of the ending set (a), without any particular constraint. However, the theorems of analysis that Euler endeavored to establish only concern restricted classes of functions, such as those expandable in power series (b). This is an example of a seemingly serious lack of rigor which, however, did not prevent the progress of a mathematical field that was made logically perfect only later (see J. Dhombres, “Les présupposés d’Euler dans l’emploi de la méthode fonctionnelle”, Revue d’histoire des sciences, vol. 40 (2), pp. 179–202, 1987).
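To see the gap between the modern set-theoretic definition and Euler’s “analytical expression”, consider the standard Dirichlet example (our illustration, not from the article): a perfectly legitimate function today, yet one that corresponds to no single expression in Euler’s sense.

```latex
f(x) =
\begin{cases}
1 & \text{if } x \in \mathbb{Q},\\
0 & \text{if } x \notin \mathbb{Q}.
\end{cases}
```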

We will not dwell on the case of Augustin Cauchy and his famous statement wrongly asserting that a limit of continuous functions is also a continuous function (http://fredrickey.info/hm/CalcNotes/CauchyWrgPr.pdf), nor on Henri Poincaré and his flawed memoir, which earned him the prize in honor of King Oscar of Sweden (https://journals.openedition.org/lettre-cdf/1103) and which he had reprinted at his own expense after correcting it.
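Cauchy’s erroneous statement admits a classic counterexample, which can be sketched as follows (a standard textbook example, not taken from the article):

```latex
% Each f_n is continuous on [0,1], but the pointwise limit f is not:
f_n(x) = x^n, \qquad
f(x) = \lim_{n \to \infty} f_n(x) =
\begin{cases}
0 & \text{if } 0 \le x < 1,\\
1 & \text{if } x = 1.
\end{cases}
```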

David Hilbert, also considered one of the greatest mathematicians and logicians, left many small errors in his articles. When it was decided, in order to publish his complete works, to purge them of everything that needed repair, it took Olga Taussky-Todd three years to complete the work. Hilbert was also once seriously mistaken in proposing an erroneous proof of the “continuum hypothesis”, according to which there is no intermediate infinity between that of the integers and that of the real numbers.

# Write the proofs in the language of logic

The development of logic led at the beginning of the twentieth century to the idea that a correct mathematical proof can be checked mechanically, provided it is written in a language whose first sketches were proposed by Alfred Whitehead and Bertrand Russell in the three volumes of Principia Mathematica (1910–1913). By slightly completing the Principia method, whose basic syntax is not completely fixed, we arrive at the notion of “formal system”, today at the heart of mathematical logic. It is this notion that gives full meaning to Kurt Gödel’s famous incompleteness theorems, which assert that no formal system will ever be powerful enough for all of mathematics, and that it will therefore always be necessary to seek new axioms.
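To give an idea of what “mechanically checkable” means in practice, here is a minimal machine-verified proof in the style accepted by a modern proof assistant (Lean 4 syntax; a toy illustration, not an example from the article):

```lean
-- The kernel checks that every step reduces to rules it already trusts;
-- here the proof simply invokes the library lemma for commutativity.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```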

## The causes of errors

Today, the reasons for the presence of sometimes persistent errors in the works published by mathematicians are multiple and can be classified as follows.

1) Pre-publication control procedures are very imperfect. Some journals try to publish as much as possible and do not apply a serious filter before accepting articles. Even in the most rigorous journals, the expertise is entrusted to mathematicians who are not paid for this work and remain anonymous, which may not be the best way to engage these experts in in-depth verification work.

2) Certain fields are very specialized, and the published works are known only to very few researchers and are therefore checked very little, or not at all, after publication. An error present in an article can thus remain unnoticed for a long time.

3) Some areas give rise to extremely long and terribly difficult proofs, which require considerable checking work. This examination can only be done seriously with the help of computer proof assistants, which for some proofs is still difficult. Thus, it took several years to formalize and validate Thomas Hales’s proof of Kepler’s conjecture on the densest packing of spheres in space.

The absolute precision made possible by formal systems should have put an end to errors in mathematics. This was an illusion, because in practice it is very difficult to write these mechanically verifiable texts that provide absolute certainty. Today, computer programs called “proof assistants” have been designed to help develop such proofs. These systems do not find new proofs on their own, but help mathematicians write fully convincing ones.

## Proof assistants correct errors

Computer systems called “proof assistants” allow mathematicians to write formal, and therefore perfectly complete, proofs in which no error can remain. Has this software made it possible to correct errors that had gone unnoticed? In some cases, yes (see the case of Gödel’s ontological proof in the text of the article). Manuel Eberl is an expert who has been writing proofs for years using the Isabelle/HOL proof assistant. He testifies about his work:

“Normally it takes a very thorough understanding of the paper proof to formalize it, and you have to think about how to go about the formalization. […] If you formalize a particular proof like that of the prime number theorem […], you probably will not find that everything is wrong and everything collapses, but you will run into small problems.

I have found “errors” in many proofs, including ones published in textbooks or articles. Most of these errors are easy to correct, and most mathematicians would probably consider them harmless. Some are fixed within days; others actually require changing definitions, adding assumptions, or modifying the theorem statement. Most “errors” fall into the following categories:

- Surprisingly non-trivial arguments are declared easy and unimportant. […]
- Cases are omitted.
- Arithmetic errors are made, such as multiplying both sides of an inequality by a constant that is not verified to be positive.
- Unspecified assumptions are used surreptitiously.

I am aware that mathematicians often have an indulgent point of view. […] The types of errors that I mention are generally considered insignificant, and it is believed that someone would have corrected them even in the absence of a computer system, and therefore that ultimately the theorems and proofs in question are correct in substance, since the problems identified are minor.

However, I don’t agree with that. I want my proofs to be as rigorous as possible. I want to be sure that I haven’t missed any hypothesis.”

(For the full text of Manuel Eberl, see: https://mathoverflow.net/questions/291158/proof-assistants/312661)

Thanks to proof assistants, an important part of basic mathematics has been formalized, which confirms that we need have no worries about the central core of mathematics. For the most difficult theorems, whose proofs are very long or which use abstract and delicate concepts, we do not yet have formalized versions.

# Formally proven statements …

Among the statements whose proofs have been verified in this way, we have for example:

- There are exactly five Platonic solids (regular convex polyhedra).

- Impossibility of trisecting an angle with ruler and compass.
- Independence of the parallel postulate from the other axioms of basic plane geometry.
- Fundamental theorem of algebra on the roots of polynomials.
- Intermediate value theorem, concerning continuous functions from the set of real numbers to itself.
- Two-squares theorem: an integer is a sum of two squares if and only if each of its prime factors of the form 4k + 3 occurs with an even exponent.
- Fundamental theorem of analysis: the derivative with respect to x of the integral of a function f, taken between a constant a and x, is equal to f(x).
- Liouville’s theorem: the number 1/10^(1!) + 1/10^(2!) + 1/10^(3!) + … is transcendental.
- The numbers e and π are transcendental.
- Prime number theorem: the density of prime numbers around n is 1/log(n).
- Cantor’s diagonal argument: the real numbers cannot be put in bijection with the whole numbers.
- Gödel’s first incompleteness theorem.
- Four-color theorem: any geographic map drawn on a plane can be colored with four colors so that any two neighboring countries receive different colors.
- Kepler’s conjecture about the densest stacking of spheres in space.
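The two-squares theorem in the list above lends itself to a quick numerical check. The sketch below (our own illustration, not one of the formalized proofs) compares a brute-force search for a representation n = a² + b² with Fermat’s prime-factor criterion:

```python
from math import isqrt

def is_sum_of_two_squares(n):
    """Brute force: does n = a^2 + b^2 for some integers a, b >= 0?"""
    a = 0
    while a * a <= n:
        b = isqrt(n - a * a)
        if a * a + b * b == n:
            return True
        a += 1
    return False

def criterion(n):
    """Fermat's criterion: every prime factor p of the form 4k + 3
    must occur in n with an even exponent."""
    m, p = n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return False
        p += 1
    # whatever remains of m is prime (or 1)
    return not (m > 1 and m % 4 == 3)
```

Checking that the two predicates agree on every n up to a few thousand is, of course, no proof, which is precisely why the formalized version matters.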

# … and others not

The following results have not been checked by proof assistants, but for the first two, specialists agree that the informal proof available is satisfactory.

- Fermat’s last theorem, which states that the equation a^n + b^n = c^n has no solutions in positive integers when n > 2.
- The giant theorem on the classification of finite simple groups, whose current proof occupies tens of thousands of pages scattered across more than five hundred articles. An attempt is under way to write a shorter (unformalized) proof, expected to run to about five thousand pages.
- The claimed proof of the ABC conjecture (which relates the divisors of numbers satisfying a + b = c), which remains the subject of controversy among mathematicians: some claim to have a proof, others consider that it is not satisfactory.

Fields such as algebraic geometry present considerable difficulties, and it seemed necessary to several mathematicians, including Fields medalist Vladimir Voevodsky (1966–2017), that all the work in this field be formalized, to avoid the all-too-frequent false proofs that even the best specialists had difficulty eliminating.

# Application to the ontological proof of the existence of God

A rather strange and amusing recent case has highlighted the interest of proof assistants, which are now used to spot and correct errors even in controversial philosophical reasoning.

Even Gödel, considered the greatest logician ever to have lived, made mistakes: in 1970, he submitted to Alfred Tarski, for publication in the Proceedings of the National Academy of Sciences, a proof based on new axioms that the continuum hypothesis is false. The erroneous proof was never published.

But the most interesting case of a Gödel error concerns a philosophical-logical work on the question of whether pure logical reasoning is capable of proving the existence of God. Anselm of Canterbury in the 11th century and then Gottfried Leibniz in the 17th century had proposed such reasoning, using the ideas of perfection, necessity and existence combined as rigorously as possible. These are logical games which have the advantage of allowing analyses of concepts, especially when they are made precise by axiomatizing them completely, which is what Gödel attempted to do. Whether one arrives at the conclusion that God exists, or at the conclusion that the modes of reasoning used to prove it produce contradictions or unacceptable statements, either outcome is interesting on a logical level.

## Ontological proof

Saint Anselm of Canterbury (1033–1109), then Gottfried Leibniz (1646–1716), then the Austrian logician Kurt Gödel (1906–1978) each in turn proposed reasoning using only logical considerations to conclude that God exists.

Anselm’s ontological reasoning, in its crude version, is: “God has all the qualities, therefore the quality of existence, therefore God exists.” More precisely: “God, by definition, is that than which nothing greater can be conceived. God exists in human understanding. If God existed only in human understanding, we could conceive of something greater, namely a God who also exists in reality. Therefore, God exists.”

A more elaborate version was given by Leibniz, and it is this version that Gödel tried to improve further. These versions are more complex and use notions of modal logic (such as “it is necessary that”, “it is possible that”), logics of which formal versions were proposed and studied as early as 1918 by Clarence Lewis, then deepened throughout the twentieth century.

The validity of these arguments rests on individual acceptance of the starting axioms and definitions, which can be discussed endlessly. It also depends on the validity of the reasoning once the axioms and definitions have been accepted; this validity can be checked using the computer tools available today. However, even if one accepts the axioms individually and the reasoning is judged correct, it is still possible that the axioms taken together are unsatisfactory because they lead to obvious absurdities. This is what happened with the reasoning proposed by Gödel.
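For the curious, the axioms at stake, in Dana Scott’s rendering of Gödel’s notes, are usually summarized roughly as follows (a standard presentation in higher-order modal logic, where P(φ) reads “φ is a positive property”; this is our summary, not the article’s):

```latex
\begin{align*}
\text{A1:}\ & P(\lnot\varphi) \leftrightarrow \lnot P(\varphi)
  && \text{(exactly one of a property and its negation is positive)}\\
\text{A2:}\ & \bigl(P(\varphi) \wedge \Box\,\forall x\,(\varphi(x) \to \psi(x))\bigr) \to P(\psi)\\
\text{D1:}\ & G(x) \;\equiv\; \forall\varphi\,\bigl(P(\varphi) \to \varphi(x)\bigr)
  && \text{(``God-like'': has every positive property)}\\
\text{A3:}\ & P(G)\\
\text{A4:}\ & P(\varphi) \to \Box\,P(\varphi)\\
\text{A5:}\ & P(\mathit{NE})
  && \text{(necessary existence is a positive property)}\\
\text{Thm:}\ & \Box\,\exists x\; G(x)
\end{align*}
```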

In February 1970, believing he was dying, Gödel allowed the mathematician Dana Scott (b. 1932) to copy the latest version of his reasoning. A little later, Gödel told his friend the mathematician and economist Oskar Morgenstern (1902–1977) that he was satisfied with his proof, specifying that he did not want to publish it for fear that readers would deduce that he believed in God, when he was only engaged in a logical exploration.

Recently, Christoph Benzmüller, of the Free University of Berlin, and Bruno Woltzenlogel Paleo, of the Australian National University, showed, using proof assistants, that the axiom system used by Gödel was inconsistent (see their article cited in the bibliography).

The work of Christoph Benzmüller, David Fuenmayor and a few researchers around them has answered the questions one might ask. Using automated theorem provers and proof checkers, they established on the one hand that the system of axioms used by Gödel in his ontological proof was contradictory, which of course deprives his reasoning of any value. However, by studying certain variants proposed, for example, by Dana Scott or Melvin Fitting, they were able to validate these variants of the reasoning: variants which, this time, rest on systems that do not produce obvious absurdities and which reach, without any fault of reasoning, the conclusion that God exists, at least a god as envisaged by the adopted axioms. We therefore have systems of axioms, passed through the computer mill, which we are sure are not absurd, and which leave room for reasoning about the existence of God!

These examples of using formal logic and computer verification systems to advance questions of a philosophical nature should encourage philosophers to make their assumptions and the course of their reasoning as precise as possible. According to Christoph Benzmüller, this work opens up “new perspectives for a computer-assisted theoretical philosophy. Critical discussion of the underlying concepts, definitions and axioms remains the responsibility of humans, but the computer can help construct and verify that logical arguments are rigorously correct. In the event of a conflict, the computer can decide between the competing arguments and satisfy Leibniz’s recommendation Calculemus (‘Let us calculate!’).” The dream of a perfect language, the Characteristica Universalis, which Leibniz wanted to create, where truth and falsehood would be identifiable by simple calculation, is thus partly realized.

# No progress without risk …

It is because mathematics is advancing, and entering fields of far greater complexity than anything known in previous centuries, that it takes risks. The desire to go as far as possible creates a danger that must be accepted, on condition that everything possible is done to limit it.

On the border between the true and the false, the ground becomes uncertain; mathematics must therefore invent and perfect its control tools and, in certain fields, use them systematically. The errors that occur are not proof of a general and growing risk to which mathematics is subject, for the guarantees are reinforced in its ever-expanding central core; they are the manifestation of the vitality of a curious and eager science, which has always explored and cleared new lands without respite.