# Reply to comment

## This is not a carrot: Paraconsistent mathematics

Paraconsistent mathematics is a type of mathematics in which contradictions may be true.
In such a system it is perfectly possible for a statement *A* and its negation *not A* to both be true. How can this be, and be coherent? What does it all mean?
And why should we think mathematics might actually be paraconsistent? We'll look
at the last question first, starting with a quick trip into mathematical history.

### Hilbert's programme and Gödel's theorem

David Hilbert, 1862 - 1943.

In the early 20th century, the mathematician David Hilbert proposed a project called Hilbert's programme: to ground all of mathematics on the basis of a small, elegant collection
of self-evident truths, or *axioms*. Using the rules of logical inference, one should be able to prove all true statements in mathematics directly from these axioms. The resulting theory should be *sound* (only prove those statements that really are true), *consistent* (free from contradictions) and *complete* (it should be able to either prove or disprove any statement). One should also be able to
recognise that the axioms are sound by finitary means, that is, by a mind limited to finitely many inferences, such as a human mind.

However, Kurt Gödel famously
proved that this was impossible, at least in the sense that mathematicians of the time
had in mind. His first *incompleteness theorem*, loosely stated, says that:

*In any formal system that is free of contradictions and captures arithmetic, there are statements which cannot be proven true or false from within that system.*

As an example, consider a formal theory *T*, that is, a system of mathematics based on a collection of axioms. Now consider the following statement *G*:

*G*: *G* cannot be proved in the theory *T*.

If this statement is true, then there is at least one unprovable sentence in *T* (namely *G*),
making *T* incomplete. On the other hand, if the sentence *G* can be proved in *T*, we reach a
contradiction: *G* is provable but, by virtue of its content, also cannot be proven. There
is a dichotomy: we must choose between incompleteness and inconsistency. Gödel
showed that a sentence such as *G* can be created in any theory sophisticated enough
to perform arithmetic. Because of this, mathematics must be either incomplete or
inconsistent. (See the *Plus* article Gödel and the limits of logic for more on this.)

Classically-minded scholars accept that mathematics must be incomplete, rather than inconsistent. In line with common intuitions they find contradictions, and inconsistency, abhorrent. However, it is important to note that accepting a small selection of contradictions need not commit you to a system full to the brim with contradictions. We shall explore this idea further shortly. For now, let's turn to a couple of cases where a paraconsistent position can provide a more elegant solution than the classical position: the paradoxes of Russell and the liar.

### Russell's paradox

During his work attempting to establish the logical foundations of mathematics, Bertrand
Russell discovered a paradox, now eponymously known as *Russell's paradox*. It concerns mathematical *sets*, which are just collections of objects. A set can contain other sets as its members; consider, for example, the set made up of the set of all triangles and the set of all squares. A set can even be a member of itself. An example is the set *T* containing all things that are not triangles. Since *T* is not a triangle, it contains itself. Russell's paradox reads as follows:

*Let R (the Russell set) be the set of all sets that are not members of themselves. Is R a member of R?*

Bertrand Russell, 1872-1970.

To be a member of itself, R is required not to be a member of itself. Thus if R is in R, then R is not in R, and vice versa. It looks like a fairly serious problem. So-called naive set theory is not equipped to deal with such a paradox. Classical mathematics is forced to endorse a much more complicated version of set theory to avoid it. We will look at the classical response, and then a paraconsistent approach. But first, what is naive set theory?

It is founded on two principles:

- The *principle of abstraction*, which states (roughly) that given any property, there is a set collecting together all things which satisfy that property. For example, "being black" is a property, so there is a set consisting of all black things.
- The *principle of extensionality*, which states that two sets are the same if and only if their members are the same.
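The tension hiding in the principle of abstraction can be sketched computationally. The snippet below is an illustrative analogy only (not part of the formal theory): it models a set as a membership predicate, and applies abstraction to the property "is not a member of itself". Asked whether the resulting "set" contains itself, the computation can never settle on an answer.

```python
def russell(s):
    # Abstraction for the property "is not a member of itself":
    # s belongs to russell exactly when s does not belong to s.
    return not s(s)

# Asking "is the Russell set a member of itself?" flips each answer
# into its opposite forever, so Python's recursion limit trips.
try:
    russell(russell)
    answer = "settled"
except RecursionError:
    answer = "no consistent answer"

print(answer)  # "no consistent answer"
```

The endless flip-flop is the computational shadow of "R is in R if and only if R is not in R".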

These principles capture an intuitive understanding of what sets are and how they work.
However, to avoid contradictions and paradoxes, classical mathematicians regularly
adopt a more complicated stance, accepting a more complex version of set theory called *Zermelo-Fraenkel set theory* (ZF). It discards the principle of abstraction, and replaces
it with around eight more involved axioms. These postulated axioms change the way one is able to create a set.

### A hierarchy of sets

Given a set *X*, the *power set* P(*X*) is the set of all subsets of *X*.

For example, if *X* = {1, 2}, then

P(*X*) = {∅, {1}, {2}, {1, 2}},

where ∅ is the empty set.

We can build a hierarchy of sets from the empty set as follows:

V₀ = ∅, V₁ = P(V₀) = {∅}, V₂ = P(V₁) = {∅, {∅}}, V₃ = P(V₂) = {∅, {∅}, {{∅}}, {∅, {∅}}},

and so on.

The Von Neumann hierarchy is richer than this, incorporating notions of infinity, but the construction works in a similar spirit.
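The box's construction is easy to mimic in code. A minimal sketch (the function names are ours, not standard notation): each level of the hierarchy is the power set of the one before, so from level 1 onwards the number of sets doubles at each step.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of frozenset s, returned as a frozenset of frozensets."""
    items = list(s)
    subsets = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)
    )
    return frozenset(frozenset(c) for c in subsets)

# V0 is the empty set; each later level is the power set of the previous one.
V = [frozenset()]
for _ in range(3):
    V.append(power_set(V[-1]))

print([len(level) for level in V])  # [0, 1, 2, 4]
```

Only finitely many levels are built here, of course; the full Von Neumann universe continues through the infinite ordinals.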

In general, to create a set in ZF one uses pre-existing sets to make more (see the box on the right for an idea of how the process works). Certain
sets, such as the empty set, exist without needing to be constructed. The collection
of sets that can be formed by building in such a way is referred to as the *cumulative
hierarchy* or the *Von Neumann universe*.
The sets built in ZF are given a rank based on
how many times one has used the set building rules to create them. The empty set is
rank 0, those built from the empty set directly are rank 1, and so forth.

In ZF, Russell's set cannot exist, and thus Russell's paradox is avoided. Sets are built from the bottom up; you first need to have hold of a set before you can include it into another set. To create the Russell set, the Russell set is required, so building it using the axioms of ZF is impossible. Couching this in terms of the rank of the set, Russell's set would need to be of some rank *n*, but also *n*+1 (and *n*+2 and *n*+3 and so forth), because to be created it needs to be of a higher rank than itself. As this is not possible, the Russell set cannot be built.

ZF avoids Russell's paradox, but at a cost. Instead of a set theory based on two
simple premises, we are left with a much more complicated system. Complicated
does not imply incorrect; however, in this case it is difficult to motivate the array of
different axioms which are needed for ZF. One can accuse the axioms of being ad
hoc: adopted to avoid a particular problem rather than for a coherent, systematic reason.
Moreover, ZF is an unwieldy system. Using a similarly complicated system, Russell
and Whitehead took until page 379 of their *Principia Mathematica*, published in 1910,
to prove that 1+1=2.

Because of this, most mathematicians use something akin to naive set theory in their informal arguments, though they probably wouldn't admit it. There is a certain reliance on the idea that whatever their informal argument is, it is in principle reducible to something in a system such as ZF, and the details are omitted. This assumption may be problematic, especially where some very complicated results are supposedly proved. Classical mathematics does not appear to have the stable, workable, contradiction-free foundations that classical mathematicians hoped for.

### The liar paradox

While Russell's paradox is clearly directly applicable to mathematics, one can motivate paraconsistency in mathematics indirectly through paraconsistent logic. If logic is paraconsistent, then mathematics built on this logic will be paraconsistent. Let us take a brief breather from mathematics and look at natural language. For millennia, philosophers have contemplated the (in)famous liar paradox:

*This statement is false.*

Alfred Tarski, 1902-1983.

To be true, the statement has to be false, and vice versa. Many brilliant minds have
been afflicted with many agonising headaches over this problem, and there isn't a
single solution that is accepted by all. But perhaps the best-known solution (at
least, among philosophers) is *Tarski's hierarchy*, a consequence of *Tarski's undefinability
theorem*.

In a nutshell, Tarski's hierarchy assigns semantic concepts (such as truth and falsity) a level. To discuss whether a statement is true, one has to switch into a higher level of language. Instead of merely making a statement, one is making a statement about a statement. A language may only meaningfully talk about semantic concepts from a level lower than it. Thus a sentence such as the liar's sentence simply isn't meaningful. By talking about itself, the sentence attempts unsuccessfully to make a claim about the truth of a sentence of its own level.

The parallels between this solution to the liar paradox and the ZF solution to
Russell's paradox are clear. However, looking at this second case shows that paradox
or inconsistency is not merely a quirk of naive set theory, but a more widespread
phenomenon. It seems that to avoid inconsistency, classicists are forced to adopt
some arguably ad hoc rules not just about the nature of sets, but also about meaning. Besides, it intuitively *seems* that the liar sentence should be meaningful: it can be written down, it is grammatically correct, and the concepts within it are understood.

### Explosive logic

How does a paraconsistent perspective address these paradoxes? The paraconsistent response to the classical paradoxes and contradictions is to treat them as interesting facts to study, rather than problems to solve. This admittedly runs counter to certain intuitions on the subject, but from a paraconsistent perspective a localised contradiction, such as the truth and falsity of the liar sentence, does not necessarily lead to incoherence. How is this different from the classical view? For classicists, what is so bad about contradiction?

Every mathematical proof is, in some way, a deduction from a specified collection of
definitions and/or axioms, using assumed rules of inference to move from
one step to the next. In doing this, mathematics is employing some type of logic or
another. Classical mathematics uses classical logic, and classical logic is *explosive*.

Because of Russell's paradox this page is a carrot.

An explosive logic maintains that from a contradiction, you may conclude quite
literally anything and everything. The logical principle is *ex falso quodlibet*, or "from
a falsehood, conclude anything you like". If A and not-A are both true, then
Cleopatra is the current Secretary-General of the United Nations General Assembly,
and the page you are currently reading is, despite appearances, also a carrot.

So why is classical logic explosive? Because it accepts the argument form *reductio ad absurdum* (RAA), meaning reduction to the absurd. We
will see below that paraconsistent logicians can use a modified version of RAA, but
for now let's just consider the classical version. To use classical RAA, one first makes
an assumption. If further into the proof a contradiction arises, one is entitled to
conclude that the initial assumption is false. Essentially, the idea is that if assuming
something is true leads to an "absurd" state of affairs, a contradiction, then it was
incorrect to make that assumption.

This seems to work well enough in everyday situations. However, if contradictions can exist, say if Russell's set both is and is not a member of itself, then we can deduce anything. We merely have to assume its negation, and then prove ourselves "wrong". Thus contradiction trivialises any classical theory in which an inconsistency arises. Naive set theory, for example, is classically uninteresting, because it not only proves that 1+1=2, but also that 1+1=7. All because of Russell's paradox. So to the classical mathematician, finding a contradiction is not just unacceptable, it is utterly destructive. There is no classical distinction between inconsistency (the occurrence of a contradiction) and incoherence (a system which proves anything you like).
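To make the recipe concrete, here is the derivation spelt out for the carrot claim (the numbering is ours):

1. The Russell set is, and is not, a member of itself (a contradiction we already have).
2. Assume, for *reductio*, that this page is *not* a carrot.
3. Under that assumption a contradiction holds (step 1, which holds regardless of the assumption).
4. By classical RAA, reject the assumption: this page is a carrot.

Nothing about carrots was used anywhere; the same four steps "prove" any statement whatsoever.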

Paraconsistent logic does not endorse the principle of explosion, *ex contradictione
quodlibet*, nor anything which validates it (notice the subtly different wording, "contradictione"
in place of "falso"; this will become important later). The thought is
this: suppose I have a pretty good theory that makes sense of a lot of the things I
see around me, and suppose that somewhere in the theory a contradiction is hiding.
Paraconsistent logicians hold that this does not (necessarily) make the theory incoherent;
it just means one has to be very careful in the deductions one makes to avoid
falling from contradiction into incoherence. For the most part, it makes no difference
to us if the liar sentence really is both true and false, and the paraconsistent perspective
reflects that. By removing RAA (or altering it as we see below), and making a
few other tweaks to classical logic, we can create a logic and mathematical system
where contradictions are both possible and sensible.

### Classicists knew they were inconsistent

A donkey in your bedroom?

There are further motivations for paraconsistency beyond those mentioned above. One such motivation is historical: at various times mathematicians worked with theories that they knew at the time to be inconsistent, but were still able to draw meaningful and useful conclusions. Set theory is one such area. The early calculus, as proposed by Isaac Newton, was another; its original formulation required that a quantity be small but non-zero at one stage of a calculation, but then equal to zero at a later stage. Mathematicians nonetheless adopted these theories and worked with them, drawing useful and sensible conclusions despite the presence of contradictions.

Another motivation is the question of relevance of inference. That is, suppose I have proved that the Russell set is and is not a member of itself. Why should it follow from this that there is a donkey braying loudly in my bedroom? The question of relevance (just what has a donkey to do with set theory?) is one that has plagued classical logic for a long time, and is one that makes classical logic a hard pill to swallow for first-time students of logic, who are often told that "this is the way it is" in logic. Fortunately for those students, paraconsistency provides an alternative.

### Paraconsistent mathematics

Paraconsistent mathematics is mathematics where some contradictions are allowed. The term "paraconsistent" was coined to mean "beyond the consistent". The objects of study are essentially the same as classical mathematics, but the allowable universe of study is enlarged by allowing some inconsistent objects. One of the main projects of paraconsistent mathematics is to determine which objects are inconsistent, and which inconsistencies are allowed in a theory without falling into incoherence. It is a fairly recent development; the first person to suggest paraconsistency as a possible foundation for mathematics was the Brazilian mathematician Newton da Costa, in 1958. Since then various areas have been investigated through the paraconsistent lens.

An important first step towards developing paraconsistent mathematics is establishing
a tool kit of acceptable argument forms. One charge levelled against the
paraconsistent mathematician is the loss of the classical version of RAA:
proofs by contradiction, *reductio ad contradictione*, are no longer allowed, since
the conclusion could be a true contradiction, and the logic must allow for this case.
Similarly, *disjunctive syllogism* is lost. Disjunctive syllogism states that if I can prove
that A or B is true, and I can prove that A is false, then B must be true. However,
paraconsistently, if A and not-A is a true contradiction, then B cannot be validly deduced.
We do not receive any information about the truth of B from the fact A is not
true, because it might also be true, thus satisfying the disjunction.
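The failure of explosion and disjunctive syllogism can be checked mechanically in one concrete paraconsistent logic: Priest's three-valued "Logic of Paradox" (LP). The article doesn't commit to LP specifically, so take this as one illustrative sketch. LP adds a third value, *both*, counts it as acceptable ("designated") alongside *true*, and calls an argument valid when every valuation that designates all the premises also designates the conclusion.

```python
from itertools import product

# LP truth values, ordered F < B < T. Disjunction is max;
# negation swaps T and F and leaves B ("both true and false") fixed.
F, B, T = 0, 1, 2
DESIGNATED = {B, T}          # values that count as "true enough"

def neg(a):
    return 2 - a

def disj(a, b):
    return max(a, b)

def valid(premises, conclusion, n_atoms, values):
    """Brute-force validity check over all valuations drawn from `values`."""
    for v in product(values, repeat=n_atoms):
        if all(p(v) in DESIGNATED for p in premises):
            if conclusion(v) not in DESIGNATED:
                return False
    return True

# Explosion: from A and not-A, conclude an unrelated B.
explosion = ([lambda v: v[0], lambda v: neg(v[0])], lambda v: v[1], 2)
# Disjunctive syllogism: from (A or B) and not-A, conclude B.
dsyll = ([lambda v: disj(v[0], v[1]), lambda v: neg(v[0])], lambda v: v[1], 2)

for name, (prem, conc, n) in [("explosion", explosion), ("disj. syllogism", dsyll)]:
    classical = valid(prem, conc, n, (F, T))   # two values: classical logic
    para = valid(prem, conc, n, (F, B, T))     # three values: LP
    print(f"{name}: classical={classical}, LP={para}")
```

Swapping the value set between (F, T) and (F, B, T) is all it takes to move between the classical and paraconsistent verdicts: the culprit in both failures is the valuation where A is *both* and B is plain false.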

Paraconsistentists are able to salvage a form of RAA. The classical mathematician
does not distinguish between a contradiction and total absurdity; both are used to reject
assumptions. However, from the paraconsistent viewpoint, not all contradictions
are necessarily absurd. To someone with this view, classical RAA actually equates to
*reductio ad contradictione*. The paraconsistentist can use a form which allows them
to reject something which is genuinely, paraconsistently absurd. This take on RAA is
used to reject anything which leads to a trivial theory (a theory in which everything
is true). Likewise, while *ex contradictione quodlibet* (from a contradiction, anything
follows) is out, *ex absurdum quodlibet* is still valid.

The Penrose triangle.

Allowing inconsistencies without incoherence opens up many areas of mathematics
previously closed to mathematicians, as well as being a stepping stone to making
sense of some easily described but difficult to understand phenomena. One such area
is inconsistent geometry. M. C. Escher's famous drawings, for example, often contain
impossible shapes or inconsistent ideas. His famous *Waterfall* depicts a waterfall
whose base feeds its top. The *Penrose triangle* is another well-known example, the
sides of which appear simultaneously to be perpendicular to each other and to form
an equilateral triangle. The *Blivet* is another, appearing to be composed of two rectangular
box arms from one perspective, but three cylindrical arms from another. These pictures
are inconsistent, but at the same time coherent; certainly coherent enough to
be put down on paper. Paraconsistent mathematics may allow us to better understand
these entities.

Paraconsistency can also offer new insight into certain big-and-important mathematical topics, such as Gödel's incompleteness theorem. When Gödel tells us that mathematics must either be incomplete or inconsistent, paraconsistency makes the second option a genuine possibility. Classically, we assume the consistency of arithmetic and conclude that it must be incomplete. Under the paraconsistent viewpoint it is entirely possible to find an inconsistent, coherent and complete arithmetic. This could revive Hilbert's programme, the project of grounding mathematics in a finite set of axioms: if the requirement for consistency is lifted, it may be possible to find such a set.

The blivet, also known as the space fork.

Another famous problem that appears in a new light under paraconsistency is the *halting problem* in computer science. It is the problem of
finding an algorithm that will decide if any given algorithm working on any given
input will ever halt. It is an important concern when addressing whether an algorithm
will reach a solution to a problem in finite time, and is equivalent to many other decision problems
in the discipline. However, no consistent computer program can solve the
problem, as famously proved by Alan Turing (see What computers can't do for a sketch of the proof). Paraconsistency re-opens the door to
finding a solution.

Paraconsistency in mathematics: mathematics where contradictions may be true. Is it as outlandish as it sounds? Probably not. As we have seen, paraconsistent mathematics elegantly deals with paradoxes for which classical mathematicians have had to find ad-hoc, complicated solutions to block inconsistency. There are also many areas in which paraconsistent mathematics may provide meaningful insights into inconsistent structures, and it offers new perspectives on old problems such as Hilbert's programme and the halting problem. Paraconsistency in mathematics: an interesting and promising position worthy of further exploration.

### Suggested reading

- *Inconsistent Mathematics* by Chris Mortensen.
- *In Contradiction* by Graham Priest.
- The Stanford Encyclopedia article on paraconsistent logic.
- The Internet Encyclopedia of Philosophy article on inconsistent mathematics.

### About the author

Maarten McKubre-Jordens is a postdoctoral fellow at the University of Canterbury. As well as actually performing mathematics, he thinks about the foundation of mathematics in human reasoning. He brews his own beer, and loves to spend time with his family.

He wishes to thank his wife Alexandra, for her virtually limitless patience in making this article user-friendly.