Undecidability


While there are many branches of knowledge, each having its own problems and methods, `Undecidability' is a subject dealing with the very nature of problems themselves. Given any problem, does it have a solution? Is there any method to find the solution? These are the kinds of questions which this subject tries to address. In what follows, we shall see the answers to these questions. But first, we need precise definitions of what a problem, a solution and a method are, along with a few entertaining mathematical results.


How many numbers are there?

Infinitely many, of course. But which infinity?! We shall soon be able to conclude that there is more than one kind of infinity. The infinity corresponding to the number of real numbers is much, much greater than the infinity corresponding to the number of integers. We will show this result using a method called the diagonal slash, invented by Georg Cantor.

To show that two sets have the same number of elements, we need to show that for every element in one set, there is a unique element in the other set, and vice versa. That is, there exists a one-one correspondence between the elements of the two sets. If we can show that no such one-one correspondence exists, then the two sets have different numbers of elements. Technically, we say that the two sets have different cardinalities.
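
As a tiny illustration, consider the set of positive integers and the set of positive even numbers. Pairing n with 2n is a one-one correspondence, so the two sets have the same cardinality even though one is a proper subset of the other. The Python sketch below (with names chosen only for illustration) just spells this pairing out.

	# One-one correspondence between positive integers and positive even numbers:
	# n is paired with 2*n, and the even number m is paired back with m/2.
	def to_even(n):
	    return 2 * n

	def from_even(m):
	    return m // 2

	print([(n, to_even(n)) for n in range(1, 6)])
	# [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]
	print(all(from_even(to_even(n)) == n for n in range(1, 100)))   # True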

To see that the set of integers and the set of real numbers have different cardinalities, assume for the time being that they have the same cardinality. This means that for every real number there is a unique positive integer, and vice versa. Then we can speak of something like the 1st real number, the 2nd real number, and so on. Let us list all the real numbers between 0 and 1, as in the table that follows.

Positive Integers    Real Numbers
1                    0.[0]00100261.....
2                    0.0[1]2139764.....
3                    0.26[1]715801.....
4                    0.314[6]12973.....
.                    .
.                    .
.                    .

Now consider the digits on the diagonal, marked in brackets in the above table (the ith digit of the ith number). They correspond to a real number - 0.0116... Now for each digit here, let us increase it by 1. If the digit 9 is present, we just make it 0, and not 10. So we get a new number - 0.1227... Now this number cannot be the first number in the above list, because it differs from it in the 1st digit. This is because we increased that digit by 1 to obtain the new number. Similarly, the new number cannot be equal to the 2nd number in the list, because it differs from it in the 2nd digit. In this manner, the new number cannot correspond to any number in the list. Hence we have a real number which does not correspond to any integer. It follows that there is no one-one correspondence between the set of integers and the set of real numbers. There are more real numbers than there are integers.
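
To make the construction concrete, here is a rough Python sketch of the diagonal slash. The function listing(i) is only a stand-in for the supposed complete list of real numbers (its entries are the four from the table above); the point is how the new number is built by changing the ith digit of the ith number.

	# Cantor's diagonal construction, sketched on the first four listed numbers.
	def listing(i):
	    # stand-in for "the digits of the (i+1)th real number in the list"
	    samples = ['000100261', '012139764', '261715801', '314612973']
	    return samples[i]

	def changed_digit(i):
	    d = int(listing(i)[i])      # the ith diagonal digit
	    return (d + 1) % 10         # increase it by 1; 9 wraps around to 0

	new_number = '0.' + ''.join(str(changed_digit(i)) for i in range(4))
	print(new_number)               # 0.1227 - differs from every listed number
	                                # in at least one digit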

At first sight, the result that there are more real numbers in the range 0 to 1 than there are positive integers seems unbelievable. After all, if we remove the front decimal point of any real number, don't we get a unique integer? But what about the non-terminating decimals such as 0.333..., 0.444..., pi, etc.? If we remove the front decimal point, all of them map to infinity. Ah, got you there! Which infinity are you talking about now? Why can't we define different infinities for each of the fractions such as 0.333..., 0.444..., pi, etc.? This is not any more absurd than defining two different infinities - one for integers and one for real numbers. It is just that instead of two infinities, we now have an infinity of infinities.

While this viewpoint is perfectly valid and consistent in itself, the former viewpoint of having only two infinities is useful in many arguments. Even the many-infinities viewpoint agrees that there are two distinct infinities - the smallest of the infinities and the largest of them. The smallest of the infinities corresponds to the point at which we depart from the finite world. For instance, when we are talking about lengths of strings that can be written on a piece of paper, there is one point at which we depart from finite strings to infinity. After that, it does not make sense to talk of a string of length (0.333...)-infinity and one of length (0.444...)-infinity. From now on, we shall stick with the viewpoint of two infinities. The smaller corresponds to the number of integers (called countably infinite) and the larger corresponds to the number of real numbers (called uncountably infinite). In this sense, we will not have a (0.333...)-infinity, a (0.444...)-infinity and so on; all of them will be equal to the smaller infinity.

From the above argument we can conclude that the number of real numbers is indeed far greater than the number of integers - infinitely greater!


The Number of Strings

A string is a sequence of symbols put one after another. The symbols are taken from some finite alphabet. Intuitively, anything that we can write on a piece of paper is a string. This means that any mathematical proof is a string, any procedure that we can write on a piece of paper is a string, and so on. It is therefore of interest to know how many different strings are possible using a finite alphabet. We shall show that this number is countably infinite. For this to be true, there must be a one-one correspondence between positive integers and strings.

Let the number of symbols in the finite alphabet be n. The number of strings of length 1 is n. Let s1, s2, ..., sn be these strings. The number of strings of length 2 is n^2, since each of the two positions can hold any of the n symbols (symbols may repeat). Let these strings be s(n+1), s(n+2), ..., s(n+n^2). Continuing in this way, we can label every possible string as some si. Thus, given any positive integer i, we have a string si, and given any string, it corresponds to some i. Thus there is a one-one correspondence between strings and positive integers. It follows that the number of possible strings is countably infinite.

Given any finite alphabet, we can always order the strings of that alphabet in the manner described in the previous paragraph. Such an ordering is called a canonical ordering. Really, it is nothing more than our ordinary alphabetical order, except that shorter strings are listed before longer ones. It allows us to think in terms of the 1st string, the 2nd string, and so on.
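
Here is a small Python sketch of a canonical ordering over the alphabet {a, b, c} (any finite alphabet works the same way): strings are generated by length, and strings of the same length are generated in alphabetical order, so every string gets a definite position in the list.

	from itertools import count, islice, product

	def canonical_order(alphabet=('a', 'b', 'c')):
	    for length in count(1):                        # first by length...
	        for chars in product(alphabet, repeat=length):
	            yield ''.join(chars)                   # ...then alphabetically

	# the 1st string, the 2nd string, and so on:
	print(list(islice(canonical_order(), 12)))
	# ['a', 'b', 'c', 'aa', 'ab', 'ac', 'ba', 'bb', 'bc', 'ca', 'cb', 'cc']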

From the above discussion, we can conclude that even mathematical proofs can be enumerated like strings. So we can talk of the 1st proof, the 2nd proof and so on, and be sure that we have not left out any proof in between. Similar statements apply to anything that we can write on a piece of paper.


Turing Machines

How does one solve a problem of any kind? Basically, you are given the problem on a piece of paper, using symbols of some language. You carry out some computation. This involves elementary operations such as writing some symbols (intermediate results) on the paper and remembering how much of the computation is done.

Having observed that any procedure can be accomplished using such simple operations, mathematicians came up with a general way of representing any method whatsoever. Any procedure can be represented using a turing machine. A turing machine is a mathematical object, in the sense that it can never get damaged. (oh!) If a problem is solvable using elementary operations such as the above, then we can design a turing machine to solve it. Depending on the problem that we wish to solve, the details of the turing machine will vary.

A turing machine consists of a long tape on which we can put symbols. This corresponds to our piece of paper, except that the tape is allowed to be as long as you want. The tape has cells on it and each cell can have at most one symbol. It could also be a blank. The number of possible symbols, that is the alphabet, is finite.

The turing machine has a tape head, by which it can read the cells of the tape and write onto them. It may also erase a symbol. The tape head can read or write to only one cell at a time. At any point of time, the turing machine reads from the current cell.

Besides the current cell, the turing machine also needs to keep track of what is known as the state of the turing machine. At any point during operation, the turing machine is in one of a finite number of pre-defined states. The turing machine starts in an initial state. Since the number of possible states is finite, the turing machine may come back to a state after leaving it. The state of the turing machine could be used to encode what symbols it has recently seen and also how much of the computation has been done.

Initially the input problem is specified on the tape. The turing machine then operates on a step by step basis. Each step of the turing machine is as follows - depending on its current state and the symbol read from the current cell, the turing machine writes a symbol onto the current cell (possibly the blank, which erases what was there), moves the tape head one cell to the left or right, and changes to a new state. If no move is defined for the current combination of state and symbol, the turing machine halts.

All these operations of the turing machine correspond to the kind of tasks we do while solving any problem on a piece of paper.

In summary, a turing machine is completely specified by the following -

  1. A finite set of states, one of which is the initial state. Some states are designated as final states. The use of this will become clear when we look at other interpretations of turing machines.
  2. A finite set of allowable tape symbols, one of which is the blank. Some tape symbols are designated as input symbols. These are symbols with which the initial problem will be represented and the final output will be given. Tape symbols which are not input symbols could be used to store intermediate results.
  3. A rule to determine, depending on the state of the turing machine and what it has just read from the current cell - the symbol to write onto the current cell, the direction (left or right) in which to move the tape head, and the next state to change to.
  4. The position where the leftmost symbol of the input will be present on the tape.
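
As a rough illustration, the specification above can be collected into a single Python data structure; the field names below are our own, chosen only for readability, and the example machine at the end is a trivial one that replaces a's by b's while moving right.

	from dataclasses import dataclass

	@dataclass
	class TuringMachine:
	    states: set            # finite set of states
	    initial_state: str     # one of the states
	    final_states: set      # the designated final states
	    tape_symbols: set      # finite set of tape symbols, including the blank
	    blank: str
	    input_symbols: set     # subset of tape_symbols used for input and output
	    rules: dict            # (state, symbol read) -> (symbol to write, move, next state)
	    input_position: int = 0   # cell holding the leftmost symbol of the input

	tm = TuringMachine(
	    states={'scan'}, initial_state='scan', final_states={'scan'},
	    tape_symbols={'a', 'b', ' '}, blank=' ', input_symbols={'a', 'b'},
	    rules={('scan', 'a'): ('b', +1, 'scan')},
	)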

One thing to notice is that every turing machine can be specified using a finite number of symbols, chosen from some finite alphabet. That is, it can be written down on a piece of paper. So the number of possible turing machines is at most countably infinite. While this may appear obvious, it is central to the argument which shows that there are problems for which there is no method to determine their solution.


Interpretations of a Turing Machine

Till now we have been looking at turing machines as representing methods to solve problems; that is, as problem solvers. However, there are other ways of looking at turing machines.

Turing machines as Computer Programs: Any computer program can be written using the following -

  1. `if' statements: To verify if some value is in some range.
  2. `goto' statements: To transfer control from one part of the program to another.
  3. input/output statements.

Turing machines can achieve the same effect because of the following observations -

At each step, the turing machine reads the tape and depending on what is read, it may change state etc. Depending on the state, the turing machine may perform different operations. This can be used to achieve the effect of an `if' statement.

	if the value on the tape is in some range
		change to state s1
	else
		change to state s2

	if the current state is s1
		perform these operations
	else
		perform these other operations

The effect of a `goto' statement can similarly be achieved by making the turing machine change to some state s. We can then make it perform a different sequence of operations while in that state.

Input/output statements, of course, correspond to reading from and writing onto the tape of the turing machine.

Thus the working of any computer program can be simulated by a turing machine.
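
The correspondence can be sketched in Python: a program written with `if' and `goto' statements becomes a table of states, and running the program is just repeated dispatch on the current state - exactly what a turing machine does at every step. The particular states and the test below are made up purely for illustration.

	def run(x):
	    state = 'start'
	    while state != 'done':
	        if state == 'start':                   # the `if' statement:
	            state = 's1' if x > 10 else 's2'   # branch by changing state
	        elif state == 's1':
	            print('value is large')            # operations of one branch
	            state = 'done'                     # the `goto'
	        elif state == 's2':
	            print('value is small')            # operations of the other branch
	            state = 'done'                     # the `goto'

	run(3)    # prints: value is small
	run(42)   # prints: value is large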

On the other hand, it is not difficult to write a computer program to simulate any turing machine, since the working of a turing machine consists of only a few simple operations. Computers do have limited memory, while turing machines have unbounded tapes; but if we are only interested in what a program is capable of doing, and not in the particular computer and its memory, then there is no inherent difference between turing machines and computer programs. From now on, you can think of turing machines as simplified abstractions of computer programs.

Turing machines as Language Recognizers: Formally, a language is a set of strings, where each string is a sequence of symbols taken from some finite alphabet. Now, we know that in a turing machine, the input problem is initially specified on the tape by a sequence of symbols, and the turing machine may eventually halt in some state. We can therefore think of the turing machine as follows - initially, there is an input string on the tape; if the turing machine eventually halts in a `final state', then it accepts the input string. The set of all strings which the turing machine accepts constitutes a language. The turing machine is said to recognize that language.
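
As a small worked example (the language here is our own choice, made only for illustration), the machine below recognizes the language of strings over {a} containing an even number of a's: started in state `even', it halts in the final state `even' exactly for such strings.

	# The transition table maps (state, symbol read) to
	# (symbol to write, head move, next state); when no entry applies the
	# machine halts, and the input is accepted if it halted in a final state.
	BLANK = ' '
	delta = {
	    ('even', 'a'): ('a', +1, 'odd'),
	    ('odd',  'a'): ('a', +1, 'even'),
	}

	def accepts(input_string, start='even', finals=('even',)):
	    tape = dict(enumerate(input_string))        # cell index -> symbol
	    state, head = start, 0
	    while (state, tape.get(head, BLANK)) in delta:
	        write, move, state = delta[(state, tape.get(head, BLANK))]
	        tape[head] = write
	        head += move
	    return state in finals                      # halted in a final state?

	print(accepts('aaaa'))    # True:  four a's, an even number
	print(accepts('aaa'))     # False: three a's, an odd number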

Turing machines as Language Enumerators: Imagine that there is an output portion of the tape, where, if a symbol is written once, it is never erased or changed. You can imagine the output portion of the tape to be a separate output tape if you want. Now suppose that our turing machine writes sequences of symbols on the output tape, such that each string is enclosed in a pair of marker symbols #string#. The set of all such strings constitute a language and our turing machine has enumerated all the strings of that language.
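
Here is a rough Python sketch of an enumerator. The language chosen - strings over {a, b} of even length - is only an example of ours; the point is the write-once stream of #-marked strings, produced in canonical order so that every string of the language eventually appears.

	from itertools import count, product

	def enumerate_language(alphabet=('a', 'b')):
	    for length in count(0):                       # canonical order: by length
	        for chars in product(alphabet, repeat=length):
	            s = ''.join(chars)
	            if len(s) % 2 == 0:                   # membership test for our toy language
	                yield '#' + s + '#'               # write-once, marked output

	gen = enumerate_language()
	print([next(gen) for _ in range(5)])
	# ['##', '#aa#', '#ab#', '#ba#', '#bb#']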


Church Turing Thesis

The class of problems that can be solved using turing machines is very large. Any problem that can be solved by writing a computer program can be solved by a turing machine. A natural question to ask is whether there are any formalisms which are more powerful than turing machines - that is, formalisms which can accept a larger class of languages or compute a larger class of functions. None has been found so far, nor is there any hope of finding one.

Our discussion of turing machines started with the notion of solving problems on a piece of paper, using certain elementary operations. This is probably the most general way of specifying methods of solving problems. So the concept of a turing machine corresponds to what we intuitively think of as a procedure. If there is a method to do anything, then it can be done by some turing machine. This assumption is called the Church Turing thesis, or Church's hypothesis. We call it a hypothesis, not because some day it may be proved or disproved, but because we are talking about our `intuitive notion'. How do we know if everyone has the same intuition? But the arguments in favour of the hypothesis are so strong that we cannot doubt its validity.


What is a Problem?

Having understood what a method is, and how to represent it, let us now move on to understanding what `problems' are. All of us know intuitively what a problem is. But when we want to make statements that apply to all problems, or to large classes of problems, we need to be very precise.

A problem is a question which asks for a yes-no answer about an entity. The entity comes from a domain of possible entities, usually infinite in number. The problem has an answer for any given entity in the domain. When we reduce the problem to ask the question about a particular entity, what results is called an instance of the problem. If the domain of entities is infinite, then we have infinitely many instances of the problem. We restrict ourselves to problems which have a countable number of instances. (Countable means finite or at most countably infinite.) So each instance can be represented by a string over a finite alphabet.

A problem can then be defined as the set of all its instances and their (yes-no) answers.

Many general problems can be cast into the form above. Are there any problems which cannot? Surely, any problem which asks questions about arbitrary real numbers cannot be cast into this form, because a given real number may have no finite representation. The number of problem instances is then uncountable. Thus the above definition of a problem does not encompass all problems that we can think of. However, we can show that even in this restricted class of problems, there are some which no method can solve. This result therefore applies even to the more general intuitive notion of a problem that we have.

For the kind of problems we are interested in, each instance can be represented by a string over a finite alphabet. Some of these strings (instances) have yes-answers. Some have no-answers. We define a language for any given problem as the set of strings (instances) which have yes-answers.

If we have a method (a turing machine) which, given any instance (string), can decide whether that instance has a yes-answer or a no-answer, then the problem is said to be decidable. Otherwise the problem is undecidable. What do we mean by saying that the turing machine `decides' whether the answer to the instance is yes or no? Given an instance as input, if the turing machine halts in a final state, then it answers that the instance has a yes-answer. If it halts in a non-final state, then it answers that the instance has a no-answer. Note that, in general, a turing machine may not halt at all on some instances; for the problem to be decidable, we need a turing machine that halts on every instance.

We will soon see that there are problems which are undecidable. In fact, there are far more such problems than there are decidable ones.


The Number of Problems

Recall that a problem is nothing more than the set of all its instances and their (yes-no) answers. The number of instances is at most countable, i.e. it corresponds to the number of integers. The number of problems is then the number of ways of assigning yes or no to each integer. Following Cantor's diagonal slash method, it can be shown that this number is uncountable.

Assume that the problems can be put in one-one correspondence with the positive integers. Let us try to list all possible problems -

Positive Integers (Problems)    Instance 1    Instance 2    Instance 3    ...
Problem 1                       [yes]         no            no            ...
Problem 2                       no            [no]          yes           ...
Problem 3                       no            yes           [yes]         ...
.                               .             .             .             ...
.                               .             .             .             ...
.                               .             .             .             ...

Consider the entries on the diagonal, marked in brackets in the above table. They correspond to a problem - (yes, no, yes, ...). Now invert this problem, i.e. turn all the yes's into no's and the no's into yes's. We get a new problem - (no, yes, no, ...). Clearly, this problem cannot be the first in the list, since it differs from it in the first entry. It cannot be the second, as it differs in the second entry, and so on. Thus this problem is not in the list. Hence there are more problems than there are positive integers. This is exactly the same argument as the one we used for real numbers. Thus the number of problems is uncountable.
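
The inversion of the diagonal can also be sketched in Python. The function answer(p, i) is a stand-in for the supposed complete table (its first three rows are the ones above); the new problem disagrees with problem i on instance i, so it appears nowhere in the table.

	def answer(p, i):
	    # stand-in for the table: answer of problem p on instance i
	    table = [[True, False, False],
	             [False, False, True],
	             [False, True, True]]
	    return table[p][i]

	def new_problem(i):
	    return not answer(i, i)         # invert the ith diagonal entry

	print([new_problem(i) for i in range(3)])
	# [False, True, False], i.e. (no, yes, no, ...)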

Recall that the number of turing machines is countable, since each turing machine can be represented finitely. Also, each turing machine recognizes only one language, which is the set of strings (instances) for which it halts in a final state (has a yes-answer). Thus each turing machine can solve at most one problem. This leads us to the conclusion that there must be some problems which no turing machine can solve. Since the number of problems corresponds to the number of real numbers, which is much, much greater than the number of integers, it follows that there are far more undecidable problems than decidable ones.

The fact that the number of problems is uncountable means that there can be no way to represent all problems finitely (on a piece of paper). But perhaps those problems that can be represented finitely are all decidable. Unfortunately this is not true, as can be seen in the next section. Some problems that can be stated in a very simple way are undecidable. But maybe, for those problems that can be represented finitely, we can at least find out whether they are decidable or not. To continue this discussion, we need a way of representing problems.

Since a problem is the set of its instances, it can be represented by enumerating the solutions to all its instances. In this case, the problem is, of course, decidable. Another way of representing a problem is to give a rule to compute its solution, when given an instance of it. The most general way of stating a rule (by the Church Turing thesis) is to provide a turing machine that encodes the rule. Further, every turing machine represents some rule, and hence some problem. Thus, for every finitely representable problem there is a turing machine, and every turing machine represents some problem.

Then the task of determining whether a problem is decidable or not reduces to determining whether the corresponding turing machine will halt for all instances of the problem or not. The next section shows that this cannot be done in general. Hence there can be no method which can determine, for every problem, whether it is decidable or not.


The Halting Problem

An example of an undecidable problem is the famous halting problem - Given a turing machine with an initial input string, will the turing machine halt or not? Thus an instance of this problem consists of a given turing machine and an input string. The answer to the problem is "yes" if the turing machine will halt on that input. Otherwise it is "no".

The diagonal slash argument can be applied here to show that the halting problem is undecidable. Assume that the problem is decidable. Then there exists a turing machine which answers yes or no for each instance of the problem. Let its answers correspond to those in the table below -

Positive Integers (Turing Machines)    Input 1    Input 2    Input 3    ...
Turing Machine 1                       [yes]      no         no         ...
Turing Machine 2                       no         [no]       yes        ...
Turing Machine 3                       no         yes        [yes]      ...
.                                      .          .          .          ...
.                                      .          .          .          ...
.                                      .          .          .          ...

Note that we cannot apply the diagonal slash directly, as in the previous cases, to conclude that the rows cannot be enumerated: here both the rows and the columns are countable, and that is perfectly consistent, because the table does not list all possible sequences of yes's and no's - only those produced by turing machines.

However, consider the diagonal sequence - (yes, no, yes, ...). If the halting problem is decidable, then we can construct a turing machine which, when given the ith input string, determines the ith diagonal entry - that is, whether the ith turing machine halts on the ith input. If the entry is "yes", then our turing machine is constructed so that it purposely goes into an infinite loop. If it is "no", then our turing machine halts.

Now this constructed turing machine must be somewhere in the table, since we have enumerated all turing machines. Its entries will look like (no, yes, no, ...) - exactly the reverse of the diagonal sequence. But such a turing machine cannot be the first in the list, as it differs from it in the first position. It cannot be the second in the list, as it differs in the second position, and so on. This is a contradiction. Hence our assumption that the halting problem is decidable must be wrong. Hence proved.

A more intuitive proof can be given as follows -

The halting problem requires us to construct a turing machine which accepts any other turing machine (together with an input) and answers whether the latter will halt on that input. This is like a turing machine which has the power to say - "Whoever you are, I can tell whether you will halt or not."

The behaviour of any turing machine can be determined by actually running it. Also, a turing machine can be a sub-routine within a larger turing machine. So there can always be some clever turing machine which can determine the output of our powerful turing machine and say - "If you tell me that I will halt, I will not; and if you tell me that I will not, then I will." Thus our idealized powerful turing machine that solves the halting problem cannot exist.
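
The same argument can be sketched in Python. Suppose, for the sake of contradiction, that someone hands us a function halts(program, input) that always answers correctly; both that function and the program below are hypothetical, which is precisely the point.

	# `halts' is the assumed decider for the halting problem. It is deliberately
	# left undefined here, because the argument shows it cannot exist.
	def contrary(program_source):
	    if halts(program_source, program_source):   # "you say I will halt..."
	        while True:                             # "...then I will not"
	            pass
	    else:
	        return                                  # "you say I will not halt - then I will"

	# Feeding `contrary' its own source code is contradictory either way:
	# if halts() answers yes, contrary loops forever; if it answers no, it halts.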

This argument can be generalized to show that any problem which asks about the language accepted by an arbitrary turing machine is undecidable. That is, any problem which asks whether a turing machine will accept some given language is undecidable, except for trivial cases. The trivial cases are those questions whose answer is the same for every turing machine.


Implications to Proof Theory

A formal system is a system that allows us to prove things. It consists of certain statements that are believed to be true. These statements are called axioms. A formal system has a finite number of axioms, or at least a mechanical way of recognizing whether a given statement is an axiom. A formal system also consists of a finite set of rules to construct more complicated statements from the axioms. Any statement that can be constructed from these rules and axioms is understood to be true. The (finite) sequence of rules and axioms used to derive a complicated statement is called a proof of that statement.

Since any proof in a given formal system can be represented finitely, there can be at most a countable number of proofs in that formal system. A formal system must provide a method for determining whether a given string is a valid proof or not, and that method must halt for every string. Therefore all proofs of the formal system can be enumerated by simply enumerating all possible strings in canonical order and checking which of them are valid proofs. Hence the existence of a proof implies the existence of an algorithm that can find it.
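
A rough Python sketch of this enumeration: generate every string in canonical order and keep those that the proof checker accepts. The checker is_valid_proof is a stand-in for whatever the given formal system provides; by assumption it halts on every string.

	from itertools import count, product

	def all_strings(alphabet):
	    for length in count(1):                     # canonical order
	        for chars in product(alphabet, repeat=length):
	            yield ''.join(chars)

	def all_proofs(alphabet, is_valid_proof):
	    for s in all_strings(alphabet):
	        if is_valid_proof(s):                   # assumed to halt on every string
	            yield s

	# If a statement has a proof at all, searching this stream will find it -
	# which is why the existence of a proof gives an algorithm that finds it.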

We have shown in previous sections that there are many problems for which no algorithm exists to solve them. For each instance of any such problem, the answer is either yes or no: there is one statement which says the answer is yes, and another which says the answer is no. But no algorithm exists that can determine the right answer for all instances. For such problems, there can be no proof of the answer to some of their instances. For if every instance had a proof of its correct answer, then an algorithm could, given an instance, enumerate proofs until it found one settling that instance, and thereby decide yes or no. This we know cannot be.

This means that for any formal system which is powerful enough to represent turing machines, there must be many statements which can be expressed in that formal system and which are true, but which have no proof within that formal system. Each such statement is either true or false, but no proof exists in that formal system to settle which.

Intuitively, if our thought processes follow some formal system, then there are many things which are true, which we can imagine, and which we will never be able to prove to be true. And the number of such things is far greater than the number of things which we can prove to be true. What a fate!! Note that this is not due to our limitations. It is due to the very nature of problems, algorithms, statements and proofs.


Something Wrong with Mathematics?!

We now know that we can write a procedure to enumerate all the proofs of any given formal system, by listing strings in canonical order and keeping the valid proofs. Now consider the following procedure -

	A( )
	begin
	    i = 1
	    while ( TRUE )
	    begin
		if the ith valid proof shows that the procedure A( )
		    does not terminate
		then terminate
		i = i + 1
	    end
	end

Surely this procedure does not terminate: if it did, then there would be a valid formal proof showing that it does not terminate, and a correct proof cannot prove something false. However, there is nothing stopping us from concluding, as we just did, that the procedure does not terminate.

This means that there is no proof in the given formal system to show that A( ) will not terminate - for if there were such a proof, A( ) would eventually find it and terminate, contradicting the proof. But we know that A( ) will not terminate.

Also, we cannot escape by claiming that the procedure A( ) cannot exist - it is there, right in front of our eyes.

If our argument can be converted into a formal proof that A( ) will not terminate, then we have a contradiction - ``there is a formal proof to show that A( ) will not terminate'' versus ``there is no such formal proof''. This means that formal mathematics is inconsistent, which can't be.

If our argument cannot be converted into a formal proof, then it means that we can know something which formal mathematics cannot tell us. This means that formal mathematics is insufficient for reasoning.


References

  1. Introduction to Automata Theory, Languages, and Computation
    By John E. Hopcroft and Jeffrey D. Ullman
  2. Computation: Finite and Infinite Machines
    By Marvin Minsky


Contacting the Author

Comments/suggestions? Contact: vikram AT iiit.ac.in


Vikram Pudi