In Chapter 7, efficient algorithms were introduced that are important for cryptographic protocols. Designing efficient algorithms is, of course, a central task in all areas of computer science. Unfortunately, many important problems have so far resisted all attempts to devise efficient algorithms solving them. Well-known examples of such problems are the satisfiability problem for boolean formulas and the graph isomorphism problem.

One of the most important tasks in complexity theory is to classify such problems according to their computational complexity. Complexity theory and algorithmics are two sides of the same coin; they complement each other in the following sense. While in algorithmics one seeks to find the best *upper bound* for some problem, an important goal of complexity theory is to obtain the best possible *lower bound* for the same problem. If the upper and the lower bound coincide, the problem has been classified.

The proof that some problem cannot be solved efficiently often appears to be “negative” and not desirable. After all, we wish to solve our problems, and we wish to solve them fast. However, there is also a “positive” aspect of proving lower bounds that, in particular, is relevant in cryptography (see Chapter 7). Here, we are interested in the applications of inefficiency: A proof that certain problems (such as the factoring problem or the discrete logarithm problem) cannot be solved efficiently can support the security of some cryptosystems that are important in practical applications.

In Section 8.1, we provide the foundations of complexity theory. In particular, the central complexity classes P and NP are defined. The question of whether or not these two classes are equal is still open after more than three decades of intense research. Up to now, neither the inequality of P and NP (which is widely believed to hold) nor their equality has been proven. This question led to the development of the beautiful and useful theory of NP-completeness.

One of the best understood NP-complete problems is SAT, the satisfiability problem of propositional logic: Given a boolean formula $\varphi$, does there exist a satisfying assignment for $\varphi$, i.e., does there exist an assignment of truth values to $\varphi$'s variables that makes $\varphi$ true? Due to its NP-completeness, it is very unlikely that there exist efficient deterministic algorithms for SAT. In Section 8.3, we present a deterministic and a randomised algorithm for SAT that both run in exponential time. Even though these algorithms are *asymptotically inefficient* (which is to say that they are useless in practice for *large* inputs), they are useful for sufficiently small inputs of sizes still relevant in practice. That is, they outperform the naive deterministic exponential-time algorithm for SAT in that they considerably increase the input size for which the algorithm's running time still is tolerable.
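To make the naive baseline concrete, here is a minimal sketch of exhaustive search for SAT (our own illustration, not one of the improved algorithms of Section 8.3): it tries all $2^n$ truth assignments, which is exactly the exponential behaviour discussed above. The DIMACS-style clause encoding is an assumption made for this example.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Naive deterministic SAT search: try all 2^n assignments.

    A clause is a list of non-zero integers: literal i means variable
    i is true, -i means variable i is false (DIMACS-style convention).
    Returns a satisfying assignment as a dict, or None if none exists.
    """
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        # The formula is in conjunctive normal form: every clause must
        # contain at least one literal that evaluates to true.
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or not x2) and (not x1 or x2) and (x1 or x2)
formula = [[1, -2], [-1, 2], [1, 2]]
print(brute_force_sat(2, formula))  # {1: True, 2: True}
```

The search is hopeless for large $n$, but for small formulas it terminates quickly, which is the point made above about asymptotically inefficient algorithms still being useful on small inputs.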

In Section 8.4, we come back to the graph isomorphism problem (GI), which was introduced in Section 7.1.3 (see Definition 7.8) and which was useful in Section 7.5.2 with regard to the zero-knowledge protocols. This problem is one of the few natural problems in NP that (under the plausible assumption that $\mathrm{P} \neq \mathrm{NP}$) may be neither efficiently solvable nor NP-complete. In this regard, this problem is special among the problems in NP. Evidence for the hypothesis that the graph isomorphism problem may be neither in P nor NP-complete comes from the theory of *lowness*, which is introduced in Section 8.4. In particular, we present Schöning's result that GI is contained in the low hierarchy within NP. This result provides strong evidence that GI is not NP-complete. We also show that GI is contained in the complexity class SPP and thus is low for certain probabilistic complexity classes. Informally put, a set is *low* for a complexity class if it does not provide any useful information when used as an “oracle” in computations. For proving the lowness of GI, certain group-theoretic algorithms are useful.

As mentioned above, complexity theory is concerned with proving lower bounds. The difficulty in such proofs is that it is not enough to analyse the runtime of just *one* concrete algorithm for the problem considered. Rather, one needs to show that *every* algorithm solving the problem has a runtime no better than the lower bound to be proven. This also includes algorithms that have not yet been discovered. Hence, it is necessary to give a formal and mathematically precise definition of the notion of algorithm.

Since the 1930s, a large variety of formal algorithm models has been proposed. All these models are equivalent in the sense that each such model can be transformed (via an algorithmic procedure) into any other such model. Loosely speaking, one might consider this transformation as some sort of compilation between distinct programming languages. The equivalence of all these algorithm models justifies Church's thesis, which says that each such model captures precisely the somewhat vague notion of “intuitive computability”. The algorithm model that is most common in complexity theory is the Turing machine, which was introduced in 1936 by Alan Turing (1912–1954) in his pathbreaking work [259]. The Turing machine is a very simple abstract model of a computer. In what follows, we describe this model by defining its syntax and its semantics, and introduce at the same time two basic computation paradigms: determinism and nondeterminism. It makes sense to first define the more general model of nondeterministic Turing machines. Deterministic Turing machines are then easily seen to be a special case.

First, we give some technical details and describe how Turing machines work. A Turing machine has a finite number of infinite tapes that are subdivided into cells. Every cell may contain one letter of the work alphabet. If a cell does not contain a letter, we indicate this by a special blank symbol, denoted by $\Box$. The computation is done on the work tapes. Initially, a designated input tape contains the input string, and all other cells contain the blank. If a computation halts, its result is the string contained on the designated output tape.

*Footnote.* One can require, for example, that the input tape is read-only and the output tape is write-only. Similarly, one can specify a variety of further variations of the technical details, but we do not pursue this any further here.

To each tape belongs a head that accesses exactly one cell of this tape. In each step of the computation, the head can change the symbol currently read and then moves to the left or to the right or does not move at all. At the same time, the current state of the machine, which is stored in its “finite control”, can change. Figure 8.1 displays a Turing machine with one input and two work tapes.

**Definition 8.1 **
**(Syntax of Turing machines)**
*A nondeterministic Turing machine with $k$ tapes (a $k$-tape NTM, for short) is a $7$-tuple $M = (\Sigma, \Gamma, Z, \delta, z_0, \Box, F)$, where $\Sigma$ is the input alphabet, $\Gamma \supseteq \Sigma$ is the work alphabet, $Z$ is a finite set of states disjoint with $\Gamma$, $\delta : Z \times \Gamma^k \to \mathfrak{P}(Z \times \Gamma^k \times \{L, R, N\}^k)$ is the transition function, $z_0 \in Z$ is the initial state, $\Box \in \Gamma \setminus \Sigma$ is the blank symbol, and $F \subseteq Z$ is the set of final states. Here, $\mathfrak{P}(S)$ denotes the power set of set $S$, i.e., the set of all subsets of $S$.*

*For readability, we write $(z, a) \mapsto (z', b, x)$ instead of $(z', b, x) \in \delta(z, a)$, with $z, z' \in Z$, $a, b \in \Gamma$, and $x \in \{L, R, N\}$. This transition has the following meaning. If the current state is $z$ and the head currently reads a symbol $a$, then:*

*1. $a$ is replaced by $b$,*

*2. $z'$ is the new state, and*

*3. the head moves according to $x$, i.e., the head either moves one cell to the left (if $x = L$), or one cell to the right (if $x = R$), or it does not move at all (if $x = N$).*

*The special case of a deterministic Turing machine with $k$ tapes ($k$-tape DTM, for short) is obtained by requiring that the transition function $\delta$ maps from $Z \times \Gamma^k$ to $Z \times \Gamma^k \times \{L, R, N\}^k$.*

For $k = 1$, we obtain the one-tape Turing machine, abbreviated simply by NTM and DTM, respectively. Every $k$-tape NTM or $k$-tape DTM can be simulated by a Turing machine with only one tape, where the runtime increases at most quadratically. Turing machines can be considered both as acceptors (which accept languages) and as transducers (which compute functions).
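The difference between the two kinds of transition function can be made concrete in code. The following sketch (our own encoding, assuming single-tape machines and Python dictionaries for $\delta$) shows that a nondeterministic $\delta$ maps each (state, symbol) pair to a *set* of actions, while a deterministic $\delta$ maps it to exactly one action:

```python
from typing import Literal

State = str
Symbol = str
Move = Literal['L', 'R', 'N']  # left, right, no move

# Nondeterministic: delta yields a *set* of possible actions per pair.
NTMDelta = dict[tuple[State, Symbol], set[tuple[State, Symbol, Move]]]

# Deterministic special case: exactly one action per pair.
DTMDelta = dict[tuple[State, Symbol], tuple[State, Symbol, Move]]

# In state z0 reading 'a', this NTM may either keep scanning right
# or nondeterministically "guess" that it should switch to state z1.
nondet: NTMDelta = {('z0', 'a'): {('z0', 'a', 'R'), ('z1', 'a', 'N')}}

# The DTM has no choice: one successor action only.
det: DTMDelta = {('z0', 'a'): ('z0', 'a', 'R')}
```

A DTM is thus the special case of an NTM in which every action set has exactly one element.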

**Definition 8.2 **
**(Semantics of Turing machines)**
*Let $M = (\Sigma, \Gamma, Z, \delta, z_0, \Box, F)$ be a one-tape NTM. A configuration of $M$ is a string $k \in \Gamma^* Z \Gamma^*$, where $k = \alpha z \beta$ is interpreted as follows: $\alpha\beta$ is the current tape inscription, the head reads the first symbol of $\beta$, and $z$ is the current state of $M$. On the set $\mathbb{K}_M = \Gamma^* Z \Gamma^*$ of all configurations of $M$, define a binary relation $\vdash_M$, which describes the transition from a configuration $k \in \mathbb{K}_M$ into a configuration $k' \in \mathbb{K}_M$ according to $\delta$. For any two strings $\alpha = a_1 a_2 \cdots a_m$ and $\beta = b_1 b_2 \cdots b_n$ in $\Gamma^*$, where $m \geq 0$ and $n \geq 1$, and for all $z \in Z$, define*

$$\alpha z \beta \vdash_M \begin{cases} a_1 \cdots a_m z' c\, b_2 \cdots b_n & \text{if } (z, b_1) \mapsto (z', c, N) \\ a_1 \cdots a_m c\, z' b_2 \cdots b_n & \text{if } (z, b_1) \mapsto (z', c, R) \text{ and } n \geq 2 \\ a_1 \cdots a_{m-1} z' a_m c\, b_2 \cdots b_n & \text{if } (z, b_1) \mapsto (z', c, L) \text{ and } m \geq 1. \end{cases}$$

*Two special cases need to be considered separately:*

*1. If $x = R$ and $n = 1$ (i.e., $M$'s head moves to the right and reads a $\Box$), then $a_1 \cdots a_m z b_1 \vdash_M a_1 \cdots a_m c\, z' \Box$.*

*2. If $x = L$ and $m = 0$ (i.e., $M$'s head moves to the left and reads a $\Box$), then $z b_1 b_2 \cdots b_n \vdash_M z' \Box c\, b_2 \cdots b_n$.*

*The initial configuration of $M$ on input $x$ is always $z_0 x$. The final configurations of $M$ on input $x$ have the form $\alpha z \beta$ with $z \in F$ and $\alpha, \beta \in \Gamma^*$. Let $\vdash_M^*$ be the reflexive, transitive closure of $\vdash_M$: For $k, k' \in \mathbb{K}_M$, we have $k \vdash_M^* k'$ if and only if there is a finite sequence $k_0, k_1, \ldots, k_t$ of configurations in $\mathbb{K}_M$ such that*

$$k = k_0 \vdash_M k_1 \vdash_M \cdots \vdash_M k_t = k',$$

*where possibly $k = k_0 = k_t = k'$. If $k_0 = z_0 x$ is the initial configuration of $M$ on input $x$, then this sequence of configurations is a finite computation of $M$ on $x$, and we say $M$ halts on input $x$. The language accepted by $M$ is defined by*

$$L(M) = \{ x \in \Sigma^* \mid z_0 x \vdash_M^* \alpha z \beta \text{ for some } z \in F \text{ and } \alpha, \beta \in \Gamma^* \}.$$

For NTMs, any configuration may be followed by more than one configuration. Thus, they have a *computation tree*, whose root is labelled by the initial configuration and whose leaves are labelled by the final configurations. Note that trees are special graphs (recall Definition 7.8 in Section 7.1.3 and Problem 7-2), so they have vertices and edges. The vertices of the computation tree of $M$ on input $x$ are the configurations of $M$ on input $x$. For any two configurations $k$ and $k'$ from $\mathbb{K}_M$, there is exactly one directed edge from $k$ to $k'$ if and only if $k \vdash_M k'$. A path in the computation tree of $M$ on $x$ is a sequence of configurations linked by such edges. The computation tree of an NTM can have infinite paths on which a halting configuration is never reached. For DTMs, each non-halting configuration has a unique successor configuration. Thus, the computation tree of a DTM degenerates to a linear chain.
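The step relation $\vdash_M$ of Definition 8.2 is easy to implement for the deterministic case. The following sketch simulates a one-tape DTM on configurations $\alpha z \beta$, including the two blank-extension special cases. Since the concrete machine of Example 8.1 below is given only in its figures, we demonstrate the simulator on a hypothetical DTM for the illustrative language $\{a^n b^n \mid n \geq 0\}$; all state names and the blank encoding are our own assumptions.

```python
BLANK = '_'

def step(delta, config):
    """One application of the step relation of Definition 8.2.
    A configuration is a triple (alpha, z, beta): the tape holds
    alpha + beta, the state is z, the head reads the first symbol of beta."""
    alpha, z, beta = config
    b = beta[0] if beta else BLANK
    if (z, b) not in delta:
        return None                     # no transition: the machine halts
    z2, c, move = delta[(z, b)]
    rest = list(beta[1:])
    if move == 'N':
        return (alpha, z2, [c] + rest)
    if move == 'R':                     # special case 1: step onto a blank cell
        return (alpha + [c], z2, rest if rest else [BLANK])
    if alpha:                           # move == 'L'
        return (alpha[:-1], z2, [alpha[-1], c] + rest)
    return ([], z2, [BLANK, c] + rest)  # special case 2: blank to the left

def accepts(delta, z0, finals, x, max_steps=10_000):
    """Run the DTM from the initial configuration z0·x until it halts."""
    config = ([], z0, list(x) if x else [BLANK])
    for _ in range(max_steps):
        nxt = step(delta, config)
        if nxt is None:
            return config[1] in finals  # final configuration iff state in F
        config = nxt
    raise RuntimeError("step limit exceeded")

# A DTM for {a^n b^n | n >= 0}: repeatedly erase the leftmost a
# and the rightmost b until nothing is left.
delta = {
    ('z0', 'a'): ('z1', BLANK, 'R'), ('z0', BLANK): ('z3', BLANK, 'N'),
    ('z1', 'a'): ('z1', 'a', 'R'),   ('z1', 'b'): ('z1', 'b', 'R'),
    ('z1', BLANK): ('z2', BLANK, 'L'),
    ('z2', 'b'): ('z4', BLANK, 'L'),
    ('z4', 'a'): ('z4', 'a', 'L'),   ('z4', 'b'): ('z4', 'b', 'L'),
    ('z4', BLANK): ('z0', BLANK, 'R'),
}
print(accepts(delta, 'z0', {'z3'}, 'aabb'))  # True
print(accepts(delta, 'z0', {'z3'}, 'aab'))   # False
```

Because the machine is deterministic, the sequence of configurations produced by `step` is exactly the linear chain described above; an NTM simulator would instead have to explore a computation tree.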

**Example 8.1 **Consider the language $L$ accepted by the Turing machine $M$ whose transitions are stated in Figure 8.2. Figure 8.3 provides the meaning of the single states of $M$ and the intention behind each state. See also Exercise 8.1-2.

In order to classify problems according to their computational complexity, we need to define complexity classes. Each such class is defined by a given resource function and contains all problems that can be solved by a Turing machine that requires no more of the resource (e.g., computation time or memory space) than is specified by the resource function. We consider only the resource time here, i.e., the number of steps, as a function of the input size, needed to solve (or to accept) the problem. Further, we consider only the traditional *worst-case* complexity model. That is, among all inputs of size $n$, those that require the maximum resource are decisive; one thus assumes the worst case to occur. We now define deterministic and nondeterministic time complexity classes.

**Definition 8.3 **
**(Deterministic and nondeterministic time complexity)**

*Let $M$ be a DTM with $L(M) = A$ and let $x \in \Sigma^*$ be an input. Define the time function of $M$, $\mathrm{Time}_M : \Sigma^* \to \mathbb{N}$, as follows:*

$$\mathrm{Time}_M(x) = \begin{cases} m & \text{if } M \text{ halts on input } x \text{ after exactly } m \text{ steps} \\ \text{undefined} & \text{otherwise.} \end{cases}$$

*Define the function $\mathrm{time}_M : \mathbb{N} \to \mathbb{N}$ by:*

$$\mathrm{time}_M(n) = \begin{cases} \max\{\mathrm{Time}_M(x) \mid |x| = n\} & \text{if } \mathrm{Time}_M(x) \text{ is defined for all } x \text{ with } |x| = n \\ \text{undefined} & \text{otherwise.} \end{cases}$$

*Let $M$ be an NTM with $L(M) = A$ and let $x \in \Sigma^*$ be an input. Define the time function of $M$, $\mathrm{NTime}_M : \Sigma^* \to \mathbb{N}$, as follows:*

$$\mathrm{NTime}_M(x) = \begin{cases} \min\{m \mid M \text{ accepts } x \text{ in } m \text{ steps}\} & \text{if } x \in L(M) \\ \text{undefined} & \text{otherwise.} \end{cases}$$

*Define the function $\mathrm{ntime}_M : \mathbb{N} \to \mathbb{N}$ by*

$$\mathrm{ntime}_M(n) = \max\{\mathrm{NTime}_M(x) \mid |x| = n \text{ and } \mathrm{NTime}_M(x) \text{ is defined}\}.$$

*Let $t$ be a computable function that maps from $\mathbb{N}$ to $\mathbb{N}$. Define the deterministic and nondeterministic complexity classes with time function $t$ by*

$$\mathrm{DTIME}(t) = \{A \mid A = L(M) \text{ for some DTM } M \text{ with } \mathrm{time}_M(n) \leq t(n) \text{ for almost all } n\},$$

$$\mathrm{NTIME}(t) = \{A \mid A = L(M) \text{ for some NTM } M \text{ with } \mathrm{ntime}_M(n) \leq t(n) \text{ for almost all } n\}.$$

*Let $\mathrm{Pol}$ be the set of all polynomials. Define the complexity classes P and NP as follows:*

$$\mathrm{P} = \bigcup_{t \in \mathrm{Pol}} \mathrm{DTIME}(t) \qquad \text{and} \qquad \mathrm{NP} = \bigcup_{t \in \mathrm{Pol}} \mathrm{NTIME}(t).$$

Why are the classes P and NP so important? Obviously, exponential-time algorithms cannot be considered efficient in general. Garey and Johnson compare the rates of growth of some particular polynomial and exponential time functions for certain input sizes relevant in practice, see Figure 8.4. They assume that a computer executes one million operations per second. Then all algorithms bounded by a polynomial run in a “reasonable” time for all input sizes considered there, whereas for example an algorithm with time bound $2^n$ takes more than $35$ years already for the modest input size of $n = 50$. For $n = 60$ it takes almost $400$ centuries, and for $n = 70$ a truly astronomic amount of time.
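These figures are simple arithmetic and can be checked directly. A short computation at the assumed rate of one million operations per second (the exact input sizes tabulated in Figure 8.4 may differ) gives:

```python
# Runtime comparison at one million operations per second, in the
# spirit of the Garey-Johnson table referenced as Figure 8.4.
OPS_PER_SECOND = 10**6
SECONDS_PER_YEAR = 3600 * 24 * 365

def runtime_years(steps):
    """Wall-clock years needed to execute the given number of steps."""
    return steps / OPS_PER_SECOND / SECONDS_PER_YEAR

for n in (20, 40, 50, 60):
    print(f"n = {n:2d}:  n^3 takes {n**3 / OPS_PER_SECOND:.6f} s,  "
          f"2^n takes {runtime_years(2**n):.2e} years")
```

For $n = 50$ the $2^n$ algorithm needs about $35.7$ years, and for $n = 60$ about $36{,}500$ years, i.e., roughly $366$ centuries, while the cubic algorithm stays well below a second throughout.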

The last decades have seen an impressive development of computer and hardware technology. Figure 8.5 (taken from [81]) shows that this is not enough to provide an essentially better runtime behaviour for exponential-time algorithms, even assuming that the previous trend in hardware development continues. What would happen if one had a computer that is $100$ times or even $1000$ times as fast as current computers are? For a function $t$, let $N_t$ be the maximum size of inputs that can be solved by a $t(n)$ time-bounded algorithm within one hour. Figure 8.5 (also taken from [81]) shows that a computer $1000$ times faster than today's computers increases $N_{2^n}$ by only an additive value close to $10$. In contrast, using computers with the same increase in speed, an $n^5$ time-bounded algorithm can handle problem instances about four times as large as before.
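The effect described here is elementary: a $c$-times faster machine turns a fixed time budget $t(N)$ into $c \cdot t(N)$, so for $t(n) = n^5$ the solvable input size grows by the factor $c^{1/5}$, while for $t(n) = 2^n$ it grows only by the additive term $\log_2 c$. A quick sketch (the polynomial bound $n^5$ and the speedup $1000$ are used as examples; the exact functions in Figure 8.5 may differ):

```python
import math

# Effect of a c-times faster machine on the largest input size N_t
# solvable within a fixed time budget.
def new_size_poly(n, speedup, degree=5):
    """For t(n) = n^degree: (n')^degree = speedup * n^degree."""
    return n * speedup ** (1 / degree)

def new_size_exp(n, speedup):
    """For t(n) = 2^n: 2^(n') = speedup * 2^n."""
    return n + math.log2(speedup)

print(new_size_poly(100, 1000))  # about 398: roughly four times as large
print(new_size_exp(100, 1000))   # about 110: only ten more than before
```

Since $1000^{1/5} \approx 3.98$ and $\log_2 1000 \approx 9.97$, a thousandfold speedup quadruples the reach of the polynomial algorithm but adds only about ten to the reach of the exponential one.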

Intuitively, the complexity class P contains the efficiently solvable problems. The complexity class NP contains many problems that are important in practice but currently not efficiently solvable. Examples are the satisfiability problem and the graph isomorphism problem, which will be dealt with in more detail later in this chapter. The question of whether or not the classes P and NP are equal is still open. This famous P-versus-NP question gave rise to the theory of NP-completeness, which is briefly introduced in Section 8.2.

**Exercises**

8.1-1 Can Church's thesis ever be proven formally?

8.1-2 Consider the Turing machine $M$ in Example 8.1.

*a.* What are the sequences of configurations of for inputs and , respectively?

*b.* Prove that $M$ is correct, i.e., show that $L(M) = L$.

*c.* Estimate the running time of $M$.

*d.* Show that the graph isomorphism problem and the graph automorphism problem introduced in Definition 7.8 are both in NP.