By Theorem 8.6, SAT and 3-SAT are both NP-complete. Thus, if SAT or 3-SAT were in P, it would immediately follow that P = NP, which is considered unlikely. Hence, it is very unlikely that there is a polynomial-time deterministic algorithm for SAT or 3-SAT. But what is the runtime of the best deterministic algorithms for them? And what about randomised algorithms? Let us focus on the problem 3-SAT in this section.

The “naive” deterministic algorithm for 3-SAT works as follows: Given a Boolean formula φ with n variables, sequentially check the 2^n possible assignments to the variables of φ. Accept as soon as a satisfying assignment is found, otherwise reject. Obviously, this algorithm runs in time Õ(2^n). Can this upper bound be improved?
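
For concreteness, the naive algorithm can be sketched in Python as follows; the clause encoding (lists of signed integers, a DIMACS-style convention) and the function name are choices made for this illustration, not part of the text.

```python
from itertools import product

def naive_sat(clauses, n):
    """Naive 2^n algorithm: test every assignment of the n variables.

    `clauses` is a list of clauses, each a list of signed integers
    (literal i means variable i, -i its negation).  Returns a satisfying
    assignment as a dict {variable: bool}, or None if none exists.
    """
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
phi = [[1, 2], [-1, 3], [-2, -3]]
```

Since the loop enumerates all 2^n assignments in the worst case, the exponential running time is immediate from the code.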

Yes, it can. We will present a slightly better deterministic algorithm for 3-SAT that still runs in exponential time, namely in time Õ(1.9129^n), where the Õ notation neglects polynomial factors, as is common for exponential-time algorithms.

*Footnote.* The result presented here is not the best one known; see Figure 8.7 for further improvements.

The point of such an improvement is that an Õ(c^n) algorithm, where c < 2 is a constant, can deal with larger instances than the naive Õ(2^n) algorithm in the same amount of time before the exponential growth rate eventually hits and the running time becomes infeasible. For example, if c = √2 ≈ 1.414, then c^(2n) = 2^n. Thus, an Õ(√2^n) algorithm can deal with inputs twice as large as the naive algorithm in the same amount of time. Doubling the size of inputs that can be handled by some algorithm can be quite important in practice.
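
A quick numeric sanity check of this doubling claim (the concrete values of c and n are chosen only for illustration):

```python
import math

c = math.sqrt(2)      # growth rate of the faster O(c^n) algorithm
n = 40                # instance size feasible for the naive 2^n algorithm

# The time budget 2^n suffices for the faster algorithm on 2n variables,
# since c^(2n) = (sqrt(2))^(2n) = 2^n:
assert math.isclose(c ** (2 * n), 2 ** n)
```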

The deterministic Õ(1.9129^n) algorithm for 3-SAT is based on a simple “*backtracking*” idea. Backtracking is useful for problems whose solutions consist of n components, each having more than one choice possibility. For example, a solution of 3-SAT is composed of the n truth values of a satisfying assignment, and each such truth value can be either true (represented by 1) or false (represented by 0).

The idea is the following. Starting from the initially empty solution (i.e., the empty truth assignment), we construct step by step, by recursive calls to our backtracking algorithm, a larger and larger partial solution, until eventually a complete solution is found, if one exists. In the resulting recursion tree, the root is marked by the empty solution, and the leaves are marked with complete solutions of the problem.

*Footnote.* The inner vertices of the recursion tree represent the recursive calls of the algorithm, its root is the first call, and the algorithm terminates at the leaves without any further recursive call.

If the current branch in the recursion tree is “dead” (which means that the subtree underneath it cannot lead to a correct solution), one can prune this subtree and “backtrack” to pursue another extension of the partial solution constructed so far. This pruning may save time in the end.

*Backtracking-SAT(φ, β)*

1 `IF` (β assigns truth values to all variables of φ)
2    `THEN RETURN` φ(β)
3    `ELSE IF` (β makes one of the clauses of φ false)
4       `THEN RETURN` 0    ▷ “dead branch”
5       `ELSE IF` `Backtracking-SAT`(φ, β0) = 1
6          `THEN RETURN` 1
7          `ELSE RETURN` `Backtracking-SAT`(φ, β1)

The input of algorithm *Backtracking-SAT* is a Boolean formula φ and a partial assignment β to some of the variables of φ. The algorithm returns a Boolean value: 1 if the partial assignment β can be extended to a satisfying assignment of all variables of φ, and 0 otherwise. Partial assignments are here considered to be strings of length at most n over the alphabet {0, 1}.

The first call of the algorithm is *Backtracking-SAT*(φ, ε), where ε denotes the empty assignment. If it turns out that the partial assignment β constructed so far makes one of the clauses of φ false, it cannot be extended to a satisfying assignment of φ. Thus, the subtree underneath the corresponding vertex in the recursion tree can be pruned; see also Exercise 8.3-1.
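
A minimal Python sketch of this backtracking procedure, under a signed-integer clause encoding (literal i means variable i, -i its negation); the encoding and all names are choices of this sketch:

```python
def backtracking_sat(clauses, n, beta=()):
    """Sketch of Backtracking-SAT.  `beta` is a partial assignment: a tuple
    of 0/1 values for the variables 1, ..., len(beta).  Returns 1 if beta
    can be extended to a satisfying assignment of all n variables, else 0."""

    def clause_false(clause):
        # A clause is already false iff every literal in it is assigned
        # under beta and evaluates to false.
        return all(abs(lit) <= len(beta) and beta[abs(lit) - 1] != (lit > 0)
                   for lit in clause)

    if len(beta) == n:                 # beta assigns all variables: return phi(beta)
        return 0 if any(clause_false(c) for c in clauses) else 1
    if any(clause_false(c) for c in clauses):
        return 0                       # "dead branch": prune this subtree
    if backtracking_sat(clauses, n, beta + (0,)) == 1:
        return 1
    return backtracking_sat(clauses, n, beta + (1,))

# A small satisfiable 3-CNF formula in this encoding:
phi = [[1, 2, -3], [-1, 2, 3], [-2, -3]]
```

The early `return 0` on a falsified clause is exactly the pruning step: no extension of β is ever explored once some clause has all its literals assigned and false.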

To estimate the runtime of *Backtracking-SAT*, note that this algorithm can be specified so as to select the variables in an “intelligent” order that minimises the number of steps needed to evaluate the variables in any clause. Consider an arbitrary, fixed clause C of the given formula φ. Each satisfying assignment of φ assigns truth values to the three variables in C. There are 2^3 = 8 possibilities to assign truth values to these three variables, and one of them can certainly be excluded: the assignment that makes C false. The corresponding vertex in the recursion tree of `Backtracking-SAT` can thus be pruned. Depending on the structure of φ, there may exist further “dead” branches whose subtrees can also be pruned. However, since we are estimating an upper bound for the worst case, we do not consider these additional “dead” subtrees. It follows that

Õ((2^3 − 1)^(n/3)) = Õ(7^(n/3)) = Õ(1.9129^n)

is an upper bound for *Backtracking-SAT* in the worst case. This bound slightly improves upon the trivial upper bound of Õ(2^n) for the “naive” algorithm for 3-SAT.
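
The arithmetic behind the exponent can be checked directly:

```python
# Pruning one of the 8 assignments per clause leaves 7 branches per block
# of three variables, i.e. 7^(n/3) = (7^(1/3))^n leaves in the worst case:
assert abs(7 ** (1 / 3) - 1.9129) < 1e-4
```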

As mentioned above, the deterministic time complexity of 3-SAT can be improved even further. For example, Monien and Speckenmeyer [186] proposed a divide-and-conquer algorithm with runtime Õ(1.618^n). Dantsin et al. [59] designed a deterministic “local search with restart” algorithm whose runtime is Õ(1.481^n), which was further improved by Brueggemann and Kern [34] in 2004 to a Õ(1.4726^n) bound.

There are also randomised algorithms that have an even better runtime. One such algorithm will be presented now: a “random-walk” algorithm due to Schöning [228].

A *random walk* can be done on a specific structure, such as the Euclidean space, an infinite grid, or a given graph. Here we are interested in random walks on a graph that represents a certain stochastic automaton. To describe such automata, we first introduce the notion of a finite automaton.

A *finite automaton* can be represented by its transition graph, whose vertices are the states of the finite automaton, and whose directed edges, marked by the symbols of the alphabet Σ, are the transitions between states. One designated vertex is the *initial state* in which the computation of the automaton starts. In each step of the computation, the automaton reads one input symbol (proceeding from left to right) and moves to the next state along the edge marked by the symbol read. Some vertices represent *final states*. If such a vertex is reached after the entire input has been read, the automaton accepts the input; otherwise, it rejects. In this way, a finite automaton accepts a set of input strings, which is called its language.

A *stochastic automaton* S is a finite automaton whose edges are additionally marked by probabilities. If the edge from u to v in the transition graph of S is marked by p, where 0 ≤ p ≤ 1, then S moves from state u to state v with probability p. The process of random transitions of a stochastic automaton is called a *Markov chain* in probability theory. Of course, the acceptance of strings by a stochastic automaton depends on the transition probabilities.
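
As an illustration of such probability-marked transitions, here is a minimal Python simulation of a random walk on a stochastic automaton; the dictionary representation and all names are assumptions of this sketch:

```python
import random

def markov_walk(transitions, start, steps, rng=random):
    """Simulate `steps` random transitions of a stochastic automaton.

    `transitions[u]` lists (v, p) pairs whose probabilities p sum to 1;
    the automaton moves from state u to state v with probability p.
    """
    state = start
    for _ in range(steps):
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if r < acc:          # pick the edge whose interval contains r
                state = nxt
                break
    return state

# Toy chain: from "u" stay with probability 1/3, move to "v" with 2/3;
# "v" is absorbing.
chain = {"u": [("u", 1 / 3), ("v", 2 / 3)], "v": [("v", 1.0)]}
```

After many steps, a walk started in "u" ends in the absorbing state "v" with overwhelming probability, which is the kind of long-run behaviour studied for Markov chains.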

*Random-SAT(φ)*

 1 `FOR` i := 1 `TO` ⌈(4/3)^n⌉    ▷ n is the number of variables in φ
 2    `DO` randomly choose an assignment a ∈ {0, 1}^n under the uniform distribution
 3       `FOR` j := 1 `TO` 3n
 4          `DO IF` φ(a) = 1
 5             `THEN RETURN` a    ▷ the satisfying assignment a to φ
 6             `ELSE` choose a clause C = (x ∨ y ∨ z) of φ with C(a) = 0
 7                randomly choose a literal ℓ ∈ {x, y, z} under the uniform distribution
 8                determine the bit a_ℓ ∈ {0, 1} in a assigning ℓ
 9                swap a_ℓ to 1 − a_ℓ in a
10 `RETURN` “φ is not satisfiable”

Here, we are not interested in recognising languages by stochastic automata; rather, we will use them to describe the random walk performed by the randomised algorithm *Random-SAT*. Given a Boolean formula φ with n variables, *Random-SAT* tries to find a satisfying assignment of φ, if one exists, by randomly walking through the space of assignments.

On input φ, *Random-SAT* starts by guessing a random initial assignment a, where each bit of a takes on the values 0 and 1 with probability 1/2 each. Suppose that φ is satisfiable. Let â be an arbitrary, fixed satisfying assignment of φ. Let X be the random variable that expresses the Hamming distance between a and â, i.e., the number of bits in which a and â differ. Clearly, X can take the values j ∈ {0, 1, …, n}, and it is distributed according to the binomial distribution with parameters n and 1/2.

*Random-SAT* now checks whether the initial assignment a already satisfies φ, and if so, it accepts. Otherwise, if a does not satisfy φ, there must exist a clause in φ not satisfied by a. *Random-SAT* picks any such clause, randomly chooses under the uniform distribution some literal in this clause, and “flips” the corresponding bit in the current assignment a. This process is repeated at most 3n times for the current assignment a.

Figure 8.6 shows a stochastic automaton S whose edges are not marked by symbols but only by transition probabilities. The computation of *Random-SAT* on input φ can be viewed as a random walk on S as follows. Starting from the initial state s, which will never be reached again later, *Random-SAT* moves with its initial step to one of the states j ∈ {0, 1, …, n} according to the binomial distribution with parameters n and 1/2. The state j reached corresponds to the Hamming distance between the random initial assignment a and the fixed satisfying assignment â. Afterwards, each bit flip moves the walk from the current state j either one step to the left, to state j − 1 (if the Hamming distance to â decreases), or one step to the right, to state j + 1 (if it increases).

The fixed assignment â satisfies φ, so it sets at least one literal in each clause of φ to true. If we fix *exactly* one of the literals satisfied by â in each clause, say ℓ, then *Random-SAT* makes a step to the left if and only if ℓ was chosen by *Random-SAT* in the currently violated clause. Since each clause contains three literals, *Random-SAT* moves one step to the left with probability at least 1/3 and one step to the right with probability at most 2/3.

If the state 0 is eventually reached after at most 3n iterations of this process, a and â have Hamming distance 0, so a satisfies φ, and *Random-SAT* returns a and halts accepting. Of course, one might also hit a satisfying assignment (distinct from â) in some state j ≠ 0. But since this would only increase the acceptance probability, we neglect this possibility here.

If this process is repeated 3n times unsuccessfully, then the initial assignment a was chosen so badly that *Random-SAT* now dumps it and restarts the above process from scratch with a new initial assignment. The entire procedure is repeated at most t times, where t = ⌈(4/3)^n⌉. If it is still unsuccessful after t trials, *Random-SAT* halts and rejects its input, returning “φ is not satisfiable”.
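
The procedure just described can be sketched in Python as follows; the signed-integer clause encoding (literal i means variable i, -i its negation) and all names are choices of this sketch, not part of the text:

```python
import random

def random_sat(clauses, n, rng=random):
    """Sketch of Random-SAT.  Returns a satisfying assignment as a 0/1
    list, or None after all trials fail (so the answer "not satisfiable"
    may be wrong with small probability)."""
    trials = int((4 / 3) ** n) + 1                    # t = ceil((4/3)^n) trials
    for _ in range(trials):
        # Guess a uniformly random initial assignment a in {0,1}^n.
        a = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(3 * n):                        # at most 3n bit flips
            unsatisfied = [c for c in clauses
                           if not any(a[abs(l) - 1] == (l > 0) for l in c)]
            if not unsatisfied:
                return a                              # a satisfies the formula
            # Pick a violated clause and flip a uniformly chosen literal's bit.
            lit = rng.choice(rng.choice(unsatisfied))
            a[abs(lit) - 1] ^= 1
        # 3n flips failed: dump a and restart with a fresh assignment.
    return None                                       # "phi is not satisfiable"

phi = [[1, 2, 3], [-1, 2, 3], [1, -2, 3], [1, 2, -3]]  # satisfiable example
```

Note that the `None` branch corresponds to the one-sided error of the algorithm: it can only err by rejecting a satisfiable formula, never by accepting an unsatisfiable one.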

Since the probability of moving away from state 0, to the right, is larger than the probability of moving toward state 0, to the left, one might be tempted to think that the success probability of *Random-SAT* is very small. However, one should not underestimate the chance that already the initial step from s reaches a state close to 0. The closer to state 0 the random walk starts, the higher is the probability of reaching state 0 by random steps to the left or to the right.

We only give a rough sketch of estimating the success probability (assuming that φ is satisfiable) and the runtime of *Random-SAT*. For convenience, suppose that 3 divides n. Let p_i be the probability for the event that *Random-SAT*, starting the random walk in state i, reaches the final state 0 by making exactly 2i steps to the left and i steps to the right. In the worst case, each step goes to the left with probability 1/3 and to the right with probability 2/3, so

p_i ≥ C(3i, i) · (1/3)^(2i) · (2/3)^i,

where C(m, k) denotes the binomial coefficient “m choose k”.

Now, let q_i be the probability for the event that *Random-SAT* reaches the state i with its initial step. Since the initial assignment a is chosen uniformly at random, we clearly have

q_i = C(n, i) · 2^(−n).

Finally, let p be the probability for the event that *Random-SAT* reaches the final state 0 within the inner loop. Of course, this event can also occur when starting from a state i > n/3. Thus,

p ≥ Σ_{i=0}^{n/3} q_i · p_i ≥ Σ_{i=0}^{n/3} C(n, i) · 2^(−n) · C(3i, i) · (1/3)^(2i) · (2/3)^i.

Approximating this sum by the entropy function and estimating the binomial coefficients from (8.2) and (8.3) in the single terms by Stirling's formula, we obtain a lower bound of roughly (3/4)^n for p, up to a factor polynomial in n.
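
The lower bound can be checked numerically for small n by evaluating the truncated sum directly (the function name is this sketch's own):

```python
from math import comb

def success_prob_sum(n):
    """Evaluate the truncated sum bounding p from below (3 must divide n):
    sum over i = 0..n/3 of C(n,i) * 2^-n * C(3i,i) * (1/3)^(2i) * (2/3)^i."""
    assert n % 3 == 0
    return sum(comb(n, i) * 2 ** -n
               * comb(3 * i, i) * (1 / 3) ** (2 * i) * (2 / 3) ** i
               for i in range(n // 3 + 1))

n = 30
p = success_prob_sum(n)
# p lies below (3/4)^n but within a polynomial factor of it.
```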

To reduce the error probability, *Random-SAT* performs a total of t independent trials, each starting with a new initial assignment a. For each trial, the probability of success (i.e., the probability of finding a satisfying assignment of φ, if one exists) is at least p, so the error is bounded by 1 − p. Since the trials are independent, these error probabilities multiply, which gives an error of at most (1 − p)^t ≤ e^(−p·t). Thus, the total probability of success of *Random-SAT* after t trials is at least 1 − e^(−p·t).

The particular choice of this value of t can be explained as follows. The runtime of a randomised algorithm that performs independent trials, such as *Random-SAT*, is roughly reciprocal to the success probability p of one trial. In particular, the error probability (i.e., the probability that in none of the t trials a satisfying assignment of φ is found, even though φ is satisfiable) can be estimated by (1 − p)^t ≤ e^(−p·t). If a fixed error ε is not to be exceeded, it is enough to choose t such that e^(−p·t) ≤ ε; equivalently, such that t ≥ (1/p)·ln(1/ε). Since p is roughly (3/4)^n, this can be accomplished, up to factors polynomial in n, by choosing t = ⌈(4/3)^n⌉. Hence, the runtime of the algorithm is in Õ((4/3)^n) = Õ(1.334^n).
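
A small Python calculation illustrating this choice of t (the values of n and ε are arbitrary):

```python
from math import ceil, log

def trials_needed(p, eps):
    """Smallest t with e^(-p*t) <= eps, i.e. t = ceil(ln(1/eps) / p);
    then the failure probability satisfies (1-p)^t <= e^(-p*t) <= eps."""
    return ceil(log(1 / eps) / p)

n = 20
p = 0.75 ** n                 # rough per-trial success probability
t = trials_needed(p, 0.01)    # trials needed for error at most 1 percent
```

Since 1/p = (4/3)^n here, the number of trials, and hence the overall runtime, grows as (4/3)^n up to the constant factor ln(1/ε).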

**Exercises**

8.3-1 Run the algorithm *Backtracking-SAT* for the Boolean formula φ and construct step by step a satisfying assignment of φ. What does the resulting recursion tree look like?