There are many other settings where a human controller has access to computer-generated candidate solutions. This section lists four important cases and concludes with miscellaneous remarks.
In an anytime setting the computer starts to work on a problem, and almost from the very first moment candidate solutions (the best ones found so far) are shown on the monitor. Of course, the early outputs of such a process are often only preliminary and approximate solutions, without any guarantee of optimality and far from perfect.
An example: Iterative deepening performs multiple depth-limited searches, gradually increasing the depth limit on each iteration. Assume that the task is to seek good solutions in a large rooted tree T. Let f be the function which is to be maximised, and let S_d be the set of all nodes in the tree at distance d from the root.
   Opt := -infinity
   FOR d := 1, 2, 3, ...
      DO determine the maximum Max of f on S_d
         IF Max > Opt
            THEN Opt := Max
All the time the currently best solution Opt is shown on the monitor, and the operator may stop the search at any moment.
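The iterative deepening scheme above can be sketched in Python. The tree interface (a root node, a `children` function, an objective `f`, and a `report` callback standing in for the monitor) is an assumption for illustration, not part of the original scheme:

```python
import itertools

def iterative_deepening(root, children, f, report):
    """Anytime search: visit the tree level by level and report the
    best objective value Opt found so far after each depth d."""
    opt = float("-inf")
    level = [root]
    for d in itertools.count(1):
        # S_d: all nodes at distance d from the root
        level = [c for node in level for c in children(node)]
        if not level:
            return opt            # tree exhausted
        best_at_d = max(f(node) for node in level)
        if best_at_d > opt:
            opt = best_at_d
        report(d, opt)            # Opt is visible at all times

# toy tree: node n has children 2n and 2n+1, leaves are the nodes 8..15
best = iterative_deepening(
    1,
    lambda n: [2 * n, 2 * n + 1] if n < 8 else [],
    lambda n: n,
    lambda d, opt: None,
)
```

An operator watching the `report` output would see the incumbent improve from 3 (depth 1) to 7 (depth 2) to 15 (depth 3) before the search terminates.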
Iterative deepening is not only interesting for HCI but also has many applications in fully automatic computing. A prominent example is game tree search: in tournament chess a program has a fixed amount of time for 40 moves, and iterative deepening is the key instrument for finding a balanced distribution of time among the individual alpha-beta searches.
Another frequent anytime scenario is the repeated application of a heuristic. Let f be some complicated function for which elements with large function values are sought, and let H be a probabilistic heuristic that returns a candidate solution for this maximisation problem. For instance, H may be local search or some other sort of hill-climbing procedure. H is applied again and again in independent runs, and all the time the best solution found so far is shown.
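A minimal sketch of this restart scheme, with a hypothetical hill-climbing heuristic on the toy objective f(x) = -(x - 3)^2 standing in for H:

```python
import random

def restart_heuristic(H, runs):
    """Apply an independent probabilistic heuristic H again and again,
    always keeping the best solution found so far (anytime behaviour)."""
    best_x, best_val = None, float("-inf")
    for _ in range(runs):
        x, val = H()                      # one independent run of H
        if val > best_val:
            best_x, best_val = x, val     # new incumbent, shown on the monitor
    return best_x, best_val

def hill_climb():
    # hypothetical heuristic: integer hill-climbing on f(x) = -(x - 3)^2
    f = lambda x: -(x - 3) ** 2
    x = random.randint(-10, 10)
    while True:
        better = [y for y in (x - 1, x + 1) if f(y) > f(x)]
        if not better:
            return x, f(x)                # no improving neighbour: stop
        x = better[0]

x_best, v_best = restart_heuristic(hill_climb, 20)
```

On this concave toy objective every run reaches the optimum x = 3; with a rugged objective the independent restarts would return different local optima, and only the incumbent is displayed.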
A third anytime application is in Monte Carlo simulations, for instance in Monte Carlo integration. A static approach would take objective values at a prescribed number of random points (1,000 or so) and give the average value in the output. However, already the intermediate average values (after 1, 2, 3, etc. data points, or after each block of 10 or 50 points) may give early indications of the region in which the final result might fall and of whether it really makes sense to execute all the many runs. An additional display of variances and frequencies of outliers gives further information for deciding when best to stop the Monte Carlo run.
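The following sketch integrates a function over [0, 1] and records the running mean and variance after each block of sample points; the block size and the example integrand x^2 (true value 1/3) are illustrative choices:

```python
import random

def monte_carlo_anytime(g, n, block=50, seed=0):
    """Monte Carlo integration of g over [0, 1], reporting the running
    mean and variance after each block of points so the operator can
    judge early where the final result might fall."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    reports = []
    for i in range(1, n + 1):
        y = g(rng.random())
        total += y
        total_sq += y * y
        if i % block == 0:
            mean = total / i
            var = total_sq / i - mean * mean
            reports.append((i, mean, var))   # intermediate values for the monitor
    return reports

reports = monte_carlo_anytime(lambda x: x * x, 10_000)
final_mean = reports[-1][1]   # close to the true integral 1/3
```

Watching the `reports` sequence, the operator sees the estimate stabilise long before all 10,000 points are used and may decide to stop early.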
In human-computer systems anytime algorithms also help in the following way: during the ongoing computation the human may already evaluate and compare preliminary candidate solutions.
Genetic Algorithms are search algorithms based on the mechanics of natural selection and natural genetics. Instead of single solutions, whole populations of solutions are manipulated. Genetic Algorithms are often applied to large and difficult problems where traditional optimisation techniques fall short.
Interactive evolution is an evolutionary algorithm that needs human interaction: the user selects one or more individuals of the current population which survive and reproduce (with mutations) to constitute the new generation. So in interactive evolution the user plays the role of the objective function and thus has a rather active role in the search process.
In fields like art, architecture, and photo processing (including the design of phantom photos) Generative Design is used as a special form of interactive evolution. In Generative Design all solutions of the current generation are shown simultaneously on the screen. Here "all" typically means some small number between 4 and 20. Think of photo processing as an example, where the user adjusts contrast, brightness, colour intensities, and sharpness. The user inspects the current candidate realizations and by a single mouse click marks the one he likes most. All other solutions are deleted, and mutants of the marked one are generated. The process is repeated (open end) until the user is happy with the outcome. For people without practical experience in generative design it may sound unbelievable, but even from poor starting solutions the process often needs only a few iterations to reach acceptable outcomes.
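A minimal sketch of the generation-and-selection loop. In real generative design the human's mouse click decides; here a scoring function against a hypothetical "brightness the user has in mind" stands in for that click, purely so the loop can run automatically:

```python
import random

def generative_design(start, mutate, pick, generations, pop_size=8):
    """One favourite per generation: show pop_size mutants, keep the
    one the user marks, delete the rest, repeat."""
    favourite = start
    for _ in range(generations):
        population = [mutate(favourite) for _ in range(pop_size)]
        favourite = pick(population)      # stands in for the mouse click
    return favourite

rng = random.Random(1)
target = 10.0                             # hypothetical taste of the user
result = generative_design(
    start=0.0,                            # deliberately poor starting solution
    mutate=lambda x: x + rng.uniform(-1.0, 1.0),
    pick=lambda pop: min(pop, key=lambda x: abs(x - target)),
    generations=30,
)
```

Even from the poor start 0.0, repeated best-of-eight selection brings the result close to the target within a few dozen iterations, mirroring the fast convergence described above.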
Many problems are high-dimensional, having lots of parameters to adjust. If sets of good solutions to such a problem are generated by repeated probabilistic heuristics, the following interactive multi-stage procedure may be applied: First, several heuristic solutions are generated and inspected by a human expert. The human especially looks for "typical" patterns in the solutions and "fixes" them. Then more heuristic solutions are generated under the side condition that they all contain the fixed parts. The human inspects again and fixes more parts. The process is repeated until finally everything is fixed, resulting in one specific (and hopefully good) solution.
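The inspect-and-fix loop can be sketched as follows. The solution domain (sets of items "a" to "e"), the toy heuristic, and the 95 per cent quota standing in for the expert's eye for "typical" patterns are all illustrative assumptions:

```python
import random

rng = random.Random(0)
ITEMS = "abcde"

def generate(fixed):
    """Hypothetical probabilistic heuristic: a solution is a set of
    items containing all fixed parts; the pattern 'a' shows up in
    every run, the other items only sometimes."""
    sol = set(fixed) | {"a"}
    sol |= {i for i in "bcde" if rng.random() < 0.5}
    return sol

def common_parts(solutions, quota=0.95):
    """Stand-in for the human expert: fix every item occurring in at
    least 95 per cent of the inspected solutions."""
    n = len(solutions)
    return {i for i in ITEMS if sum(i in s for s in solutions) >= quota * n}

fixed = set()
for _ in range(3):                        # repeated inspect-and-fix rounds
    batch = [generate(fixed) for _ in range(100)]
    fixed |= common_parts(batch)
```

After the rounds, exactly the ubiquitous pattern "a" has been fixed, while the unstable items remain free, which is the intended division of labour between expert and heuristic.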
In multicriteria decision making not one but two or more objective functions are given, and the task is to find admissible solutions which are as good as possible with respect to all of these objectives. Typically the objectives are more or less contradictory, excluding the existence of a unanimous optimum. The concept of "efficient solutions" is helpful here: a solution is efficient if there exists no other solution which is better with respect to at least one objective and not worse with respect to all the others.
A standard first step in multicriteria decision making is to compute the set of all efficient solutions. In the bicriteria case the “efficient frontier” can be visualized in a 2-dimensional diagram, giving the human controller a good overview of what is possible.
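For small explicit solution sets, the filtering step can be sketched directly from the definition (assuming, for concreteness, that every objective is to be maximised):

```python
def dominates(p, q):
    # p dominates q: not worse in any objective, strictly better in one
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def efficient_solutions(points):
    """Filter a list of objective vectors down to the efficient ones:
    a point is kept iff no other point dominates it."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# bicriteria example: (2, 2) is dominated by (2, 4), the rest form the frontier
frontier = efficient_solutions([(1, 5), (2, 4), (3, 3), (2, 2), (0, 6)])
```

Plotting the two coordinates of `frontier` yields exactly the 2-dimensional efficient-frontier diagram mentioned above.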
Graphical Visualisation of Computer Solutions
It is not enough that a computer generates good candidate solutions; the results also have to be visualised in appropriate ways. In the case of a single solution, important parts and features have to be highlighted. Even more importantly, in the case of competing solutions their differences and special features have to be stressed.
Permanent Computer Runs with Short Intermediate Human Control
A nickname for this is "1+23h mode", coming from the following picture: Each day the human sits in front of the computer for one hour only. In this hour he looks at the computer results from the previous 23 hours, interacts with the machine, and instructs the computer what to do in the next 23 hours. So the human invests only a small portion of his time while the computer is running permanently.
An impressive example comes from correspondence chess, where computer help is officially permitted. Most top players have one or several machines running around the clock, analysing the most critical positions and lines of play. The human players collect these computer results and spend only a short time per day analysing them.
Unexpected Errors and Numerical Instabilities
"Every software has errors!" This rule of thumb is often forgotten; people too readily believe what the monitor or the description of a software product promises. However, running independent programs for the very same task (with a unique optimal solution) results in different outputs unexpectedly often. Numerical stability, too, does not come for free: different programs for the same problem may yield different results due to rounding errors. Such problems may be recognised by applying independent programs.
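A tiny illustration of this cross-checking idea, using two "independent programs" for the same task, namely summing a list of floating-point numbers; the helper `cross_check` and the tolerance are illustrative assumptions:

```python
import math

def cross_check(implementations, instance, tol=1e-9):
    """Run independent implementations of the same task on one instance
    and flag any disagreement beyond a tolerance -- a cheap guard
    against software bugs and numerical instability."""
    results = [impl(instance) for impl in implementations]
    agree = all(abs(r - results[0]) <= tol for r in results)
    return agree, results

# naive left-to-right summation versus exact compensated summation
data = [1e16, 1.0, -1e16]
agree, results = cross_check([sum, math.fsum], data)
# naive summation loses the 1.0 to rounding; fsum returns exactly 1.0
```

The disagreement flag is the valuable output: it tells the operator that at least one of the programs cannot be trusted on this instance, without saying which one.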
Of course, hardware also has (physical) errors, especially in times of ongoing miniaturisation. So in crucial situations it is a good strategy to run identical programs on fully independent machines, best of all operated by independent human operators.
23.3-1 For a Travelling Salesperson Problem with 200 random points in the unit square and Euclidean distances, generate 100 locally optimal solutions (with 2-exchanges, see Subsection 23.2.1) and count which edges occur how often in these 100 solutions. Define some threshold t and fix all edges which occur in at least t of the solutions. Generate another 100 local optima, without allowing the fixed edges to be exchanged. Repeat until convergence and compare the final result with typical local optima from the first series.
In the technical report [293] lots of experiments on the penalty method for various sum-type problems, dimensions, failure widths and probabilities are described and analysed. The proof of Theorem 23.3 was originally given in [13]. In e-commerce, multiple-choice systems are often called "Recommender Systems" [285], with customers in mind for whom interesting products have to be listed. Understandably, commercial search engines and e-companies keep their shortlisting strategies secret.
A classic book on Genetic Algorithms is [134]. Interactive Evolution and Generative Design are described in [30]. There is a lot of literature on multicriteria decision making; one of the standard books is [123].
In the book [10] the story of 3-Hirn and its successes in tournament chess is told. The final match between "3-Hirn" and GM Yussupov is described in [12]. [11] gives more general information on improved game play by multiple computer hints. In [14] several good k-best realizations of iterative deepening in game tree search are exhibited and discussed. Screenshots of these realizations can be inspected at http://www.minet.uni-jena.de/www/fakultaet/iam/personen/k-best.html. [161] describes the technical background of advanced programs for playing chess and other games.
There is a nice online repository, run by M. Zuker and D.H. Turner, at http://www.bioinfo.rpi.edu/applications/mfold/. The user may enter, for instance, RNA strings, and in real time alternative foldings for these strings are generated. Amongst other data, the user may enter parameters for "maximum number of computed foldings" (default = 50) and "percent suboptimality number" (default = 5%).