Springer, 2010. — 210 p. — ISBN 978-3-642-16532-0.
For a long time computer scientists have distinguished between fast and slow algorithms. Fast (or good) algorithms are the algorithms that run in polynomial time, which means that the number of steps required for the algorithm to solve a problem is bounded by some polynomial in the length of the input. All other algorithms are slow (or bad). The running time of slow algorithms is usually exponential. This book is about bad algorithms.
There are several reasons why we are interested in exponential time algorithms. Most of us believe that there are many natural problems which cannot be solved by polynomial time algorithms. The most famous and oldest family of hard problems is the family of NP-complete problems. Most likely there are no polynomial time algorithms solving these hard problems and in the worst-case scenario the exponential running time is unavoidable.
Every combinatorial problem is solvable in finite time by enumerating all possible solutions, i.e. by brute-force search. But is brute-force search always unavoidable? Definitely not. Already in the nineteen sixties and seventies it was known that some NP-complete problems can be solved significantly faster than by brute-force search. Three classic examples are the following algorithms for the TRAVELLING SALESMAN problem, MAXIMUM INDEPENDENT SET, and COLORING. The algorithm of Bellman and of Held and Karp from 1962 solves the TRAVELLING SALESMAN problem with n cities in time O(2^n n^2), which is much faster than the trivial O(n! n) brute-force search. In 1977, Tarjan and Trojanowski gave an O(2^{n/3}) time algorithm computing a maximum independent set in a graph on n vertices, improving on the brute-force search algorithm, which takes O(n 2^n). In 1976, Lawler constructed an O(n (1+3^{1/3})^n) time algorithm solving the COLORING problem, for which the brute-force solution runs in time O(n^{n+1}). On the other hand, for some NP-complete problems, like SATISFIABILITY, despite all developments in algorithmic techniques in the last 50 years, we know of no better algorithm than the trivial brute-force search that tries all possible solutions. It is a great intellectual challenge to find out whether enumeration of solutions is the only approach to solving NP problems in general.
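To make the first of these classics concrete, here is a sketch (ours, not the book's) of the Bellman-Held-Karp dynamic program for the TRAVELLING SALESMAN problem; the function name `held_karp` and the distance-matrix input are our own illustrative choices. It fills a table indexed by (set of visited cities, last city), which is where the O(2^n n^2) bound comes from.

```python
from itertools import combinations

def held_karp(dist):
    """Cost of an optimal tour for a distance matrix dist (n >= 2 cities).

    dp[(S, j)] = cost of the cheapest path that starts at city 0,
    visits exactly the cities in bitmask S, and ends at city j.
    """
    n = len(dist)
    # Base case: paths 0 -> j for each city j != 0.
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    # Grow the visited set one city at a time.
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = sum(1 << j for j in subset)
            for j in subset:
                prev = S ^ (1 << j)  # S without the endpoint j
                dp[(S, j)] = min(dp[(prev, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = (1 << n) - 2  # every city except the start 0
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The table has at most 2^n n entries and each is computed in O(n) time, giving O(2^n n^2); the exponential space, however, is the price paid over brute force.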
With the development of the area of exact algorithms, it has become possible to improve significantly the running times of the classical algorithms for MAXIMUM INDEPENDENT SET and GRAPH COLORING. Moreover, very often the techniques for solving NP-complete problems can be used to solve problems that are presumably harder than NP-complete problems, like #P-complete and PSPACE-complete problems. On the other hand, for some problems, like the TRAVELLING SALESMAN problem, no progress has been made in the last 50 years. Finding an explanation for the wide variation in the worst-case complexities of known exact algorithms is another challenge. Intellectual curiosity is not the only reason for the study of exponential algorithms. There are certain applications that require exact solutions of NP-hard problems, although this might only be possible for moderate input sizes. And in some situations the preference for polynomial time over exponential time is debatable. Richard Lipton, in his blog Gödel’s Lost Letter and P≠NP, attributes the following saying to Alan Perlis, the first Turing Award winner: “for every polynomial-time algorithm you have, there is an exponential algorithm that I would rather run.” The point is simple: n^3 > 1.0941^n for n ≤ 100, so on instances of moderate size the exponential time algorithm can be preferable to polynomial time algorithms. And a reduction of the base of the exponential running time, say from O(1.8^n) to O(1.7^n), increases the size of the instances solvable within a given amount of time by a constant multiplicative factor, whereas running a given exponential algorithm on a faster computer can enlarge that size only by a (small) additive factor. Thus bad exponential algorithms are not that bad in many situations!
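A quick numeric sanity check (ours, not the book's) of the arithmetic behind both claims:

```python
import math

# Claim 1: n^3 exceeds 1.0941^n on moderate instance sizes
# (checked here for n = 2..100; the exponential only overtakes
# the cubic at considerably larger n).
poly_loses_up_to_100 = all(n**3 > 1.0941**n for n in range(2, 101))

# Claim 2: shrinking the base from 1.8 to 1.7 multiplies the solvable
# instance size by a constant: if 1.8^n equals the time budget, then
# 1.7^m equals the same budget for m = n * log(1.8)/log(1.7).
size_factor = math.log(1.8) / math.log(1.7)  # roughly 1.11
```

So an instance-size gain of about 11 percent from the better base, at every budget, versus a fixed additive gain from faster hardware.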
And the final reason of our interest in exact algorithms is of course that the design and analysis of exact algorithms leads to a better understanding of NP-hard problems and initiates interesting new combinatorial and algorithmic challenges.
Our interest in exact algorithms was sparked by an amazing survey by Gerhard Woeginger. This influential survey fascinated many researchers and, we think, was one of the main reasons why the area of exact algorithms changed drastically in the last decade. Though still in a nascent stage, the study of exact algorithms is now a dynamic and vibrant area. While there are many open problems, and new techniques to solve these problems are still appearing, we believe it is the right moment to summarize the work on exact algorithms in a book. The main intention of this book is to provide an introduction to the area and explain the most common algorithmic techniques. We have tried to make the results accessible not only to specialists working in the area but to a more general audience of students and researchers in Computer Science, Operations Research, Optimization, Combinatorics, and other fields related to algorithmic solutions of hard optimization problems. Therefore, in presenting the material we preferred to give the basic ideas and to avoid improvements that require significant technical effort.
Branching
Dynamic Programming
Inclusion-Exclusion
Treewidth
Feedback Vertex Set
Subset Convolution
Local Search and SAT
Split and List
Time Versus Space
Miscellaneous
Conclusions, Open Problems and Further Directions