Search and Phase Transitions

John Slaney, Sylvie Thiébaux and Toby Walsh

"Are We Nearly There Yet?"

This is the abstract of the paper:

Philip Kilby, John Slaney, Sylvie Thiébaux and Toby Walsh.
Estimating Search Tree Size.
Proceedings of the 21st National Conference of the American Association for Artificial Intelligence (AAAI-2006) : 1014-1019.

We propose two new online methods for estimating the size of a backtracking search tree. The first method is based on a weighted sample of the branches visited by chronological backtracking. The second is a recursive method based on assuming that the unexplored part of the search tree will be similar to the part we have so far explored. We compare these methods against an old method due to Knuth based on random probing. We show that these methods can reliably estimate the size of search trees explored by both optimization and decision procedures. We also demonstrate that these methods for estimating search tree size can be used to select the algorithm likely to perform best on a particular problem instance.
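The "old method due to Knuth" mentioned above is well known independently of this paper: probe a random root-to-leaf path, multiply the branching factors seen along the way, and average over probes to get an unbiased estimate of the tree size. As a concrete illustration (a sketch of Knuth's method only, not of the paper's two new estimators), with the search tree abstracted as a `children` function:

```python
import random

def knuth_estimate(root, children, probes=1000):
    """Estimate the number of nodes in a tree by Knuth's random-probing
    method: walk a random root-to-leaf path, and at each level multiply
    the running weight by the branching factor observed there.  The sum
    of these weights over the path is an unbiased estimate of the tree
    size; averaging over many probes reduces the variance."""
    total = 0.0
    for _ in range(probes):
        estimate, weight, node = 1.0, 1.0, root
        while True:
            kids = children(node)
            if not kids:
                break
            weight *= len(kids)   # inverse probability of reaching a child
            estimate += weight    # estimated node count at this depth
            node = random.choice(kids)
        total += estimate
    return total / probes
```

On a uniform tree every probe returns the exact size; on the irregular trees produced by real backtracking search the estimate is unbiased but can have very high variance, which is precisely the weakness the paper's weighted-sample and recursive methods address.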

Backbones in the TSP

This is the abstract of the paper:

Philip Kilby, John Slaney and Toby Walsh.
The Backbone of the Travelling Salesperson.
Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI-2005) : 175-180.

We study the backbone of the travelling salesperson optimization problem. We prove that it is intractable to approximate the backbone with any performance guarantee, assuming P ≠ NP and there is a limit on the number of edges falsely returned. Nevertheless, in practice, it appears that much of the backbone is present in close to optimal solutions. We can therefore often find much of the backbone using approximation methods based on good heuristics. We demonstrate that such backbone information can be used to guide the search for an optimal solution. However, the variance in runtimes when using a backbone guided heuristic is large. This suggests that we may need to combine such heuristics with randomization and restarts. In addition, though backbone guided heuristics are useful for finding optimal solutions, they are less help in proving optimality.
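To make the object of study concrete: the backbone of a TSP instance is the set of edges that appear in every optimal tour. On tiny instances it can be computed by brute force, as in this illustrative sketch (not the paper's method, which uses approximation heuristics precisely because exhaustive enumeration is infeasible):

```python
from itertools import permutations

def tsp_backbone(dist):
    """Brute-force backbone of a symmetric TSP instance: the set of
    edges shared by every optimal tour.  `dist` is an n x n symmetric
    distance matrix; cities are 0..n-1 and tours are anchored at city 0.
    Edges are undirected (frozensets), so a tour and its reversal
    contribute the same edge set."""
    n = len(dist)
    best_cost, best_tours = None, []
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        cost = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if best_cost is None or cost < best_cost:
            best_cost, best_tours = cost, [tour]
        elif cost == best_cost:
            best_tours.append(tour)
    def edges(t):
        return {frozenset((t[i], t[(i + 1) % n])) for i in range(n)}
    bb = edges(best_tours[0])
    for t in best_tours[1:]:
        bb &= edges(t)              # keep only edges common to all optima
    return best_cost, bb
```

The factorial enumeration limits this to a handful of cities; the intractability result in the abstract says that no polynomial-time method can approximate the backbone with a performance guarantee unless P = NP.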

Backbones and Backdoors in SAT

This is the abstract of the paper:

Philip Kilby, John Slaney, Sylvie Thiébaux and Toby Walsh.
Backbones and Backdoors in Satisfiability.
Proceedings of the 20th National Conference of the American Association for Artificial Intelligence (AAAI-2005) : 1368-1373.

We study the backbone and the backdoors of propositional satisfiability problems. We make a number of theoretical, algorithmic and experimental contributions. From a theoretical perspective, we prove that backbones are hard even to approximate. From an algorithmic perspective, we present a number of different procedures for computing backdoors. From an empirical perspective, we study the correlation between being in the backbone and in a backdoor. Experiments show that there tends to be very little overlap between backbones and backdoors. We also study problem hardness for the Davis Putnam procedure. Problem hardness appears to be correlated with the size of strong backdoors, and weakly correlated with the size of the backbone, but does not appear to be correlated to the size of weak backdoors nor their number. Finally, to isolate the effect of backdoors, we look at problems with no backbone.
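For readers new to the terminology: the backbone of a satisfiable CNF formula is the set of literals that take the same value in every satisfying assignment. A minimal brute-force sketch (illustrative only; the paper's procedures are for backdoors, and real backbone computation uses a SAT solver rather than enumeration):

```python
from itertools import product

def backbone(clauses, n):
    """Brute-force backbone of a CNF formula over variables 1..n.
    Clauses are lists of nonzero ints in DIMACS style: v means variable
    v is true, -v means it is false.  Returns the set of literals fixed
    in every satisfying assignment (empty if the formula is
    unsatisfiable or nothing is fixed)."""
    models = []
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any((lit > 0) == assign[abs(lit)] for lit in cl)
               for cl in clauses):
            models.append(assign)
    if not models:
        return set()
    bb = set()
    for v in range(1, n + 1):
        vals = {m[v] for m in models}
        if len(vals) == 1:          # v takes one value in all models
            bb.add(v if vals.pop() else -v)
    return bb
```

A backdoor, by contrast, is a set of variables whose assignment makes the remaining formula easy for a given subsolver; the abstract's empirical point is that these two sets overlap surprisingly little.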

From Decision to Optimization

This is the introduction to the paper:

John Slaney and Toby Walsh.
Phase transition behavior: from decision to optimization.
Proceedings of the 5th International Symposium on the Theory and Applications of Satisfiability Testing (SAT 2002)

Combinatorial optimization problems (finding solutions that minimise a cost parameter) are closely related to the corresponding decision problems (deciding whether solutions within a given cost exist). Indeed, many algorithms for optimization essentially work by solving a sequence of decision problems. We might hope, therefore, that insights into decision problems gained by studying phase transition behavior could be useful in understanding similar behavior in optimization problems. For both theoretical and historical reasons, propositional satisfiability (or SAT) is the most intensively studied such decision problem. Accordingly, we begin our investigation with the simplest optimization versions of SAT: in the overconstrained case, MAXSAT, and in the underconstrained case, MAXONES. Given the insights gained from studying the 2+p-SAT decision problem, this paper looks at optimization versions of random k+p-SAT.
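The two optimization versions of SAT named above are simple to state. MAXSAT asks for the maximum number of clauses any single assignment can satisfy; MAXONES asks, among satisfying assignments, for one with the most variables set true. A brute-force sketch of MAXSAT (illustrative only; variable and function names are ours):

```python
from itertools import product

def max_sat(clauses, n):
    """Brute-force MAXSAT over variables 1..n: the maximum number of
    clauses any single assignment satisfies.  Clauses use DIMACS-style
    literals (v = variable v true, -v = variable v false)."""
    best = 0
    for bits in product([False, True], repeat=n):
        assign = {i + 1: b for i, b in enumerate(bits)}
        satisfied = sum(any((lit > 0) == assign[abs(lit)] for lit in cl)
                        for cl in clauses)
        best = max(best, satisfied)
    return best
```

On an overconstrained instance the answer is strictly less than the number of clauses, which is exactly the regime where the decision problem flips from satisfiable to unsatisfiable.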

Backbones in Optimization

This is the abstract of the paper:

John Slaney and Toby Walsh.
Backbones in Optimization and Approximation.
Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI-2001) : 254-259

We study the impact of backbones in optimization and approximation problems. We show that some optimization problems like graph coloring resemble decision problems, with problem hardness positively correlated with backbone size. For other optimization problems like blocks world planning and traveling salesperson problems, problem hardness is weakly and negatively correlated with backbone size, while the cost of finding optimal and approximate solutions is positively correlated with backbone size. A third class of optimization problems like number partitioning have regions of both types of behavior. We find that to observe the impact of backbone size on problem hardness, it is necessary to eliminate some symmetries, perform trivial reductions and factor out the effective problem size.

The Knife-Edge

This is the abstract of the paper:

John Slaney.
Is there a Constrainedness Knife-edge?
Proceedings of the 14th European Conference on Artificial Intelligence (ECAI-2000) : 614-618.

Recent work on search has identified an intriguing feature dubbed the constrainedness knife-edge by Walsh (Proc. AAAI-98, 406-411) whereby overconstrained problems seem to become on average even more constrained as the search goes deeper, while underconstrained ones become less constrained. The present paper shows that while the knife-edge phenomenon itself is real, many of the claims that have been made about it are incorrect. It is not always associated with criticality, it is not a function of the problem so much as of the algorithm used to solve it, and it does not help to explain the peculiar hardness of problem instances near phase transitions. Despite the negative findings, the upshot is that Walsh's work has opened a fascinating line of research which will surely repay further investigation.

Hardness of Optimization

This is the abstract of the paper:

S. Thiébaux, J. Slaney and P. Kilby.
Estimating the Hardness of Optimisation.
Proceedings of the 14th European Conference on Artificial Intelligence (ECAI-2000) : 123-127.

Estimating computational cost is a fundamental issue in time-bounded computation. We present a method for estimating the hardness of optimisation problems (find a minimal cost solution to instance I) by observing that of the corresponding decision problems (has I a solution of cost less than threshold T). Provided T is not too close to the optimal, decision is typically much easier than optimisation, yet knowing the hardness of the former is useful for predicting the latter. The present paper reports an experimental investigation of this idea, with encouraging results. An investment of a few percent of the work required for optimisation suffices for estimation within a small factor, even using a very simple implementation of the method.

Hardness of Optimization and Decision

This is the abstract of the paper:

John Slaney and Sylvie Thiébaux.
On the Hardness of Decision and Optimisation Problems.
Proceedings of the 13th European Conference on Artificial Intelligence (ECAI-98) : 244-248.

Recent work on phase transition has detected apparently interesting phenomena in the distribution of hard optimisation problems (find, on some measure, the least m such that the given instance x has a solution of value m) and their corresponding decision problems (determine, for a given bound m, whether or not x has a solution of value at most m). This paper examines the relationship between the hardness of optimisation and that of decision. We identify an expression for the latter in terms of the former together with the distance between the bound and the optimal solution size. These results explain both the appeal and the shortcomings of other accounts in the literature. We validate our analysis by showing that it predicts the hardness of Blocks World decision problems with much improved accuracy.

	      Dr J K Slaney                       Phone (Aus.):  (026) 125 8607
	      Theory Group                        Phone (Int.): +61 26 125 8607
	      Research School of Computer Science Fax (Aus.):    (026) 125 8651
	      Australian National University      Fax (Int.):   +61 26 125 8651
	      Canberra, ACT, 0200, AUSTRALIA