Selected papers on planning

John Slaney, Sylvie Thiébaux and friends.



Planning with MIP

Sylvie Thiébaux, Carleton Coffrin, Hassan Hijazi and John Slaney.
Planning with MIP for Supply Restoration in Power Distribution Systems.
Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013).



Minimal landmarks for optimal delete-free planning

This is the abstract of the paper:

Patrik Haslum, John Slaney and Sylvie Thiébaux.
Minimal landmarks for optimal delete-free planning.
Proceedings of the International Conference on Automated Planning and Scheduling (ICAPS 2012): 253-257.

We present a simple and efficient algorithm to solve delete-free planning problems optimally and calculate the h+ heuristic. The algorithm efficiently computes a minimum-cost hitting set for a complete set of disjunctive action landmarks generated on the fly. Unlike other recent approaches, the landmarks it generates are guaranteed to be set-inclusion minimal. In almost all delete-relaxed IPC domains, this leads to a significant coverage and runtime improvement.
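The hitting-set objective at the heart of this algorithm can be illustrated with a small sketch. This is not the paper's method, which generates landmarks on the fly and solves the hitting-set problem incrementally; it is only a brute-force minimum-cost hitting set over a fixed set of hypothetical landmarks, with invented action names and costs:

```python
from itertools import combinations

def min_cost_hitting_set(landmarks, cost):
    """Return a cheapest set of actions that intersects every landmark.

    landmarks: list of sets of action names (disjunctive action landmarks);
    cost: dict mapping each action to a non-negative cost.
    Brute force over all subsets of actions, keeping the cheapest hitter.
    """
    actions = sorted(set().union(*landmarks))
    best, best_cost = None, float("inf")
    for r in range(len(actions) + 1):
        for subset in combinations(actions, r):
            c = sum(cost[a] for a in subset)
            if c < best_cost and all(set(subset) & lm for lm in landmarks):
                best, best_cost = set(subset), c
    return best, best_cost

# Hypothetical landmarks: every plan must use (a or b), (b or c), and d.
landmarks = [{"a", "b"}, {"b", "c"}, {"d"}]
cost = {"a": 1, "b": 3, "c": 1, "d": 2}
hs, c = min_cost_hitting_set(landmarks, cost)
print(sorted(hs), c)  # ['a', 'c', 'd'] 4
```

In delete-free planning, the cost of such a hitting set over a complete landmark collection equals the optimal relaxed plan cost, which is what makes this computation yield h+ exactly.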



Decision Theoretic Planning with Non-Markovian Rewards

This is the abstract of the paper:

Sylvie Thiébaux, Charles Gretton, John Slaney, David Price and Froduald Kabanza.
Decision Theoretic Planning with Non-Markovian Rewards.
Journal of Artificial Intelligence Research 25 (2006): 17-74.

A decision process in which rewards depend on history rather than merely on the current state is called a decision process with non-Markovian rewards (NMRDP). In decision-theoretic planning, where many desirable behaviours are more naturally expressed as properties of execution sequences rather than as properties of states, NMRDPs form a more natural model than the commonly adopted fully Markovian decision process (MDP) model. While the more tractable solution methods developed for MDPs do not directly apply in the presence of non-Markovian rewards, a number of solution methods for NMRDPs have been proposed in the literature. These all exploit a compact specification of the non-Markovian reward function in temporal logic, to automatically translate the NMRDP into an equivalent MDP which is solved using efficient MDP solution methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process Planner), a software platform for the development and experimentation of methods for decision-theoretic planning with non-Markovian rewards. The current version of NMRDPP implements, under a single interface, a family of methods based on existing as well as new approaches which we describe in detail. These include dynamic programming, heuristic search, and structured methods. Using NMRDPP, we compare the methods and identify certain problem features that affect their performance. NMRDPP's treatment of non-Markovian rewards is inspired by the treatment of domain-specific search control knowledge in the TLPlan planner, which it incorporates as a special case. In the First International Probabilistic Planning Competition, NMRDPP was able to compete and perform well in both the domain-independent and hand-coded tracks, using search control knowledge in the latter.



Decision Processes with Non-Markovian Rewards

This is the abstract of the paper:

Sylvie Thiébaux, Froduald Kabanza and John Slaney.
Anytime State-based Solution Methods for Decision Processes with Non-Markovian Rewards.
Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI) 2002.

This paper addresses a central issue in solving a decision process with non-Markovian rewards (NMRDP): how to construct an equivalent Markov decision process (MDP) and solve it using state-based methods, trading off the computational cost of the construction against the size of the MDP and consequently the quality of the policy obtainable in reasonable time. Our main contribution is an approach which gives full control to anytime solution methods as to which parts of the MDP will be explicitly constructed, while avoiding a costly prior construction phase and resulting in an MDP of the minimal size achievable without stepping outside the anytime framework. This is enabled by extending Future Linear Temporal Logic to represent non-Markovian rewards, and by using model-checking in the construction.
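The translation idea — pairing each world state with a progressed temporal formula, so that the pair behaves Markovially — can be sketched for a toy fragment. This is plain TLPlan-style formula progression, not the paper's $FLTL machinery, and the tuple encoding of formulas is invented for illustration:

```python
def progress(formula, state):
    """Progress a temporal reward formula through one state (a sketch).

    Fragment: ("atom", p) | ("and", f, g) | ("next", f) | ("sometime", p),
    where "sometime" takes a plain atom.  Returns True (condition satisfied
    now), False (falsified), or the formula that the REST of the state
    sequence must satisfy.  An expanded MDP state is a pair of a world
    state and such a progressed formula.
    """
    op = formula[0]
    if op == "atom":
        return formula[1] in state
    if op == "next":                # defer the subformula to the next state
        return formula[1]
    if op == "sometime":            # satisfied now, or keep waiting
        return True if formula[1] in state else formula
    if op == "and":
        f = progress(formula[1], state)
        g = progress(formula[2], state)
        if f is False or g is False:
            return False
        if f is True:
            return g
        if g is True:
            return f
        return ("and", f, g)
    raise ValueError(f"unknown operator {op!r}")

# Reward condition "p now, and q in the following state":
phi = ("and", ("atom", "p"), ("next", ("atom", "q")))
phi = progress(phi, {"p"})   # -> ("atom", "q"): q is now owed
print(progress(phi, {"q"}))  # -> True: the condition is met at this state
```

Because progression only looks at the current state and the current formula, anytime search methods can expand exactly the (state, formula) pairs they reach, which is the on-the-fly construction the abstract describes.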



Planning via Model Checking

This is the abstract of the paper:

Piergiorgio Bertoli, Alessandro Cimatti, John Slaney and Sylvie Thiébaux.
Solving Power Supply Restoration Problems with Planning via Symbolic Model-Checking.
Proceedings of the 15th European Conference on Artificial Intelligence (ECAI) 2002.

The past few years have seen a flurry of new approaches for planning under uncertainty, but their applicability to real-world problems is yet to be established since they have been tested only on toy benchmark problems. To fill this gap, the challenge of solving power supply restoration problems with existing planning tools has recently been issued. This requires the ability to deal with incompletely specified initial conditions, fault conditions, unpredictable action effects, and partial observability in real-time. This paper reports a first response to this nontrivial challenge, using the approach of planning via symbolic model-checking as implemented in the MBP planner. We show how the problem can be encoded in MBP's input language, and report very promising experimental results on a number of significant test cases.



Blocks World

This is the abstract of the paper:

John Slaney and Sylvie Thiébaux.
Blocks World Revisited.
Artificial Intelligence 125 (2001): 119-153.

Contemporary AI shows a healthy trend away from artificial problems towards real-world applications. Less healthy, however, is the fashionable disparagement of "toy" domains: when properly approached, these domains can at the very least support meaningful systematic experiments, and allow features relevant to many kinds of reasoning to be abstracted and studied. A major reason why they have fallen into disrepute is that superficial understanding of them has resulted in poor experimental methodology and consequent failure to extract useful information. This paper presents a sustained investigation of one such toy: the (in)famous Blocks World planning problem, and provides the level of understanding required for its effective use as a benchmark. Our results include methods for generating random problems for systematic experimentation, the best domain-specific planning algorithms against which AI planners can be compared, and observations establishing the average plan quality of near-optimal methods. We also study the distribution of hard/easy instances, and identify the structure that AI planners must be able to exploit in order to approach Blocks World successfully.
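For quick experimentation, random instances can be sketched as below. Note that this simple sequential process is NOT uniform over Blocks World states — the paper's point is precisely that unbiased generation requires care — so it is only a stand-in for producing varied test data:

```python
import random

def random_blocks_state(n, rng=random):
    """Build a random Blocks World state over blocks 0..n-1 by dropping
    each block either on the table or on top of an existing tower.

    WARNING: unlike a proper uniform generator, this biases towards
    certain configurations; it is only a quick source of test instances.
    Returns a list of towers, each listed bottom-to-top.
    """
    towers = []
    for b in range(n):
        choice = rng.randrange(len(towers) + 1)
        if choice == len(towers):
            towers.append([b])        # start a new tower on the table
        else:
            towers[choice].append(b)  # drop on top of an existing tower
    return towers

print(random_blocks_state(8, random.Random(42)))
```

A generator like this is enough to exercise a planner, but systematic experiments of the kind reported in the paper need states drawn uniformly at random.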




Blocks World Again

This is the abstract of the paper:

John Slaney and Sylvie Thiébaux.
Linear Time Near-Optimal Planning in the Blocks World.
Proceedings of AAAI-96 (1996): 1208-1214.

This paper reports an analysis of near-optimal Blocks World planning. Various methods are clarified, and their time complexity is shown to be linear in the number of blocks, which improves their known complexity bounds. The speed of the implemented programs (ten thousand blocks are handled in a second) enables us to make empirical observations on large problems. These suggest that the above methods have very close average performance ratios, and yield a rough upper bound on those ratios well below the worst case of 2. Further, they lead to the conjecture that in the limit the simplest linear time algorithm could be just as good on average as the optimal one.
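The unstack-stack idea behind the simplest of these linear-time methods can be sketched as follows. This is a toy reconstruction, not the paper's implementation: leave each tower's well-placed prefix alone, put everything else on the table top-first, then rebuild the goal towers bottom-up.

```python
def us_plan(initial, goal):
    """Unstack-stack Blocks World planning (a sketch).

    Put every misplaced block on the table top-first, then rebuild the
    goal towers bottom-up.  Each block moves at most twice, so the plan
    is at most twice the optimal length, and the whole procedure is
    linear in the number of blocks.  Towers are lists, bottom-to-top.
    """
    # Goal position of each block: the block it must sit on (None = table).
    on_goal = {}
    for tower in goal:
        below = None
        for b in tower:
            on_goal[b] = below
            below = b
    plan = []
    # Phase 1: keep each tower's well-placed prefix, unstack the rest.
    for tower in initial:
        below, i = None, 0
        while i < len(tower) and on_goal.get(tower[i]) == below:
            below = tower[i]
            i += 1
        for b in reversed(tower[i:]):
            plan.append(("move-to-table", b))
    # Phase 2: rebuild each goal tower bottom-up from the cleared blocks.
    moved = {b for _, b in plan}
    for tower in goal:
        below = None
        for b in tower:
            if b in moved and below is not None:
                plan.append(("stack", b, below))
            below = b
    return plan

# One tower c-a-b (c on the table) rearranged into a-b-c:
print(us_plan([["c", "a", "b"]], [["a", "b", "c"]]))
# [('move-to-table', 'b'), ('move-to-table', 'a'), ('move-to-table', 'c'),
#  ('stack', 'b', 'a'), ('stack', 'c', 'b')]
```

The example takes five moves where four suffice (c is already on the table and need not be unstacked), which is the kind of small gap between this simplest method and the optimal one that the empirical observations above concern.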




Prof J K Slaney                    Phone (Aus.):  (02) 6125 8607
Logic and Computation Group        Phone (Int.): +61 2 6125 8607
School of Computer Science         Fax (Aus.):    (02) 6125 8651
Australian National University     Fax (Int.):   +61 2 6125 8651
Canberra, ACT, 0200, AUSTRALIA
John.Slaney@anu.edu.au