References

Boutilier, C., Dean, T., & Hanks, S. (1999). Decision-theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11, 1–94.

Boutilier, C., Dearden, R., & Goldszmidt, M. (1995). Exploiting structure in policy construction. In Mellish, C. S. (Ed.), Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, pp. 1104–1111, Montreal, Canada. Morgan Kaufmann Publishers.

Boutilier, C., Friedman, N., Goldszmidt, M., & Koller, D. (1996). Context-specific independence in Bayesian networks. In Proceedings of the Twelfth Annual Conference on Uncertainty in Artificial Intelligence, pp. 115–123, Portland, OR.

Dean, T., & Kanazawa, K. (1989). A model for reasoning about persistence and causation. Computational Intelligence, 5(3), 142–150.

Dearden, R., & Boutilier, C. (1997). Abstraction and approximate decision-theoretic planning. Artificial Intelligence, 89(1–2), 219–283.

Fox, M., & Long, D. (2003). PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research, 20, 61–124.

Guestrin, C., Koller, D., Parr, R., & Venkataraman, S. (2003). Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19, 399–468.

Hoey, J., St-Aubin, R., Hu, A., & Boutilier, C. (1999). SPUDD: Stochastic planning using decision diagrams. In Laskey, K. B., & Prade, H. (Eds.), Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pp. 279–288, Stockholm, Sweden. Morgan Kaufmann Publishers.

Howard, R. A. (1960). Dynamic Programming and Markov Processes. John Wiley & Sons, New York, NY.

Howard, R. A. (1971). Dynamic Probabilistic Systems, Vol. I: Markov Models. John Wiley & Sons, New York, NY.

Kushmerick, N., Hanks, S., & Weld, D. S. (1995). An algorithm for probabilistic planning. Artificial Intelligence, 76(1–2), 239–286.

Littman, M. L. (1997). Probabilistic propositional planning: Representations and complexity. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, pp. 748–754, Providence, RI. American Association for Artificial Intelligence, AAAI Press.

Littman, M. L., Goldsmith, J., & Mundhenk, M. (1998). The computational complexity of probabilistic planning. Journal of Artificial Intelligence Research, 9, 1–36.

McDermott, D. (2000). The 1998 AI planning systems competition. AI Magazine, 21(2), 35–55.

Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New York, NY.

Rintanen, J. (2003). Expressive equivalence of formalisms for planning with sensing. In Giunchiglia, E., Muscettola, N., & Nau, D. S. (Eds.), Proceedings of the Thirteenth International Conference on Automated Planning and Scheduling, pp. 185–194, Trento, Italy. AAAI Press.

Yoon, S., Fern, A., & Givan, R. (2005). Learning measures of progress for planning domains. In Proceedings of the Twentieth National Conference on Artificial Intelligence, pp. 1217–1222.

Younes, H. L. S., & Littman, M. L. (2004). PPDDL1.0: An extension to PDDL for expressing planning domains with probabilistic effects. Tech. rep. CMU-CS-04-167, Carnegie Mellon University, Pittsburgh, PA.
