next up previous
Next: About this document ... Up: Integrative Windowing Previous: Acknowledgements

References

1
Kamal M. Ali and Michael J. Pazzani.
HYDRA: A noise-tolerant relational concept learning algorithm.
In R. Bajcsy, editor, Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI-93), pages 1064-1071, Chambéry, France, 1993. Morgan Kaufmann.

2
Avrim L. Blum and Pat Langley.
Selection of relevant features and examples in machine learning.
Artificial Intelligence, 97(1-2):245-271, 1997. Special Issue on Relevance.

3
Avrim L. Blum and Tom Mitchell.
Combining labeled and unlabeled data with co-training.
In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT-98), Madison, WI, 1998. ACM Press.

4
Leo Breiman.
Arcing classifiers.
Technical Report 460, Statistics Department, University of California at Berkeley, July 1996.

5
Leo Breiman.
Bagging predictors.
Machine Learning, 24(2):123-140, 1996.

6
Leo Breiman.
Pasting bites together for prediction in large data sets and on-line.
Unpublished manuscript, 1996.

7
Leo Breiman, Jerome H. Friedman, Richard Olshen, and Charles Stone.
Classification and Regression Trees.
Wadsworth & Brooks, Pacific Grove, CA, 1984.

8
Rich Caruana and Dayne Freitag.
Greedy attribute selection.
In W.W. Cohen and H. Hirsh, editors, Proceedings of the 11th International Conference on Machine Learning (ML-94), pages 28-36, New Brunswick, NJ, 1994. Morgan Kaufmann.

9
Jason Catlett.
Megainduction: A test flight.
In L.A. Birnbaum and G.C. Collins, editors, Proceedings of the 8th International Workshop on Machine Learning (ML-91), pages 596-599, Evanston, IL, 1991. Morgan Kaufmann.

10
Jason Catlett.
Megainduction: Machine Learning on Very Large Databases.
PhD thesis, Basser Department of Computer Science, University of Sydney, 1991.

11
Jason Catlett.
Peepholing: Choosing attributes efficiently for megainduction.
In Proceedings of the 9th International Conference on Machine Learning (ML-92), pages 49-54. Morgan Kaufmann, 1992.

12
Peter Clark and Robin A. Boswell.
Rule induction with CN2: Some recent improvements.
In Proceedings of the 5th European Working Session on Learning (EWSL-91), pages 151-163, Porto, Portugal, 1991. Springer-Verlag.

13
Peter Clark and Tim Niblett.
The CN2 induction algorithm.
Machine Learning, 3(4):261-283, 1989.

14
William W. Cohen.
Fast effective rule induction.
In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning (ML-95), pages 115-123, Lake Tahoe, CA, 1995. Morgan Kaufmann.

15
David A. Cohn, Les Atlas, and Richard E. Ladner.
Improving generalization with active learning.
Machine Learning, 15(2):201-221, 1994.

16
Ido Dagan and Sean P. Engelson.
Committee-based sampling for training probabilistic classifiers.
In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning (ML-95), pages 150-157. Morgan Kaufmann, 1995.

17
Luc De Raedt and Maurice Bruynooghe.
Indirect relevance and bias in inductive concept learning.
Knowledge Acquisition, 2:365-390, 1990.

18
Marie desJardins and Diana F. Gordon.
Special issue on bias evaluation and selection.
Machine Learning, 20(1-2), 1995.

19
Thomas G. Dietterich and Ghulum Bakiri.
Solving multiclass learning problems via error-correcting output codes.
Journal of Artificial Intelligence Research, 2:263-286, 1995.

20
Pedro Domingos.
Efficient specific-to-general rule induction.
In E. Simoudis and J. Han, editors, Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), pages 319-322. AAAI Press, 1996.

21
Pedro Domingos.
Using partitioning to speed up specific-to-general rule induction.
In Proceedings of the AAAI-96 Workshop on Integrating Multiple Learned Models, pages 29-34, 1996.

22
Harris Drucker, Robert E. Schapire, and Patrice Simard.
Boosting performance in neural networks.
International Journal of Pattern Recognition and Artificial Intelligence, 7(4):705-720, 1993.

23
Usama M. Fayyad and Keki B. Irani.
On the handling of continuous-valued attributes in decision tree generation.
Machine Learning, 8(1):87-102, 1992.

24
Yoav Freund and Robert E. Schapire.
Experiments with a new boosting algorithm.
In L. Saitta, editor, Proceedings of the 13th International Conference on Machine Learning (ICML-96), pages 148-156, Bari, Italy, 1996. Morgan Kaufmann.

25
Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby.
Selective sampling using the query by committee algorithm.
Machine Learning, 28:133-168, 1997.

26
Johannes Fürnkranz.
Dimensionality reduction in ILP: A call to arms.
In L. De Raedt and S. Muggleton, editors, Proceedings of the IJCAI-97 Workshop on Frontiers of Inductive Logic Programming, pages 81-86, Nagoya, Japan, 1997.

27
Johannes Fürnkranz.
Knowledge discovery in chess databases: A research proposal.
Technical Report OEFAI-TR-97-33, Austrian Research Institute for Artificial Intelligence, 1997.

28
Johannes Fürnkranz.
More efficient windowing.
In Proceedings of the 14th National Conference on Artificial Intelligence (AAAI-97), pages 509-514, Providence, RI, 1997. AAAI Press.

29
Johannes Fürnkranz.
Noise-tolerant windowing.
In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 852-857, Nagoya, Japan, 1997. Morgan Kaufmann.

30
Johannes Fürnkranz.
Pruning algorithms for rule learning.
Machine Learning, 27(2):139-171, 1997.

31
Johannes Fürnkranz.
Separate-and-conquer rule learning.
Artificial Intelligence Review, 1998. In press.

32
Johannes Fürnkranz and Gerhard Widmer.
Incremental Reduced Error Pruning.
In W. Cohen and H. Hirsh, editors, Proceedings of the 11th International Conference on Machine Learning (ML-94), pages 70-77, New Brunswick, NJ, 1994. Morgan Kaufmann.

33
Dragan Gamberger and Nada Lavrač.
Conditions for Occam's razor applicability and noise elimination.
In M. van Someren and G. Widmer, editors, Proceedings of the 9th European Conference on Machine Learning (ECML-97), pages 108-123, Prague, Czech Republic, 1997. Springer-Verlag.

34
Robert C. Holte, Liane E. Acker, and Bruce W. Porter.
Concept learning and the problem of small disjuncts.
In Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI-89), pages 813-818, Detroit, MI, 1989. Morgan Kaufmann.

35
George H. John and Pat Langley.
Static versus dynamic sampling for data mining.
In E. Simoudis and J. Han, editors, Proceedings of the 2nd International Conference on Knowledge Discovery and Data Mining (KDD-96), pages 367-370. AAAI Press, 1996.

36
Jyrki Kivinen and Heikki Mannila.
The power of sampling in knowledge discovery.
In Proceedings of the 13th ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS-94), pages 77-85, 1994.

37
Jyrki Kivinen and Heikki Mannila.
Approximate dependency inference from relations.
Theoretical Computer Science, 149(1):129-149, 1995.

38
Ron Kohavi and George H. John.
Automatic parameter selection by minimizing estimated error.
In A. Prieditis and S. Russell, editors, Proceedings of the 12th International Conference on Machine Learning (ML-95), pages 304-312. Morgan Kaufmann, 1995.

39
Ron Kohavi and George H. John.
Wrappers for feature subset selection.
Artificial Intelligence, 97(1-2):273-324, 1997. Special Issue on Relevance.

40
David D. Lewis and Jason Catlett.
Heterogeneous uncertainty sampling for supervised learning.
In Proceedings of the 11th International Conference on Machine Learning (ML-94), pages 148-156, New Brunswick, NJ, 1994. Morgan Kaufmann.

41
David D. Lewis and William Gale.
Training text classifiers by uncertainty sampling.
In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR-94), pages 3-12, 1994.

42
Ray Liere and Prasad Tadepalli.
Active learning with committees for text categorization.
In Proceedings of the 14th National Conference on Artificial Intelligence (AAAI-97), pages 591-597, Providence, RI, 1997. AAAI Press.

43
Andrew McCallum and Kamal Nigam.
Employing EM in pool-based active learning for text classification.
In Proceedings of the 15th International Conference on Machine Learning (ICML-98), Madison, WI, 1998. Morgan Kaufmann.

44
Ryszard S. Michalski.
On the quasi-minimal solution of the covering problem.
In Proceedings of the 5th International Symposium on Information Processing (FCIP-69), volume A3 (Switching Circuits), pages 125-128, Bled, Yugoslavia, 1969.

45
Martin Møller.
Supervised learning on large redundant training sets.
International Journal of Neural Systems, 4(1):15-25, 1993.

46
Stephen H. Muggleton, Michael Bain, Jean Hayes-Michie, and Donald Michie.
An experimental comparison of human and machine learning formalisms.
In Proceedings of the 6th International Workshop on Machine Learning (ML-89), pages 113-118. Morgan Kaufmann, 1989.

47
Stephen H. Muggleton.
Inverse entailment and Progol.
New Generation Computing, 13(3,4):245-286, 1995. Special Issue on Inductive Logic Programming.

48
Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom Mitchell.
Learning to classify from labeled and unlabeled documents.
In Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98), Madison, WI, 1998. AAAI Press.

49
Bernhard Pfahringer.
Practical Uses of the Minimum Description Length Principle in Inductive Learning.
PhD thesis, Technische Universität Wien, 1995.

50
J. Ross Quinlan.
Discovering rules by induction from large collections of examples.
In D. Michie, editor, Expert Systems in the Micro Electronic Age, pages 168-201. Edinburgh University Press, 1979.

51
J. Ross Quinlan.
Learning efficient classification procedures and their application to chess end games.
In R.S. Michalski, J.G. Carbonell, and T. Mitchell, editors, Machine Learning. An Artificial Intelligence Approach, pages 463-482. Tioga, Palo Alto, CA, 1983.

52
J. Ross Quinlan.
Learning logical definitions from relations.
Machine Learning, 5:239-266, 1990.

53
J. Ross Quinlan.
C4.5: Programs for Machine Learning.
Morgan Kaufmann, San Mateo, CA, 1993.

54
Michèle Sebag and Céline Rouveirol.
Tractable induction and classification in first order logic via stochastic matching.
In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 888-893, Nagoya, Japan, 1997. Morgan Kaufmann.

55
H. Sebastian Seung, Manfred Opper, and Haim Sompolinsky.
Query by committee.
In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory (COLT-92), pages 287-294, 1992.

56
Hannu Toivonen.
Sampling large databases for association rules.
In Proceedings of the 22nd Conference on Very Large Data Bases (VLDB-96), pages 134-145, Mumbai, India, 1996.

57
Peter Turney.
How to shift bias: Lessons from the Baldwin effect.
Evolutionary Computation, 4(3):271-295, 1996.

58
Paul E. Utgoff.
Shift of bias for inductive concept learning.
In R.S. Michalski, J. Carbonell, and T. Mitchell, editors, Machine Learning: An Artificial Intelligence Approach, Vol. II, pages 107-148. Morgan Kaufmann, Los Altos, CA, 1986.

59
Jarryl Wirth and Jason Catlett.
Experiments on the costs and benefits of windowing in ID3.
In J. Laird, editor, Proceedings of the 5th International Conference on Machine Learning (ML-88), pages 87-99, Ann Arbor, MI, 1988. Morgan Kaufmann.

60
Yiming Yang.
Sampling strategies and learning efficiency in text categorization.
In M. Hearst and H. Hirsh, editors, Proceedings of the AAAI Spring Symposium on Machine Learning in Information Access, pages 88-95. AAAI Press, 1996.
Technical Report SS-96-05.

