Computational Learning Theory and Natural Learning Systems, Volume 3

Selecting Good Models

This is the third in a series of edited volumes exploring the evolving landscape of learning systems research, which spans theory and experiment, symbols and signals. It continues the synthesis of the machine learning subdisciplines begun in Volumes I and II. The nineteen contributions cover learning theory, empirical comparisons of learning algorithms, the use of prior knowledge, probabilistic concepts, and the effect of variations over time in concepts and in feedback from the environment.

The goal of this series is to explore the intersection of three historically distinct areas of learning research: computational learning theory, neural networks, and AI machine learning. Although each field has its own conferences, journals, language, results, and research directions, there is a growing overlap and a concerted effort to bring these fields into closer coordination.

Can these communities learn anything from one another? The volumes present research that should interest practitioners across the subdisciplines of machine learning: it addresses questions that cut across the range of learning approaches, compares those approaches on specific problems, and extends the theory to cover more realistic cases.

A Bradford Book

Table of Contents

  Preface
  Introduction
  Contributors

  I. Using Prior Knowledge

  1. Using Heuristic Search to Expand Knowledge-Based Neural Networks
     David W. Opitz and Jude W. Shavlik

  2. High Accuracy Path Tracking by Neural Linearization Techniques
     Stefan Miesbach

  3. A Preliminary PAC Analysis of Theory Revision
     Raymond J. Mooney

  4. A Knowledge-Based Model of Geometry Learning
     Geoffrey Towell and Richard Lehrer

  II. Time-Varying Tasks

  5. Importance-Based Feature Extraction for Reinforcement Learning
     David J. Finton and Yu Hen Hu

  6. A Method for Constructive Learning of Recurrent Neural Networks
     Dong Chen, C. Lee Giles, Gordon Sun, Mark W. Goudreau, Hsing-Hen Chen and Yee-Chun Lee

  7. Recurrent Neural Networks with Time-Dependent Inputs and Outputs
     Volkmar Sterzing and Bernd Schürmann

  III. Probabilistic Concepts

  8. Soft Classification, a.k.a. Risk Estimation, via Penalized Log Likelihood and Smoothing Spline Analysis of Variance
     Grace Wahba, Chong Gu, Yuedong Wang and Richard Chappell

  9. Learning with Probabilistic Supervision
     Padhraic Smyth

  10. Reducing the Small Disjuncts Problem by Learning Probabilistic Concept Descriptions
     Kamal M. Ali and Michael J. Pazzani

  IV. Theory

  11. On the Bayesian "Occam Factors" Argument for Occam's Razor
     David H. Wolpert

  12. Learning Finite Automata Using Local Distinguishing Experiments
     Wei-Min Shen

  13. PAC-Learnability of Constrained Nonrecursive Logic Programs
     Sašo Džeroski, Stephen Muggleton and Stuart Russell

  14. Analysis of the Blurring Process
     Yizong Cheng and Zhangyong Wan

  V. Empirical Comparisons

  15. Learning Context to Disambiguate Word Sense
     Ellen M. Voorhees, Claudia Leacock and Geoffrey Towell

  16. Investigating the Value of a Good Input Representation
     Mark W. Craven and Jude W. Shavlik

  17. Improving Model Selection by Dynamic Regularization Methods
     Ferdinand Hergert, William Finnoff and Hans-Georg Zimmermann

  18. Cross-Validation and Modal Theories
     Timothy L. Bailey and Charles Elkan

  19. An Empirical Investigation of Brute Force to Choose Features, Smoothers and Function Approximators
     Andrew W. Moore, Daniel J. Hill and Michael P. Johnson

  References
  Index