In their unmodified form, lazy-learning algorithms may have difficulty learning and tracking time-varying input/output function maps such as those that occur under concept shift. Extensions of these algorithms, such as Time-Windowed Forgetting (TWF), can permit learning of time-varying mappings by deleting older exemplars, but they have decreased classification accuracy when the input-space sampling distribution of the learning set is time-varying. Additionally, TWF suffers from lower asymptotic classification accuracy than equivalent non-forgetting algorithms when the input sampling distribution is stationary. Other shift-sensitive algorithms, such as Locally-Weighted Forgetting (LWF), avoid the negative effects of time-varying sampling distributions, but still have lower asymptotic classification accuracy in non-varying cases. We introduce Prediction Error Context Switching (PECS), which allows lazy-learning algorithms to achieve good classification accuracy under time-varying function mappings and input sampling distributions, while still maintaining their asymptotic classification accuracy on static tasks. PECS works by selecting and re-activating previously stored instances based on their most recent consistency record. The classification accuracy and active learning-set sizes of the above algorithms are compared on a set of learning tasks that illustrate the differing time-varying conditions described above. The results show that PECS has the best overall classification accuracy across these differing time-varying conditions, while still having asymptotic classification accuracy competitive with unmodified lazy learners intended for static environments.
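The core idea described above — deactivating rather than deleting exemplars, and re-activating them when their recent consistency record recovers — can be sketched as a small 1-NN lazy learner. This is only an illustrative sketch under assumptions: the abstract does not give the paper's actual consistency test, so the fixed record window and accuracy threshold used here are hypothetical stand-ins, not the authors' method.

```python
import math
from collections import deque

class PECSStyleNN:
    """Sketch of a PECS-style 1-NN lazy learner.

    Each stored exemplar keeps a short record of whether its label agreed
    with the true label of recent queries it was nearest to.  Exemplars
    whose recent record is poor are moved to an inactive pool (not
    deleted); they return to the active pool if their record recovers.
    `window` and `min_accuracy` are assumed parameters for illustration.
    """

    def __init__(self, window=3, min_accuracy=0.5):
        self.window = window
        self.min_accuracy = min_accuracy
        self.active = []    # exemplars used for prediction: [x, y, record]
        self.inactive = []  # stored but currently switched-out exemplars

    def _nearest(self, pool, x):
        return min(pool, key=lambda e: math.dist(e[0], x)) if pool else None

    def predict(self, x):
        e = self._nearest(self.active, x)
        return e[1] if e else None

    def learn(self, x, y):
        # Update the consistency record of the nearest exemplar in each
        # pool: 1 if its label matches the new true label, else 0.
        for pool in (self.active, self.inactive):
            e = self._nearest(pool, x)
            if e is not None:
                e[2].append(1 if e[1] == y else 0)

        # Deactivate active exemplars whose recent record is poor.
        still_active = []
        for e in self.active:
            rec = e[2]
            if len(rec) == self.window and sum(rec) / self.window < self.min_accuracy:
                self.inactive.append(e)
            else:
                still_active.append(e)
        self.active = still_active

        # Re-activate inactive exemplars whose record has recovered.
        still_inactive = []
        for e in self.inactive:
            rec = e[2]
            if len(rec) == self.window and sum(rec) / self.window >= self.min_accuracy:
                self.active.append(e)
            else:
                still_inactive.append(e)
        self.inactive = still_inactive

        # Store the new example as a fresh active exemplar.
        self.active.append([x, y, deque(maxlen=self.window)])
```

Because stale exemplars are only switched out, a recurring context can reinstate them cheaply, which is the behavior TWF's hard deletion cannot provide.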
Artificial Intelligence Review – Springer Journals
Published: Oct 15, 2004