
An incremental learning preprocessor for feed-forward neural network



Artificial Intelligence Review, Volume 41 (2) – Jan 6, 2012


References (4)

Publisher
Springer Journals
Copyright
Copyright © 2012 by Springer Science+Business Media B.V.
Subject
Computer Science; Artificial Intelligence (incl. Robotics); Computer Science, general
ISSN
0269-2821
eISSN
1573-7462
DOI
10.1007/s10462-011-9304-0

Abstract

The Outpost Vector model synthesizes new vectors at the boundary between two classes of data to preserve the shape of the current system and thereby increase classification accuracy. This paper presents an incremental learning preprocessor for a Feed-forward Neural Network (FFNN) that uses the Outpost Vector model to improve classification accuracy on both new and old data. The preprocessor generates outpost vectors from selected new samples, selected prior samples, both, or neither. Depending on the specified parameters, these outpost vectors are then included in the final training set alongside the selected new and prior samples, and the final training set is used to train the FFNN. The whole process repeats once enough new samples have been collected, so that newer knowledge can be learned. Experiments are conducted on a two-dimensional partition problem: training and test samples are distributed over a limited region of a two-dimensional donut ring, and the context of the problem is assumed to shift 45° counterclockwise. There are two classes of data, labeled 0 and 1, and adjacent partitions are assigned different classes for both new and old data. The experimental results show that outpost vectors generated from selected new samples, selected prior samples, or both improve classification accuracy on all data. A run-time analysis shows that the overhead of the outpost vector generation process is insignificant and is offset by the improved classification accuracy.
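The abstract does not spell out how outpost vectors are synthesized, but the core idea — generating boundary-hugging vectors labeled with each sample's own class — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the pairing rule (nearest opposite-class neighbor) and the placement parameter `alpha` are assumptions for the sketch.

```python
import numpy as np

def generate_outpost_vectors(X, y, alpha=0.9):
    """Sketch of outpost-vector generation (assumed placement rule).

    For each sample, find its nearest neighbour of the opposite class
    and synthesize a new vector pushed toward the midpoint between the
    two, labelled with the original sample's class. The synthesized
    vectors outline the class boundary, so retraining on them can help
    preserve the shape of the previously learned decision region.
    alpha in (0, 1] controls how close to the boundary midpoint the
    outpost is placed (alpha=1.0 puts it exactly at the midpoint).
    """
    outposts, labels = [], []
    for i in range(len(X)):
        opposite = X[y != y[i]]
        if len(opposite) == 0:
            continue
        # index of the nearest opposite-class neighbour
        j = np.argmin(np.linalg.norm(opposite - X[i], axis=1))
        # move alpha of the way from the sample toward the midpoint
        midpoint = (X[i] + opposite[j]) / 2.0
        outposts.append(X[i] + alpha * (midpoint - X[i]))
        labels.append(y[i])
    return np.array(outposts), np.array(labels)
```

Under this sketch, the preprocessor would concatenate the outpost vectors with the selected new and prior samples to form the final training set before fitting the FFNN.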

Journal

Artificial Intelligence Review, Springer Journals

Published: Jan 6, 2012
