M. Kearns, L. Pitt (1989)
A polynomial-time algorithm for learning k-variable pattern languages from examples. Proc. 2nd Annual ACM Workshop on Computational Learning Theory
Steffen Lange, T. Zeugmann (1996)
Incremental Learning from Positive Data. J. Comput. Syst. Sci., 53
E. Gold (1967)
Language Identification in the Limit. Inf. Control., 10
Assaf Marron (1988)
Learning pattern languages from a single initial example and from queries
T. Zeugmann, Steffen Lange, S. Kapur (1995)
Characterizations of Monotonic and Dual Monotonic Language Learning. Inf. Comput., 120
Steffen Lange, T. Zeugmann (1994)
Set-driven and rearrangement-independent learning of recursive languages. Mathematical Systems Theory, 29
Steffen Lange, T. Zeugmann (1991)
Monotonic Versus Nonmonotonic Language Learning
R. Nix (1984)
Editing by example. Proceedings of the 11th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages
A. Salomaa (1995)
Return to Patterns. Bull. EATCS, 55
Hiroki Arimura, H. Ishizaka, T. Shinohara (1995)
Learning Unions of Tree Patterns Using Queries
T. Shinohara, S. Arikawa (1995)
Pattern Inference
S. Lange, R. Wiehagen (1991)
Polynomial-time inference of arbitrary pattern languages. New Generation Computing, 8
D. Angluin (1980)
Finding Patterns Common to a Set of Strings. J. Comput. Syst. Sci., 21
K. Ko, A. Marron, W.G. Tzeng (1990)
Learning String Patterns and Tree Patterns from Examples. Proc. 7th Conference on Machine Learning
T. Shinohara (1983)
Polynomial Time Inference of Extended Regular Pattern Languages
R. Daley, C.H. Smith (1986)
On the complexity of inductive inference. Information and Control, 69
R. Graham, D. Knuth, O. Patashnik (1989)
Concrete Mathematics
R. Schapire (1990)
Pattern languages are not learnable
P. Kilpeläinen, H. Mannila, E. Ukkonen (1995)
MDL learning of unions of simple pattern languages from positive examples
G. Pólya, R. Tarjan, D. Woods (1983)
Notes on Introductory Combinatorics, 4
Rolf Wiehagen, T. Zeugmann (1994)
Ignoring data may be the only way to learn efficiently. J. Exp. Theor. Artif. Intell., 6
(1983)
Learning data entry systems: An application of inductive inference of pattern languages
Tao Jiang, A. Salomaa, K. Salomaa, Sheng Yu (1993)
Inclusion is Undecidable for Pattern Languages
D. Angluin (1988)
Queries and concept learning. Machine Learning, 2
S. Shimozono, A. Shinohara, T. Shinohara, S. Miyano, S. Kuhara, S. Arikawa (1992)
Knowledge Acquisition from Amino Acid Sequences by Machine Learning System BONSAI, 60
(1993)
Lecture Notes in Artificial Intelligence
(1983)
Birkhäuser
J. Hopcroft, J. Ullman (1969)
Formal languages and their relation to automata
K. Wexler, P. Culicover (1980)
Formal Principles of Language Acquisition
Steffen Lange, T. Zeugmann (1992)
Types of monotonic language learning and their characterization
(1994)
Patterns (The Formal Language Theory Column)
The present paper deals with the best-case, worst-case and average-case behavior of Lange and Wiehagen's (1991) pattern language learning algorithm with respect to its total learning time. Pattern languages were introduced by Angluin (1980) and are defined as follows: Let $$\mathcal{A} = \{ 0,1,...\} $$ be any finite alphabet containing at least two elements. Furthermore, let $$X = \{ x_i \mid i \in \mathbb{N}\} $$ be an infinite set of variables such that $$\mathcal{A} \cap X = \emptyset $$ . Patterns are non-empty strings over $$\mathcal{A} \cup X$$ . L(π), the language generated by pattern π, is the set of strings which can be obtained by substituting non-null strings from $$\mathcal{A}^ * $$ for the variables of the pattern π. Lange and Wiehagen's (1991) algorithm learns the class of all pattern languages in the limit from text. We analyze this algorithm with respect to its total learning time behavior, i.e., the overall time taken by the algorithm until convergence. For every pattern π containing k different variables it is shown that the total learning time is $$O(\left| \pi \right|^2 \log _{\left| \mathcal{A} \right|} (\left| \mathcal{A} \right| + k))$$ in the best-case and unbounded in the worst-case. Furthermore, we estimate the expectation of the total learning time. In particular, it is shown that Lange and Wiehagen's algorithm possesses an expected total learning time of $$O(2^k k^2 \left| \pi \right|^2 \log _{\left| \mathcal{A} \right|} (k\left| \mathcal{A} \right|))$$ with respect to the uniform distribution.
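The definition of L(π) above can be made concrete with a small membership check: a string w belongs to L(π) iff some substitution of non-null strings for the variables of π yields w exactly. The sketch below is ours for illustration only, assuming a token-list encoding of patterns (terminals are alphabet characters, variables are names like "x1"); it uses brute-force backtracking, which is fine for toy inputs even though membership for general patterns is NP-complete (Angluin 1980).

```python
def in_pattern_language(pattern, w, alphabet=frozenset("01"), binding=None):
    """Return True iff w is in L(pattern), using non-erasing substitution.

    pattern: list of tokens; tokens in `alphabet` are terminals, all other
    tokens are variable names (an assumed encoding, not from the paper).
    """
    if binding is None:
        binding = {}
    if not pattern:
        return w == ""                      # all tokens consumed: w must be too
    head, rest = pattern[0], pattern[1:]
    if head in alphabet:                    # terminal: must match literally
        return w.startswith(head) and \
            in_pattern_language(rest, w[1:], alphabet, binding)
    if head in binding:                     # bound variable: reuse consistently
        s = binding[head]
        return w.startswith(s) and \
            in_pattern_language(rest, w[len(s):], alphabet, binding)
    # Unbound variable: try every non-empty prefix (substitutions are non-null).
    for i in range(1, len(w) + 1):
        binding[head] = w[:i]
        if in_pattern_language(rest, w[i:], alphabet, binding):
            return True
        del binding[head]                   # backtrack
    return False
```

For example, with π = x1 0 x1 the string "101" is in L(π) via x1 ↦ "1", and "00000" via x1 ↦ "00", while "111" is not, since no consistent non-null substitution produces it.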
Annals of Mathematics and Artificial Intelligence – Springer Journals
Published: Oct 15, 2004