
State of the art versus classical clustering for unsupervised word sense disambiguation



References: 41

Publisher: Springer Journals
Copyright: © 2010 Springer Science+Business Media B.V.
Subject: Computer Science; Artificial Intelligence (incl. Robotics); Computer Science, general
ISSN: 0269-2821
eISSN: 1573-7462
DOI: 10.1007/s10462-010-9193-7

Abstract

This paper discusses the importance of the clustering method used in unsupervised word sense disambiguation. It illustrates that a powerful clustering technique can compensate for a lack of external knowledge of all types. It argues that feature selection does not always improve disambiguation results, especially with an advanced, state-of-the-art method, exemplified here by spectral clustering. Disambiguation results obtained with spectral clustering for the main parts of speech (nouns, adjectives, verbs) are compared to those of the classical clustering method given by the Naïve Bayes model. For unsupervised word sense disambiguation with an underlying Naïve Bayes model, two completely different approaches to feature selection are surveyed. The type of feature selection providing the best results (WordNet-based feature selection) is also applied to spectral clustering. The conclusion is that spectral clustering without feature selection (but using its own feature weighting) produces superior disambiguation results for all parts of speech.
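To make the comparison concrete: spectral clustering groups occurrences of an ambiguous word by embedding their context-feature vectors via the eigenvectors of a graph Laplacian and then clustering in that embedding. The following is a minimal, self-contained sketch of that generic pipeline (unnormalized Laplacian, RBF affinity, a toy k-means), not the authors' implementation; the function name, the Gaussian affinity, and the farthest-first initialization are illustrative assumptions.

```python
import numpy as np

def spectral_clustering(X, k, sigma=1.0, n_iter=100):
    """Illustrative spectral clustering of n context vectors into k senses.

    X: (n, d) matrix of context-feature vectors for one ambiguous word.
    k: assumed number of word senses (clusters).
    """
    # Gaussian (RBF) affinity between context vectors
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)

    # Unnormalized graph Laplacian L = D - W
    L = np.diag(W.sum(axis=1)) - W

    # Embed each occurrence using the k eigenvectors of L
    # belonging to the k smallest eigenvalues (eigh sorts ascending)
    _, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]

    # Toy k-means in the spectral embedding, with deterministic
    # farthest-first center initialization
    idx = [0]
    for _ in range(k - 1):
        d = ((U[:, None] - U[np.array(idx)][None]) ** 2).sum(-1).min(axis=1)
        idx.append(int(np.argmax(d)))
    centers = U[np.array(idx)].copy()
    for _ in range(n_iter):
        labels = ((U[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = U[labels == j].mean(axis=0)
    return labels
```

Because the clusters are formed in the Laplacian eigenspace, the method implicitly weights features through the affinity graph, which is the sense in which spectral clustering "uses its own feature weighting" rather than an explicit feature-selection step.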

Journal

Artificial Intelligence Review, Springer Journals

Published: Dec 29, 2010
