In this paper, we describe how a basic strategy from computational learning theory can be used to attack a class of NP‐hard combinatorial optimization problems. The learning strategy serves as an iterative booster: given a solution to the combinatorial problem, we start an efficient simulation of a learning algorithm that has a "good chance" of outputting an improved solution. This boosting technique is a new and surprisingly simple application of an existing learning strategy, and it yields a novel heuristic approach to NP‐hard optimization problems. The technique does not apply to every combinatorial problem, but we formalize precise sufficient conditions under which it does. It applies, for instance, to minimizing a deterministic finite automaton relative to a given domain, to the analogous problem for ordered binary decision diagrams, and to graph coloring.
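The abstract only sketches the booster at a high level. As a hedged illustration of the general loop it suggests, and not the authors' actual construction, the iteration might be organized as below; `cost` and `learn_step` are hypothetical placeholders for the objective value of a solution and for one simulated run of the learning algorithm.

```python
from typing import Callable, Optional, TypeVar

S = TypeVar("S")  # type of a candidate solution (e.g., a coloring or an automaton)

def boost(initial: S,
          cost: Callable[[S], int],
          learn_step: Callable[[S], Optional[S]],
          max_rounds: int = 100) -> S:
    """Iterative booster (illustrative sketch): repeatedly hand the current
    solution to a simulated learning algorithm and keep its output whenever
    it improves the cost; stop when no improvement is produced."""
    current = initial
    for _ in range(max_rounds):
        candidate = learn_step(current)           # one simulated run of the learner
        if candidate is None or cost(candidate) >= cost(current):
            break                                 # no improvement; stop boosting
        current = candidate                       # accept the improved solution
    return current
```

For graph coloring, say, `cost` could be the number of colors used and `learn_step` a simulation of the learning algorithm seeded with the current coloring; the paper's contribution lies in identifying when such a learning-based step has a good chance of improving the solution.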
Annals of Mathematics and Artificial Intelligence – Springer Journals
Published: Oct 4, 2004