Multi-armed bandits with episode context



Publisher
Springer Journals
Copyright
Copyright © 2011 by Springer Science+Business Media B.V.
Subject
Computer Science; Mathematics, general; Artificial Intelligence (incl. Robotics); Statistical Physics, Dynamical Systems and Complexity; Computer Science, general
ISSN
1012-2443
eISSN
1573-7470
DOI
10.1007/s10472-011-9258-6

Abstract

A multi-armed bandit episode consists of n trials, each allowing selection of one of K arms, resulting in a payoff drawn from a distribution over [0,1] associated with that arm. We assume contextual side information is available at the start of the episode. This context enables an arm predictor to identify possibly favorable arms, but predictions may be imperfect, so they need to be combined with further exploration during the episode. Our setting is an alternative to classical multi-armed bandits, which provide no contextual side information, and is also an alternative to contextual bandits, which provide new context at each individual trial. Multi-armed bandits with episode context can arise naturally, for example in computer Go, where context is used to bias move decisions made by a multi-armed bandit algorithm. The UCB1 algorithm for multi-armed bandits achieves worst-case regret bounded by $O\left(\sqrt{Kn\log(n)}\right)$. We seek to improve on this using episode context, particularly in the case where K is large. Using a predictor that places weight $M_i > 0$ on arm i, with weights summing to 1, we present the PUCB algorithm, which achieves regret $O\left(\frac{1}{M_*}\sqrt{n\log(n)}\right)$ where $M_*$ is the weight on the optimal arm. We illustrate the behavior of PUCB with small simulation experiments, present extensions that provide additional capabilities for PUCB, and describe methods for obtaining suitable predictors for use with PUCB.
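For intuition on the bounds: with K = 361 arms (e.g., the points of a 19x19 Go board) and a predictor placing weight $M_* = 0.2$ on the optimal arm, the PUCB bound scales as $5\sqrt{n\log(n)}$, versus roughly $19\sqrt{n\log(n)}$ for UCB1, ignoring constant factors. The following is a minimal Python sketch of a PUCB-style selection rule: it combines the empirical mean with a UCB1-style exploration bonus and a penalty scaled by $1/M_i$, the scaling suggested by the regret bound above. The exploration constant, the exact form of the predictor penalty, and the handling of unpulled arms are assumptions for illustration, not the paper's exact rule.

import math

def pucb_style_select(counts, means, M, t, c=2.0):
    # counts[i]: number of times arm i has been pulled so far in the episode
    # means[i]:  empirical mean payoff of arm i (payoffs lie in [0, 1])
    # M[i]:      predictor weight on arm i (M[i] > 0, sum(M) == 1),
    #            computed once from the episode context
    # t:         current trial index (1-based)
    # c:         exploration constant (assumed value, not from the paper)
    best_arm, best_score = 0, -math.inf
    for i in range(len(counts)):
        if counts[i] == 0:
            # Unpulled arms are tried first; ties broken by predictor weight.
            score = 1e6 + M[i]
        else:
            explore = math.sqrt(c * math.log(t) / counts[i])        # UCB1-style bonus
            penalty = (2.0 / M[i]) * math.sqrt(math.log(t) / t)     # predictor penalty (assumed form)
            score = means[i] + explore - penalty
        if score > best_score:
            best_arm, best_score = i, score
    return best_arm

Arms favored by the predictor (large M[i]) incur a small penalty and are explored early and often, while arms the predictor considers unlikely are still eventually tried because the penalty shrinks as t grows, which is how imperfect predictions get combined with exploration during the episode.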

Journal

Annals of Mathematics and Artificial Intelligence, Springer Journals

Published: Aug 26, 2011
