Bandit

Results: 280



11. Practical Evaluation and Optimization of Contextual Bandit Algorithms. Alberto Bietti, Alekh Agarwal, John Langford

Source URL: hal.inria.fr

- Date: 2018-03-30 23:30:03
12. Boosting with Online Binary Learners for the Multiclass Bandit Problem. Shang-Tse Chen, School of Computer Science, Georgia Institute of Technology, Atlanta, GA

Source URL: jmlr.org

- Date: 2014-02-16 19:30:21
13. Character Classes Quick Reference: BANDIT, GYPSY

Source URL: engineoforacles.files.wordpress.com

- Date: 2016-08-17 15:33:12
14. Approximations of the Restless Bandit Problem. Steffen Grünewälder

Source URL: ewrl.files.wordpress.com

- Date: 2016-11-21 10:27:48
15. Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization. Lisha Li

Source URL: arxiv.org

- Date: 2016-11-23 20:48:17
16. Kernel-based methods for bandit convex optimization. Sébastien Bubeck (arXiv:1607.03084v1 [cs.LG], 11 Jul 2016)

Source URL: arxiv.org

- Date: 2016-07-11 20:36:37
17. Stochastic Multi-Armed-Bandit Problem with Non-stationary Rewards. Yonatan Gur, Stanford University, Stanford, CA

Source URL: papers.nips.cc

- Date: 2014-12-02 18:46:52
18. An Optimal Algorithm for the Thresholding Bandit Problem. Andrea Locatelli, Maurilio Gutzeit, Alexandra Carpentier, Department of Mathematics, University of Potsdam, Germany

Source URL: jmlr.org

- Date: 2016-10-08 19:36:38
19. Towards Minimax Policies for Online Linear Optimization with Bandit Feedback. JMLR: Workshop and Conference Proceedings, 25th Annual Conference on Learning Theory

Source URL: www.jmlr.org

- Date: 2012-06-17 06:50:54
20. On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards. Yi Gai, Bhaskar Krishnamachari, and Mingyan Liu

Source URL: www-scf.usc.edu

- Date: 2011-07-08 03:24:20