The Bandit

Results: 87



1. The N-Tuple Bandit Evolutionary Algorithm for Automatic Game Improvement. Kamolwan Kunanusont, Raluca D. Gaina, Jialin Liu, Diego Perez-Liebana and Simon M. Lucas. University of Essex, Colchester, UK. Email: {kkunan, rdgain

Source URL: www.diego-perez.net

Language: English - Date: 2017-03-23 19:41:34
2. Supplementary Material for "Combinatorial multi-armed bandit: general framework, results and applications", by Wei Chen, Yajun Wang, and Yang Yuan. A. Full proof of Theorem 1. We use the following two well known bound

Source URL: proceedings.mlr.press

Language: English - Date: 2018-07-16 03:38:06
3. Using Bandit Algorithms on Changing Reward Rates. Introduction. One of the problems we have at System1 is updating our estimate of a feature's performance over time. Even if our initial estimate is correct, the performan

Source URL: www.system1.com

Language: English - Date: 2018-07-13 19:30:08
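The problem this entry describes, an estimate going stale as the true reward rate drifts, is commonly handled by replacing the sample average with a constant step size (exponential recency weighting). The sketch below is generic, not taken from the System1 article; `alpha` is a hypothetical smoothing parameter.

```python
# Minimal sketch of tracking a changing reward rate with a constant step size.
# Each update moves the estimate a fraction alpha toward the latest reward,
# so old observations decay geometrically instead of accumulating forever.

def update(estimate, reward, alpha=0.1):
    """Return the estimate moved a fraction alpha toward the observed reward."""
    return estimate + alpha * (reward - estimate)

# Example: the arm's true rate drops midway through the sequence; the
# constant-alpha estimate keeps adapting, where a plain sample average
# would respond ever more slowly as its sample count grows.
estimate = 0.0
for reward in [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]:
    estimate = update(estimate, reward)
```

A sliding window over recent rewards is a common alternative; the constant-alpha form is usually preferred because it needs O(1) memory per arm.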
4. The Non-Bayesian Restless Multi-Armed Bandit: A Case of Near-Logarithmic Regret. Wenhan Dai†∗, Yi Gai‡, Bhaskar Krishnamachari‡, Qing Zhao§. † School of Information Science and Technology, Tsinghua Universit

Source URL: ceng.usc.edu

Language: English - Date: 2011-10-16 14:12:22
5. Online Algorithms for the Multi-Armed Bandit Problem with Markovian Rewards. arXiv:1007.2238v2 [math.OC] 26 Jul 2010. Cem Tekin, Mingyan Liu

Source URL: arxiv.org

Language: English - Date: 2010-07-26 20:13:34
6. Boosting with Online Binary Learners for the Multiclass Bandit Problem. Shang-Tse Chen, School of Computer Science, Georgia Institute of Technology, Atlanta, GA. SCHEN351@GATECH.EDU

Source URL: jmlr.org

Date: 2014-02-16 19:30:21
7. Approximations of the Restless Bandit Problem. Steffen Grünewälder. S.GRUNEWALDER@LANCASTER.AC.UK

Source URL: ewrl.files.wordpress.com

Date: 2016-11-21 10:27:48
8. An optimal algorithm for the Thresholding Bandit Problem. Andrea Locatelli, Maurilio Gutzeit, Alexandra Carpentier. Department of Mathematics, University of Potsdam, Germany

Source URL: jmlr.org

Date: 2016-10-08 19:36:38
9. On the Combinatorial Multi-Armed Bandit Problem with Markovian Rewards. Yi Gai∗, Bhaskar Krishnamachari∗ and Mingyan Liu‡

Source URL: www-scf.usc.edu

Date: 2011-07-08 03:24:20
10. The Non-Bayesian Restless Multi-Armed Bandit: A Case of Near-Logarithmic Regret. Wenhan Dai†∗, Yi Gai‡, Bhaskar Krishnamachari‡, Qing Zhao§. † School of Information Science and Technology, Tsinghua Universit

Source URL: www-scf.usc.edu

Date: 2011-02-13 17:38:25