Topics: Operations research / Science / Dynamic programming / Markov processes / Stochastic control / Reinforcement learning / Mechanism design / Markov decision process / Vickrey–Clarke–Groves auction / Statistics / Control theory / Game theory
Date: 2005-01-05 13:04:15

Approximately Efficient Online Mechanism Design. David C. Parkes, DEAS (Maxwell-Dworkin), Harvard University


Source URL: www.eecs.harvard.edu


File Size: 132.16 KB


Similar Documents

Cooperative Multi-Agent Control Using Deep Reinforcement Learning Jayesh K. Gupta Maxim Egorov


Distributed Computing, Prof. R. Wattenhofer. SA/MA: Byzantine Reinforcement Learning


Distributed Computing, Prof. R. Wattenhofer. Generating CAPTCHAs with Deep (Reinforcement) Learning


Multi-step Bootstrapping. Jennifer She. Based on Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. February 7, 2017


Cellular Network Traffic Scheduling using Deep Reinforcement Learning. Sandeep Chinchali et al. (Marco Pavone, Sachin Katti), Stanford University. AAAI 2018
