Reinforcement learning / Complex systems theory / Multi-agent systems / Q-learning / Action selection / Machine learning / Agent-based model / Reinforcement / Markov decision process / Statistics / Science / Artificial intelligence
Date: 2009-09-27 12:55:32

MACHINE LEARNING COURSE / Fall [removed]: Hierarchical Multi-Agent Reinforcement Learning for Dynamic Coverage Control


Source URL: www.cs.sfu.ca


File Size: 211.17 KB


Similar Documents

Cooperative Multi-Agent Control Using Deep Reinforcement Learning. Jayesh K. Gupta, Maxim Egorov

Distributed Computing, Prof. R. Wattenhofer. SA/MA: Byzantine Reinforcement Learning

Distributed Computing, Prof. R. Wattenhofer. Generating CAPTCHAs with Deep (Reinforcement) Learning

Multi-step Bootstrapping. Jennifer She. Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto. February 7, 2017

Cellular Network Traffic Scheduling using Deep Reinforcement Learning. Sandeep Chinchali et al., Marco Pavone, Sachin Katti. Stanford University, AAAI 2018