Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward
AAMAS 2018, pp. 2085–2087
Peter Sunehag, Guy Lever, Audrūnas Gruslys, Wojciech Marian Czarnecki, Vinícius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, Thore Graepel
Abstract
We study the problem of cooperative multi-agent reinforcement learning with a single joint reward signal. This class of learning problems is difficult because of the often large combined action and observation spaces. In the fully centralized and decentralized approaches, we find the problem of spurious rewards and a phenomenon we call the "lazy agent" problem, which arises due to partial observability. We address these problems by training individual agents with a novel value-decomposition network architecture, which learns to decompose the team value function into agent-wise value functions.
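The core idea of the decomposition is that the team value is represented as a sum of per-agent value functions, so each agent can act greedily on its own component while the sum still recovers the best joint action. The toy sketch below illustrates this additive structure only; the agent networks, dimensions, and function names are illustrative assumptions (the paper itself uses learned deep, recurrent Q-networks, not the random linear maps used here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 2 agents, each with a 4-dim observation and 3 actions.
N_AGENTS, N_ACTIONS, OBS_DIM = 2, 3, 4

# Random linear maps stand in for learned per-agent Q-networks (assumption:
# this is only a placeholder for the paper's deep recurrent architecture).
weights = [rng.normal(size=(OBS_DIM, N_ACTIONS)) for _ in range(N_AGENTS)]

def agent_q(i, obs):
    """Per-agent action values Q_i(obs_i, a_i)."""
    return obs @ weights[i]

def team_q(observations, joint_action):
    """Value decomposition: Q_team = sum_i Q_i(obs_i, a_i)."""
    return sum(agent_q(i, observations[i])[joint_action[i]]
               for i in range(N_AGENTS))

obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]

# Decentralized execution: each agent greedily maximizes its own Q_i ...
greedy = tuple(int(np.argmax(agent_q(i, obs[i]))) for i in range(N_AGENTS))

# ... which, because the team value is a sum of independent per-agent terms,
# also maximizes the team value over the exponentially large joint-action space.
best_joint = max(
    ((a0, a1) for a0 in range(N_ACTIONS) for a1 in range(N_ACTIONS)),
    key=lambda ja: team_q(obs, ja),
)
assert greedy == best_joint
```

The key property shown is that the argmax of the sum factorizes: only individual action values are needed at execution time, even though training uses the single team reward.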
Related Papers
- Learning to Communicate with Deep Multi-Agent Reinforcement Learning (2016)