“LazImpa”: Lazy and Impatient neural agents learn to communicate efficiently
Top 1% of 2020 papers by citations
Abstract
Previous work has shown that artificial neural agents naturally develop surprisingly inefficient codes. This is illustrated by the fact that in a referential game, where speaker and listener neural networks optimize accurate transmission over a discrete channel, the emergent messages fail to achieve an optimal length. Furthermore, frequent messages tend to be longer than infrequent ones, a pattern contrary to the Zipf Law of Abbreviation (ZLA) observed in all natural languages. Here, we show that near-optimal and ZLA-compatible messages can emerge, but only if both the speaker and the listener are modified. We hence introduce a new communication system, "LazImpa", in which the speaker is made increasingly lazy, i.e., it avoids long messages, and the listener impatient, i.e., it seeks to guess the intended content as soon as possible.
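As a point of reference for what "optimal length" and ZLA-compatibility mean here, the sketch below (not the paper's method, just a standard baseline) builds a binary Huffman code over a Zipfian frequency distribution and checks that more frequent messages receive codes that are never longer than those of rarer messages, which is the pattern the emergent languages initially fail to show:

```python
import heapq

def huffman_code_lengths(freqs):
    """Return binary Huffman code lengths for the given message frequencies."""
    # Heap entries: (subtree frequency, unique tiebreak, symbol indices in subtree).
    heap = [(f, i, [i]) for i, f in enumerate(freqs)]
    heapq.heapify(heap)
    lengths = [0] * len(freqs)
    tiebreak = len(freqs)
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every symbol they contain.
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (f1 + f2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

# Zipfian frequencies: the rank-r message occurs with frequency proportional to 1/r.
freqs = [1.0 / r for r in range(1, 9)]
lengths = huffman_code_lengths(freqs)
print(lengths)  # code lengths by frequency rank, most frequent first

# ZLA-compatible: as frequency decreases, code length never decreases.
assert all(a <= b for a, b in zip(lengths, lengths[1:]))
```

An optimal code like this is the yardstick against which the emergent messages are judged: in the unmodified game the relation is inverted, with frequent inputs mapped to longer messages.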
Related Papers
- Deviation of Zipf's and Heaps' Laws in Human Languages with Limited Dictionary Sizes (2013), 56 citations
- Emergence of Zipf's law in the evolution of communication (2011), 48 citations
- Unzipping Zipf's law (2017), 38 citations
- Uneven landscapes and the city size distribution (2010)