A model for the evolution of reinforcement learning in fluctuating games

Details

This publication is an older version. This record has been superseded by serval:BIB_AC26C188AEA1
Resource 1 (download): BIB_E036279061EF.P001.pdf (7159.61 KB)
Status: Public
Version: author's version
Serval ID
serval:BIB_E036279061EF
Type
Article: article from a journal or magazine.
Collection
Publications
Institution
Title
A model for the evolution of reinforcement learning in fluctuating games
Journal
Animal Behaviour
Authors
Dridi S., Lehmann L.
ISSN
1095-8282
ISSN-L
0003-3472
Editorial status
Published
Publication date
2015
Peer-reviewed
Yes
Volume
104
Pages
87-114
Language
English
Abstract
Many species are able to learn to associate behaviours with rewards as this gives fitness advantages in changing environments. Social interactions between population members may, however, require more cognitive abilities than simple trial-and-error learning, in particular the capacity to make accurate hypotheses about the material payoff consequences of alternative action combinations. It is unclear in this context whether natural selection necessarily favours individuals to use information about payoffs associated with nontried actions (hypothetical payoffs), as opposed to simple reinforcement of realized payoff. Here, we develop an evolutionary model in which individuals are genetically determined to use either trial-and-error learning or learning based on hypothetical reinforcements, and ask what is the evolutionarily stable learning rule under pairwise symmetric two-action stochastic repeated games played over the individual's lifetime. We analyse through stochastic approximation theory and simulations the learning dynamics on the behavioural timescale, and derive conditions where trial-and-error learning outcompetes hypothetical reinforcement learning on the evolutionary timescale. This occurs in particular under repeated cooperative interactions with the same partner. By contrast, we find that hypothetical reinforcement learners tend to be favoured under random interactions, but stable polymorphisms can also obtain where trial-and-error learners are maintained at a low frequency. We conclude that specific game structures can select for trial-and-error learning even in the absence of costs of cognition, which illustrates that cost-free increased cognition can be counterselected under social interactions.
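The abstract contrasts two learning rules: trial-and-error learners reinforce only the action they actually played with its realized payoff, whereas hypothetical reinforcement learners also update untried actions using the payoffs those actions would have yielded. The Python sketch below only illustrates that contrast and is not the paper's model: the fixed 2x2 payoff matrix, the softmax choice rule, the linear-operator updates and all parameter values are assumptions, and the stochastic fluctuating games and evolutionary dynamics analysed in the paper are omitted.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x2 payoff matrix (here a Prisoner's Dilemma with actions
# 0 = cooperate, 1 = defect); a fixed matrix is an assumption of this sketch,
# the paper's games are stochastic and fluctuating.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])  # rows: own action, columns: partner's action

def choose(motivations, temperature=1.0):
    # Softmax (logit) choice over the two actions; the choice rule and
    # temperature are illustrative assumptions, not the paper's exact rule.
    weights = np.exp(motivations / temperature)
    probs = weights / weights.sum()
    return rng.choice(2, p=probs)

def update_trial_and_error(motivations, own_action, partner_action, step=0.1):
    # Trial-and-error learning: only the action actually played is moved
    # toward its realized payoff.
    m = motivations.copy()
    m[own_action] += step * (PAYOFF[own_action, partner_action] - m[own_action])
    return m

def update_hypothetical(motivations, partner_action, step=0.1):
    # Hypothetical reinforcement: every action, tried or not, is moved toward
    # the payoff it would have earned against the partner's observed action.
    m = motivations.copy()
    for a in range(2):
        m[a] += step * (PAYOFF[a, partner_action] - m[a])
    return m

# Repeated interaction between one learner of each type.
m_te = np.zeros(2)  # trial-and-error learner's action motivations
m_hy = np.zeros(2)  # hypothetical-reinforcement learner's action motivations
for _ in range(1000):
    a_te, a_hy = choose(m_te), choose(m_hy)
    m_te = update_trial_and_error(m_te, a_te, a_hy)
    m_hy = update_hypothetical(m_hy, partner_action=a_te)

print("trial-and-error motivations:", m_te)
print("hypothetical-learner motivations:", m_hy)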
Keywords
evolution of cognition, evolutionarily stable learning rules, exploration-exploitation trade-off, repeated games, social interactions, trial-and-error learning
Web of Science
Open Access
Yes
Record created
23/01/2015 19:14
Record last modified
30/10/2023 10:00
Usage data