On learning dynamics underlying the evolution of learning rules.

Details

Resource 1 — Download: BIB_E6628D11E4D8.P001.pdf (1631.94 KB)
Status: Public
Version: Final published version
ID Serval
serval:BIB_E6628D11E4D8
Type
Article: journal or magazine article
Collection
Publications
Title
On learning dynamics underlying the evolution of learning rules.
Journal
Theoretical Population Biology
Author(s)
Dridi S., Lehmann L.
ISSN
1096-0325 (Electronic)
ISSN-L
0040-5809
Editorial status
Published
Publication date
2014
Peer-reviewed
Yes
Volume
91
Pages
20-36
Language
English
Abstract
In order to understand the development of non-genetically encoded actions during an animal's lifespan, it is necessary to analyze the dynamics and evolution of learning rules producing behavior. Owing to the intrinsic stochastic and frequency-dependent nature of learning dynamics, these rules are often studied in evolutionary biology via agent-based computer simulations. In this paper, we show that stochastic approximation theory can help to qualitatively understand learning dynamics and formulate analytical models for the evolution of learning rules. We consider a population of individuals repeatedly interacting during their lifespan, and where the stage game faced by the individuals fluctuates according to an environmental stochastic process. Individuals adjust their behavioral actions according to learning rules belonging to the class of experience-weighted attraction learning mechanisms, which includes standard reinforcement and Bayesian learning as special cases. We use stochastic approximation theory in order to derive differential equations governing action play probabilities, which turn out to have qualitative features of mutator-selection equations. We then perform agent-based simulations to find the conditions where the deterministic approximation is closest to the original stochastic learning process for standard 2-action 2-player fluctuating games, where interaction between learning rules and preference reversal may occur. Finally, we analyze a simplified model for the evolution of learning in a producer-scrounger game, which shows that the exploration rate can interact in a non-intuitive way with other features of co-evolving learning rules. Overall, our analyses illustrate the usefulness of applying stochastic approximation theory in the study of animal learning.
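The experience-weighted attraction (EWA) class mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' exact model; parameter names (`phi`, `delta`, `rho`, `lam`) follow the standard EWA formulation, and setting `delta = 0` recovers payoff-based reinforcement learning while `delta = 1` gives a belief-based (fictitious-play-like) rule, consistent with the abstract's claim that both are special cases:

```python
import math

def ewa_update(attractions, N, chosen, payoffs, phi=0.9, delta=0.5, rho=0.9):
    """One EWA step for a finite-action game.

    attractions: current attraction A_j for each action j
    N: experience weight carried over from the previous round
    chosen: index of the action actually played
    payoffs: payoff each action would have earned against the opponent's move
    """
    N_new = rho * N + 1.0
    new_attractions = []
    for j, (A, pi) in enumerate(zip(attractions, payoffs)):
        # Forgone payoffs are weighted by delta; the realized payoff by 1.
        weight = delta + (1.0 - delta) * (1.0 if j == chosen else 0.0)
        new_attractions.append((phi * N * A + weight * pi) / N_new)
    return new_attractions, N_new

def choice_probabilities(attractions, lam=2.0):
    """Logit choice rule: P_j proportional to exp(lam * A_j)."""
    exps = [math.exp(lam * A) for A in attractions]
    total = sum(exps)
    return [e / total for e in exps]
```

Iterating this update under stochastically fluctuating payoffs is the kind of process that stochastic approximation theory replaces with a deterministic differential equation on the action-play probabilities.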
Keywords
Fluctuating environments, Evolutionary game theory, Stochastic approximation, Reinforcement learning, Fictitious play, Producer-scrounger game
PubMed
Web of Science
Record created
19/05/2013 11:42
Record last modified
20/08/2019 16:09