An actor-model framework for visual sensory encoding.
Details
Download: 2024_Leong_NatCommun.pdf (2431.97 KB)
Status: Public
Version: Final published version
License: CC BY 4.0
ID Serval
serval:BIB_834BAD52FD46
Type
Article: article from a journal or magazine.
Collection
Publications
Institution
Title
An actor-model framework for visual sensory encoding.
Journal
Nature Communications
ISSN
2041-1723 (Electronic)
ISSN-L
2041-1723
Editorial status
Published
Publication date
27/01/2024
Peer-reviewed
Yes
Volume
15
Issue
1
Pages
808
Language
English
Notes
Publication types: Journal Article
Publication Status: epublish
Abstract
A fundamental challenge in neuroengineering is determining a proper artificial input to a sensory system that yields the desired perception. In neuroprosthetics, this process is known as artificial sensory encoding, and it plays a crucial role in prosthetic devices restoring sensory perception in individuals with disabilities. For example, in visual prostheses, one key aspect of artificial image encoding is downsampling images captured by a camera to a size matching the number of inputs and the resolution of the prosthesis. Here, we show that downsampling an image using the inherent computation of the retinal network yields better performance than learning-free downsampling methods. We have validated a learning-based approach (actor-model framework) that exploits the signal transformation from photoreceptors to retinal ganglion cells measured in explanted mouse retinas. The actor-model framework generates downsampled images eliciting a neuronal response in silico and ex vivo with higher neuronal reliability than that produced by a learning-free approach. During the learning process, the actor network learns to optimize contrast and the kernel's weights. This methodological approach might guide future artificial image encoding strategies for visual prostheses. Ultimately, this framework could be applicable to encoding strategies in other sensory prostheses, such as cochlear or limb prostheses.
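The paper's framework trains an actor network against a model of the retinal response; as a loose numerical sketch of the underlying idea only (learn downsampling kernel weights so that the modeled response to the low-resolution image matches the response to the original, rather than averaging pixels blindly), assuming a toy linear "retina model" and toy sizes that are not the paper's actual networks or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumptions for illustration).
H = W = 8            # full-resolution image
k = 2                # downsampling factor -> 4x4 low-resolution "implant" input
n_cells = 16         # simulated retinal ganglion cells

# Stand-in linear "retina model" for full-resolution input, plus a matched
# low-resolution readout obtained by block-averaging its weights.
M_full = rng.normal(size=(n_cells, H * W))
M_low = (M_full.reshape(n_cells, H // k, k, W // k, k)
               .mean(axis=(2, 4)).reshape(n_cells, -1))

img = rng.random((H, W))
target = M_full @ img.ravel()        # response the low-res input should evoke

# Rearrange the image into (n_blocks, k*k) so downsampling with a shared
# k*k kernel w is a matrix product: low_res = blocks @ w.
blocks = img.reshape(H // k, k, W // k, k).transpose(0, 2, 1, 3).reshape(-1, k * k)

# Learning-free baseline: plain uniform averaging.
w_uniform = np.full(k * k, 1.0 / (k * k))

A = M_low @ blocks                   # maps kernel weights to model responses

def mismatch(w):
    """Squared error between the evoked and the desired model response."""
    return float(np.sum((A @ w - target) ** 2))

# "Actor": learn the kernel weights by gradient descent on the mismatch.
w = w_uniform.copy()
lr = 1.0 / (2.0 * np.linalg.eigvalsh(A.T @ A).max())   # safe step for this quadratic
for _ in range(2000):
    w -= lr * (2.0 * A.T @ (A @ w - target))

print(mismatch(w_uniform), mismatch(w))   # learned kernel gives the lower mismatch
```

In this caricature the "retina" is linear, so the optimization is a small least-squares problem; in the paper both the actor and the retinal model are neural networks, and the actor additionally learns a contrast adjustment.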
Keywords
Mice, Animals, Reproducibility of Results, Retina, Retinal Ganglion Cells/physiology, Learning/physiology, Visual Prosthesis, Visual Perception/physiology
PubMed
Web of Science
Open Access
Yes
Record created
12/02/2024 15:12
Record last modified
09/08/2024 14:52