Open Access
NeuroImage, Volume 221, Article 117143
Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system
Kozunov Vladimir V.¹, West T.²·³, Nikolaeva Anastasia¹, Stroganova Tatiana A.¹, Friston Karl J.³
Publication type: Journal Article
Publication date: 2020-11-01
Neurology
Cognitive Neuroscience
Abstract
This paper addresses perceptual synthesis by comparing responses evoked by visual stimuli before and after they are recognized, depending on prior exposure. Using magnetoencephalography, we analyzed distributed patterns of neuronal activity – evoked by Mooney figures – before and after they were recognized as meaningful objects. Recognition-induced changes were first seen at 100–120 ms, for both faces and tools. These early effects – in right inferior and middle occipital regions – were characterized by an increase in power in the absence of any changes in spatial patterns of activity. Within a later 210–230 ms window, a quite different type of recognition effect appeared. Regions of the brain’s value system (insula, entorhinal cortex and cingulate of the right hemisphere for faces, and right orbitofrontal cortex for tools) evinced a reorganization of their neuronal activity without an overall power increase in the region. Finally, we found that during the perception of disambiguated face stimuli, a face-specific response in the right fusiform gyrus emerged at 240–290 ms, with a much greater latency than the well-known N170m component, and, crucially, followed the recognition effect in the value system regions. These results can clarify one of the most intriguing issues of perceptual synthesis, namely, how a limited set of high-level predictions – required to reduce uncertainty when resolving the ill-posed inverse problem of perception – can be available before category-specific processing in visual cortex. We suggest that a subset of local spatial features serves as partial cues for a fast re-activation of object-specific appraisal by the value system. The ensuing top-down feedback from the value system to visual cortex, in particular the fusiform gyrus, enables high levels of processing to form category-specific predictions.
This descending influence of the value system was more prominent for faces than for tools, a fact that reflects the different dependence of these categories on value-related information.
Citations by journals
- Journal of Vision — 1 publication (33.33%)
- Biological Cybernetics — 1 publication (33.33%)
- Perspectives on Psychological Science — 1 publication (33.33%)
Citations by publishers
- Association for Research in Vision and Ophthalmology (ARVO) — 1 publication (33.33%)
- Springer Nature — 1 publication (33.33%)
- SAGE — 1 publication (33.33%)
- Publications without a DOI are not taken into account.
- Statistics are recalculated only for publications connected to researchers, organizations and labs registered on the platform.
- Statistics are recalculated weekly.
Citations per year: 2021 — 2, 2022 — 0, 2023 — 1.
Cite this
GOST
Kozunov V. V. et al. Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system // NeuroImage. 2020. Vol. 221. p. 117143.
GOST (all authors)
Kozunov V. V., West T., Nikolaeva A., Stroganova T. A., Friston K. J. Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system // NeuroImage. 2020. Vol. 221. p. 117143.
RIS
TY - JOUR
DO - 10.1016/j.neuroimage.2020.117143
UR - https://doi.org/10.1016%2Fj.neuroimage.2020.117143
TI - Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system
T2 - NeuroImage
AU - Kozunov, Vladimir V
AU - West, T.
AU - Nikolaeva, Anastasia
AU - Stroganova, Tatiana A.
AU - Friston, Karl J.
PY - 2020
DA - 2020/11/01 00:00:00
PB - Elsevier
SP - 117143
VL - 221
SN - 1053-8119
SN - 1095-9572
ER -
BibTeX
@article{2020_Kozunov,
author = {Vladimir V Kozunov and T. West and Anastasia Nikolaeva and Tatiana A. Stroganova and Karl J. Friston},
title = {Object recognition is enabled by an experience-dependent appraisal of visual features in the brain’s value system},
journal = {NeuroImage},
year = {2020},
volume = {221},
publisher = {Elsevier},
month = {nov},
url = {https://doi.org/10.1016%2Fj.neuroimage.2020.117143},
pages = {117143},
doi = {10.1016/j.neuroimage.2020.117143}
}