Reinforcement learning at the interface of artificial intelligence and cognitive science
Publication type: Journal Article
Publication date: 2025-10-01
Scimago: Q2
WoS: Q3
White List level: БС1
SJR: 1.008
CiteScore: 5.6
Impact Factor: 2.8
ISSN: 0306-4522, 1873-7544
Abstract
Reinforcement learning (RL) is a computational framework that models how agents learn from trial and error to make sequential decisions. Rooted in behavioural psychology, RL has become central to artificial intelligence and is increasingly applied in healthcare to personalize treatment strategies, optimize clinical workflows, guide robotic surgery, and adapt neurorehabilitation. These same properties, learning from outcomes in dynamic and uncertain environments, make RL a powerful lens for modelling human cognition. This review introduces RL to neuroscientists, clinicians, and psychologists, aiming to bridge artificial intelligence and brain science through accessible terminology and clinical analogies. We first outline foundational RL concepts and explain key algorithms such as temporal-difference learning, Q-learning, and policy gradient methods. We then connect RL mechanisms to neurobiological processes, including dopaminergic reward prediction errors, hippocampal replay, and frontostriatal loops, which support learning, planning, and habit formation. RL’s incorporation into cognitive architectures such as ACT-R, SOAR, and CLARION further demonstrates its utility in modelling attention, memory, decision-making, and language. Beyond these foundations, we critically examine RL’s capacity to explain human behaviour, from developmental changes to cognitive biases, and discuss emerging applications of deep RL in simulating complex cognitive tasks. Importantly, we argue that RL should be viewed not only as a modelling tool but as a unifying framework that highlights limitations in current methods and points toward new directions. Our perspective emphasizes hybrid symbolic–subsymbolic models, multi-agent RL for social cognition, and adaptive healthcare applications, offering a roadmap for interdisciplinary research that integrates computation, neuroscience, and clinical practice.
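The abstract names temporal-difference learning and Q-learning among the algorithms the review explains. As an illustration only (this is not code from the reviewed article), a minimal tabular Q-learning sketch on a hypothetical five-state corridor task looks like the following, where the TD error `delta` plays the role of the reward prediction error the authors link to dopaminergic signalling:

```python
import random

# Illustrative toy only, not code from the reviewed article.
# Tabular Q-learning on a 5-state corridor: the agent starts at
# state 0 and receives reward 1.0 for reaching terminal state 4.
N_STATES = 5
ACTIONS = [-1, +1]                   # 0 = step left, 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]

random.seed(0)
for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection (random on ties)
        if random.random() < EPSILON or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # temporal-difference error: the "reward prediction error"
        delta = r + GAMMA * max(Q[s_next]) - Q[s][a]
        Q[s][a] += ALPHA * delta
        s = s_next

# Greedy policy after learning (should prefer "right" everywhere)
policy = ["left" if Q[s][0] > Q[s][1] else "right"
          for s in range(N_STATES - 1)]
print(policy)
```

Because the environment is deterministic, the learned values converge to the discounted optimal values (1.0, 0.9, 0.81, ... moving away from the goal), so the greedy policy moves right from every non-terminal state.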
Top-30
Journals:
- Technologies: 1 publication (33.33%)
- Chaos, Solitons and Fractals: 1 publication (33.33%)
- Frontiers in Computational Neuroscience: 1 publication (33.33%)
Publishers:
- MDPI: 1 publication (33.33%)
- Elsevier: 1 publication (33.33%)
- Frontiers Media S.A.: 1 publication (33.33%)
- Publications without a DOI are not counted.
- Publication statistics are updated weekly.
Metrics
Total citations: 3
Citations since 2025: 3 (100%)
Cite
GOST:
Alkam T., Tarshizi E., Van Benschoten A. H. Reinforcement learning at the interface of artificial intelligence and cognitive science // Neuroscience. 2025. Vol. 585. pp. 289-312.
RIS:
TY - JOUR
DO - 10.1016/j.neuroscience.2025.09.004
UR - https://linkinghub.elsevier.com/retrieve/pii/S0306452225009182
TI - Reinforcement learning at the interface of artificial intelligence and cognitive science
T2 - Neuroscience
AU - Alkam, Tursun
AU - Tarshizi, Ebrahim
AU - Van Benschoten, Andrew H.
PY - 2025
DA - 2025/10/01
PB - Elsevier
SP - 289-312
VL - 585
SN - 0306-4522
SN - 1873-7544
ER -
BibTeX:
@article{2025_Alkam,
author = {Tursun Alkam and Ebrahim Tarshizi and Andrew H. Van Benschoten},
title = {Reinforcement learning at the interface of artificial intelligence and cognitive science},
journal = {Neuroscience},
year = {2025},
volume = {585},
publisher = {Elsevier},
month = {oct},
url = {https://linkinghub.elsevier.com/retrieve/pii/S0306452225009182},
pages = {289--312},
doi = {10.1016/j.neuroscience.2025.09.004}
}