Deep Learning Improves Parameter Estimation in Reinforcement Learning Models

Kavli Affiliate: Marcelo Mattar | Authors: Hua-Dong Xiong, Li Ji-An, Marcelo G Mattar and Robert C Wilson | Summary: Cognitive modeling provides a formal method to articulate and test hypotheses about cognitive processes. However, accurately and reliably estimating model parameters remains challenging due to common issues in behavioral science, such as limited data, measurement noise, […]

Age-related Increase in Locus Coeruleus Activity and Connectivity with Prefrontal Cortex during Ambiguity Processing

Kavli Affiliate: Maryam Ziaei | Authors: Arjun Dave, Shuer Ye, Leona R Batz, Xiaqing Lan, Heidi Jacobs and Maryam Ziaei | Summary: Interpreting ambiguous environmental cues, like facial expressions, becomes increasingly challenging with age, especially as cognitive resources decline. Managing these challenges requires adaptive neural mechanisms that are essential for maintaining mental well-being. The locus […]

Cortical synaptic vulnerabilities revealed in an α-synuclein aggregation model of Parkinson's disease

Kavli Affiliate: Michael J Higley | Authors: Saroj Sah, Andrew D Sauerbeck, Jyoti Gupta, Dayana Pérez-Acuña, Jacob E Reiber, Dreson Russell, Thomas Goralski, Michael Henderson, Laura A Volpicelli-Daley, Michael J Higley, Terrance T Kummer and Thomas Biederer | Summary: Cognitive impairment is a frequent non-motor symptom in Parkinson’s disease, and cortical Lewy pathology is strongly […]

CausalAbstain: Enhancing Multilingual LLMs with Causal Reasoning for Trustworthy Abstention

Kavli Affiliate: Wei Gao | First 5 Authors: Yuxi Sun, Aoqi Zuo, Wei Gao, Jing Ma, | Summary: Large Language Models (LLMs) often exhibit knowledge disparities across languages. Encouraging LLMs to abstain when faced with knowledge gaps is a promising strategy to reduce hallucinations in multilingual settings. Current abstention strategies for multilingual scenarios primarily rely […]