Large-scale Foundation Models and Generative AI for BigData Neuroscience

Kavli Affiliate: Ran Wang

| First 5 Authors: Ran Wang, Zhe Sage Chen

| Summary:

Recent advances in machine learning have enabled revolutionary breakthroughs in
computer games, image and natural language understanding, and scientific
discovery. Foundation models and large language models (LLMs) have recently
achieved human-like performance on many tasks, powered by BigData. With the help
of self-supervised learning (SSL) and transfer learning, these models may
reshape the landscape of neuroscience research and have a significant impact on
its future. Here we present a mini-review of recent advances in foundation
models and generative AI models, as well as their applications in neuroscience,
including natural language and speech, semantic memory, brain-machine interfaces
(BMIs), and data augmentation. We argue that this paradigm-shifting framework
will open new avenues for many neuroscience research directions, and we discuss
the accompanying challenges and opportunities.
