The Temporal Structure of Language Processing in the Human Brain Corresponds to the Layered Hierarchy of Deep Language Models

Kavli Affiliate: Michael Brenner

| First 5 Authors: Ariel Goldstein, Eric Ham, Mariano Schain, Samuel Nastase, Zaid Zada

| Summary:

Deep Language Models (DLMs) provide a novel computational paradigm for
understanding the mechanisms of natural language processing in the human brain.
Unlike traditional psycholinguistic models, DLMs use layered sequences of
continuous numerical vectors to represent words and context, enabling a range
of emerging applications such as human-like text generation. In this paper, we
show evidence that the layered hierarchy of DLMs may be used to model
the temporal dynamics of language comprehension in the brain by demonstrating a
strong correlation between a layer's depth in the DLM and the time at which
that layer best predicts neural activity. Our ability to temporally resolve
individual layers benefits from our use of electrocorticography (ECoG) data,
which has a much higher temporal resolution than noninvasive methods like fMRI.
Using ECoG, we record neural activity from participants listening to a
30-minute narrative while also feeding the same narrative to a high-performing
DLM (GPT2-XL). We then extract contextual embeddings from the different layers
of the DLM and use linear encoding models to predict neural activity. We first
focus on the Inferior Frontal Gyrus (IFG, or Broca’s area) and then extend our
model to track the increasing temporal receptive window along the linguistic
processing hierarchy from auditory to syntactic and semantic areas. Our results
reveal a connection between human language processing and DLMs, with the DLM’s
layer-by-layer accumulation of contextual information mirroring the timing of
neural activity in high-order language areas.
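To make the layer-wise encoding analysis concrete, below is a minimal sketch in Python, illustrative only and not the authors' pipeline. The variables words, word_onsets, neural_signal, and fs are hypothetical placeholders for a transcript's words, their onset times in seconds, one electrode's recording, and its sampling rate; the lag grid and ridge penalty are likewise assumptions for illustration.

# Minimal sketch of a layer-wise lagged encoding analysis (illustrative only).
# Placeholders (not from the paper): `words` is the transcript split into
# words, `word_onsets` gives each word's onset time in seconds, and
# `neural_signal` is one electrode's trace sampled at `fs` Hz.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict
from transformers import GPT2TokenizerFast, GPT2Model

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2-xl")
model = GPT2Model.from_pretrained("gpt2-xl", output_hidden_states=True)
model.eval()

# (A real analysis would process a long narrative in overlapping windows,
# since GPT-2's context is limited to 1024 tokens.)
text = " ".join(words)
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# hidden_states: tuple of (n_layers + 1) tensors, each (1, n_tokens, 1600)
hidden_states = outputs.hidden_states

# Map sub-word tokens back to words by averaging token embeddings per word.
word_ids = inputs.word_ids(0)  # token index -> word index

def word_embeddings(layer):
    emb = hidden_states[layer][0].numpy()
    return np.stack([emb[[i for i, w in enumerate(word_ids) if w == j]].mean(0)
                     for j in range(len(words))])

lags = np.arange(-2.0, 2.01, 0.25)  # seconds relative to word onset (assumed grid)
peak_lag = {}
for layer in range(1, len(hidden_states)):  # skip the input embedding layer
    X = word_embeddings(layer)
    scores = []
    for lag in lags:
        # Sample the neural signal at a fixed lag from each word's onset.
        idx = np.round((np.asarray(word_onsets) + lag) * fs).astype(int)
        valid = (idx >= 0) & (idx < len(neural_signal))
        y = neural_signal[idx[valid]]
        # Cross-validated linear (ridge) encoding model: embeddings -> signal.
        pred = cross_val_predict(Ridge(alpha=1.0), X[valid], y, cv=5)
        scores.append(np.corrcoef(pred, y)[0, 1])
    peak_lag[layer] = lags[int(np.argmax(scores))]

In this sketch, each layer's encoding performance is traced across lags and the lag of peak performance is recorded per layer; the paper's central result is that, in high-order language areas such as the IFG, deeper layers peak at later lags, so layer depth tracks processing time.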

| Search Query: ArXiv Query: search_query=au:"Michael Brenner"&id_list=&start=0&max_results=3
