Large Language Models as Superpositions of Cultural Perspectives

Kavli Affiliate: Peter Ford

| First 5 Authors: Grgur Kovač, Masataka Sawayama, Rémy Portelas, Cédric Colas, Peter Ford Dominey

| Summary:

Large Language Models (LLMs) are often misleadingly perceived as having a
fixed personality or a single set of values. We argue that an LLM is better
seen as a superposition of perspectives with different values and personality
traits. LLMs exhibit context-dependent values and personality traits that
change with the induced perspective (unlike humans, whose values and
personality traits tend to be more coherent across contexts). We introduce the
concept of perspective controllability: a model's capacity to adopt various
perspectives with differing values and personality traits. In our
experiments, we use questionnaires from psychology (PVQ, VSM, IPIP) to study
how exhibited values and personality traits change based on different
perspectives. Through qualitative experiments, we show that LLMs express
different values when those values are implied (implicitly or explicitly) in
the prompt, and that they express different values even when none are
obviously implied, demonstrating the context-dependent nature of their
responses. We then
conduct quantitative experiments to study the controllability of different
models (GPT-4, GPT-3.5, OpenAssistant, StableVicuna, StableLM), the
effectiveness of various methods for inducing perspectives, and the smoothness
of the models’ drivability. We conclude by examining the broader implications
of our work and outlining a variety of associated scientific questions. The
project website is available at
https://sites.google.com/view/llm-superpositions .
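
As a rough illustration of the questionnaire protocol described above (a
minimal sketch, not the authors' code: the personas, the paraphrased
PVQ-style item, and the model name are illustrative assumptions), the
following Python snippet induces different perspectives via the system
message and administers the same Likert item to each:

```python
# Minimal sketch: induce a perspective, then administer one questionnaire item.
# Assumes the `openai` Python client (>= 1.0) and an OPENAI_API_KEY in the
# environment. Persona texts and the item are illustrative, not the paper's.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "default": "You are a helpful assistant.",
    "artist": "You are a free-spirited artist who values creativity above all.",
    "banker": "You are a cautious banker who values security and tradition.",
}

# A PVQ-style item (paraphrased): rate how much the described person is like
# you, from 1 (not like me at all) to 6 (very much like me).
ITEM = (
    "Thinking up new ideas and being creative is important to this person. "
    "How much is this person like you? Answer with a single number from 1 "
    "(not like me at all) to 6 (very much like me)."
)

for name, persona in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": persona},  # perspective induction
            {"role": "user", "content": ITEM},
        ],
        temperature=0,  # near-deterministic answers for comparability
    )
    print(f"{name}: {response.choices[0].message.content.strip()}")
```

Comparing the scores elicited under different personas (and under none) gives
a crude version of the controllability signal the paper quantifies with full
questionnaires.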

| Search Query: ArXiv Query: search_query=au:"Peter Ford"&id_list=&start=0&max_results=3

Read More