Kavli Affiliate: Peter Ford
| First 5 Authors: Grgur Kovač, Rémy Portelas, Masataka Sawayama, Peter Ford Dominey, Pierre-Yves Oudeyer
| Summary:
The standard way to study Large Language Models (LLMs) with benchmarks or
psychology questionnaires is to provide many different queries from similar
minimal contexts (e.g., multiple-choice questions). However, due to LLMs’ highly
context-dependent nature, conclusions from such minimal-context evaluations may
reveal little about the model’s behavior in deployment (where it will
be exposed to many new contexts). We argue that context-dependence
(specifically, value stability) should be studied as a specific property of
LLMs and used as another dimension of LLM comparison (alongside others such as
cognitive abilities, knowledge, or model size). We present a case study of the
stability of value expression across different contexts (simulated conversations
on different topics), as measured using a standard psychology questionnaire
(the Portrait Values Questionnaire, PVQ) and behavioral downstream tasks.
Reusing methods from psychology, we
study Rank-order stability at the population (interpersonal) level and
Ipsative stability at the individual (intrapersonal) level (see the
illustrative sketch below). We consider two
settings (with and without instructing LLMs to simulate particular personas),
two simulated populations, and three downstream tasks. We observe consistent
trends in the stability of models and model families: the Mixtral, Mistral,
GPT-3.5, and Qwen families are more stable than LLaMa-2 and Phi. The consistency
of these trends implies that some models exhibit higher value stability than
others, and that stability can be estimated with the introduced
methodological tools. When instructed to simulate particular personas, LLMs
exhibit low Rank-order stability, which further diminishes with conversation
length. This highlights the need for future research on LLMs that coherently
simulate different personas. This paper provides a foundational step in that
direction, and, to our knowledge, it is the first study of value stability in
LLMs.
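
To make the two stability metrics concrete, here is a minimal sketch (not the authors' code; the array shapes, persona counts, and simulated context data are all hypothetical) of how Rank-order and Ipsative stability between two contexts could be computed from per-persona PVQ value scores, using NumPy and SciPy:

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

def rank_order_stability(scores_a, scores_b):
    """Population-level stability: for each value dimension, the Spearman
    rank correlation of persona scores across the two contexts, averaged
    over dimensions."""
    rhos = [spearmanr(scores_a[:, v], scores_b[:, v])[0]
            for v in range(scores_a.shape[1])]
    return float(np.mean(rhos))

def ipsative_stability(scores_a, scores_b):
    """Individual-level stability: for each persona, the correlation of its
    full value profile across the two contexts, averaged over personas."""
    rs = [pearsonr(scores_a[p], scores_b[p])[0]
          for p in range(scores_a.shape[0])]
    return float(np.mean(rs))

# Hypothetical data: 50 simulated personas scored on the 10 PVQ value
# dimensions in two different conversation contexts.
rng = np.random.default_rng(0)
context_a = rng.normal(size=(50, 10))
context_b = context_a + rng.normal(scale=0.5, size=(50, 10))

print(rank_order_stability(context_a, context_b))
print(ipsative_stability(context_a, context_b))
```

Spearman correlation is a natural choice for Rank-order stability because it depends only on how personas are ordered on each value; the within-person profile correlation mirrors how Ipsative stability is typically computed in the psychology literature.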
| Search Query: ArXiv Query: search_query=au:"Peter Ford"&id_list=&start=0&max_results=3