Kavli Affiliate: Peter Ford | First 5 Authors: Grgur Kovač | Summary: Large language models (LLMs) are increasingly used in the creation of online content, creating feedback loops, as subsequent generations of models will be trained on this synthetic data. Such loops were shown to lead to distribution shifts – models […]
Continue reading: Recursive Training Loops in LLMs: How training data properties modulate distribution shift in generated data?
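To make the feedback loop in the summary concrete, here is a minimal toy sketch (not the paper's method, and all parameter values are assumptions) of a recursive training loop: each "generation" fits a simple model, here just a Gaussian, to the previous generation's synthetic samples and then samples its own training corpus from that fit. With finite samples, the fitted distribution drifts across generations, a simple analogue of the distribution shift the abstract describes.

```python
# Illustrative sketch of a recursive training loop (Gaussian toy model,
# not the paper's setup). Each generation "trains" on the previous
# generation's synthetic data and then generates the next corpus.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 500      # synthetic dataset size per generation (assumed)
n_generations = 10   # number of recursive training rounds (assumed)

data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # "real" data

for gen in range(n_generations):
    mu, sigma = data.mean(), data.std()       # "train" on the current corpus
    data = rng.normal(mu, sigma, n_samples)   # sample the next generation's corpus
    print(f"generation {gen + 1}: mean={mu:+.3f}, std={sigma:.3f}")
```

Running this typically shows the fitted standard deviation wandering and shrinking over generations, a minimal demonstration of how training on self-generated data can shift the generated distribution away from the original one.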