Kavli Affiliate: Peter Ford | First 5 Authors: Grgur Kovač, Jérémy Perez, Rémy Portelas, Peter Ford Dominey, Pierre-Yves Oudeyer | Summary: Large language models (LLMs) are increasingly contributing to the creation of content on the Internet. This creates a feedback loop, as subsequent generations of models will be trained on this generated, synthetic data. This […]
Continue reading: Recursive Training Loops in LLMs: How training data properties modulate distribution shift in generated data?
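As a rough illustration of the feedback loop described in the summary (not taken from the paper itself), the toy simulation below repeatedly re-fits a trivial "model" on data generated by its previous generation. The choice of a one-dimensional Gaussian, the sample size, and the number of generations are all assumptions made purely for illustration; the point is only that small estimation errors compound across generations, so the generated distribution drifts away from the original one.

# Illustrative toy sketch of a recursive training loop (assumed setup,
# not the paper's method): each generation is "trained" on data produced
# by the previous generation, and estimation errors accumulate.
import random
import statistics

random.seed(0)

def fit(samples):
    # "Train" a trivial model: estimate the mean and std of the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, std, n):
    # "Generate" synthetic data by sampling from the fitted model.
    return [random.gauss(mean, std) for _ in range(n)]

# Generation 0: data from the assumed true distribution N(0, 1).
data = generate(0.0, 1.0, 500)

for gen in range(10):
    mean, std = fit(data)             # train on the current data
    data = generate(mean, std, 500)   # the next generation trains on generated data
    print(f"generation {gen}: mean={mean:+.3f}, std={std:.3f}")

Running this prints the fitted mean and std of each generation; over many iterations they wander away from the true values of 0 and 1, which is the kind of distribution shift in generated data that the paper's title refers to.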