Omnigrok: Grokking Beyond Algorithmic Data

Kavli Affiliate: Max Tegmark

| First 5 Authors: Ziming Liu, Eric J. Michaud, Max Tegmark

| Summary:

Grokking, the unusual phenomenon in which models trained on algorithmic
datasets generalize long after overfitting the training data, has remained
elusive. We aim to understand grokking by analyzing the loss landscapes of
neural networks, identifying the mismatch between training and test losses as
the cause of grokking. We refer to this as the "LU mechanism" because training
and test losses, plotted against model weight norm, typically resemble "L" and
"U", respectively. This simple mechanism nicely explains many aspects of
grokking: its dependence on data size and weight decay, the emergence of
representations, etc. Guided by this intuitive picture, we are able to induce
grokking on tasks involving images, language and molecules. In the reverse
direction, we are able to eliminate grokking for algorithmic datasets. We
attribute the dramatic nature of grokking on algorithmic datasets to
representation learning.
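
To make the LU picture concrete, here is a minimal sketch (not the authors' code) of how one might probe train and test loss as a function of weight norm: train a network while constraining its global L2 weight norm to a fixed value, then sweep that value. The toy regression dataset, architecture, and hyperparameters below are all illustrative assumptions; with a small train set, the train loss typically traces an "L" (high at small norms, flat and low once the norm is large enough) while the test loss traces a "U" (best at an intermediate norm).

```python
# A minimal sketch, assuming a toy regression task and a small MLP;
# not the paper's experimental setup.
import torch
import torch.nn as nn

def make_model():
    torch.manual_seed(0)  # same initialization for every norm in the sweep
    return nn.Sequential(nn.Linear(10, 200), nn.ReLU(), nn.Linear(200, 1))

torch.manual_seed(1)
X = torch.randn(512, 10)
y = X[:, :5].sum(dim=1, keepdim=True) + 0.1 * torch.randn(512, 1)
X_tr, y_tr = X[:64], y[:64]   # tiny train set, to encourage overfitting
X_te, y_te = X[64:], y[64:]

loss_fn = nn.MSELoss()

@torch.no_grad()
def project_to_norm(model, target):
    """Rescale all parameters so the global L2 weight norm equals `target`."""
    norm = torch.sqrt(sum((p ** 2).sum() for p in model.parameters()))
    for p in model.parameters():
        p.mul_(target / norm)

def train_at_norm(target, steps=3000, lr=1e-3):
    """Train under a fixed-weight-norm constraint; return (train, test) loss."""
    model = make_model()
    project_to_norm(model, target)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X_tr), y_tr).backward()
        opt.step()
        project_to_norm(model, target)  # stay on the fixed-norm sphere
    with torch.no_grad():
        return (loss_fn(model(X_tr), y_tr).item(),
                loss_fn(model(X_te), y_te).item())

# Sweep the weight norm: train loss should look like an "L",
# test loss like a "U" with a "Goldilocks" minimum in between.
for target in [2.0, 4.0, 8.0, 16.0, 32.0]:
    tr, te = train_at_norm(target)
    print(f"norm={target:5.1f}  train={tr:8.4f}  test={te:8.4f}")
```

The projection step is a simple way to hold the weight norm fixed during training; the exact norm values and step counts are arbitrary and would need tuning for any real dataset.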

| Search Query: ArXiv Query: search_query=au:"Max Tegmark"&id_list=&start=0&max_results=10
