Researchers warn of ‘catastrophic overtraining’ in Large Language Models


A new academic study challenges a core assumption in the development of large language models (LLMs), warning that more pre-training data may not always lead to better models.

Researchers from leading computer science institutions, including Carnegie Mellon University, Stanford University, Harvard University, and Princeton University, have introduced the concept of “Catastrophic Overtraining,” showing that extended pre-training can actually make language models harder to fine-tune, ultimately degrading their performance.

The study, titled “Overtrained Language Models Are Harder to Fine-Tune,” is available on arXiv and was led by Jacob Mitchell Springer, with co-authors Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, and Aditi Raghunathan.

The law of diminishing returns

The research focuses on a surprising trend observed in modern LLM development: models are pre-trained on ever-expanding pools of data, licensed or scraped from the web and fed to the model as sequences of tokens, the numerical IDs that stand for words and word fragments. Yet increasing the number of tokens used during pre-training may reduce the model’s effectiveness when it is later fine-tuned for specific tasks.
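
For readers unfamiliar with the token framing, the minimal sketch below shows how text becomes the integer token IDs in which pre-training budgets like “2.3 trillion tokens” are counted. The tiktoken library and its “gpt2” encoding are used here purely as an illustrative stand-in; they are not tied to the study’s models.

```python
# Minimal illustration of tokenization: text becomes a sequence of integer token IDs.
# tiktoken and the "gpt2" encoding are example choices only; the study's models
# (e.g. OLMo) use their own tokenizers.
import tiktoken

enc = tiktoken.get_encoding("gpt2")
ids = enc.encode("Overtrained language models are harder to fine-tune.")
print(ids)              # a list of integer token IDs
print(len(ids))         # "trillions of tokens" counts entries like these
print(enc.decode(ids))  # round-trips back to the original text
```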

The team conducted a series of empirical evaluations and theoretical analyses to examine the effect of extended pre-training on model adaptability.

One of the key findings centers on AI2’s open-source OLMo-1B model.

The researchers compared two versions of this model: one pre-trained on 2.3 trillion tokens and another on 3 trillion tokens.

Despite being trained on roughly 30% more data, the 3T-token model performed worse after instruction tuning. Specifically, it showed more than 2% worse performance on several standard language model benchmarks than its 2.3T-token counterpart, and in some evaluations the degradation reached up to 3%.

This decline, the researchers argue, is not an anomaly but rather a consistent phenomenon they term “Catastrophic Overtraining.”

Understanding sensitivity and forgetting

The paper attributes this degradation to a systematic increase in what they call “progressive sensitivity.” As models undergo extended pre-training, their parameters become more sensitive to changes.

This increased fragility makes them more vulnerable to degradation during post-training modifications such as instruction tuning, fine-tuning for multimodal tasks, or even simple weight perturbations.

The researchers provide evidence that, beyond a certain point in pre-training, any modification—whether structured like fine-tuning or unstructured like adding Gaussian noise—leads to a greater loss of previously learned capabilities.

This sensitivity results in “forgetting,” where the model’s original strengths deteriorate as new training data is introduced.
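
To make the sensitivity idea concrete, here is a minimal sketch, not the authors’ code, of how such a probe can work in principle: perturb a model’s weights with Gaussian noise and measure how far the loss moves. The toy classifier below stands in for a pre-trained checkpoint; a more sensitive model would show a larger gap for the same noise scale.

```python
# A minimal sketch (not the paper's code) of probing weight sensitivity:
# add small Gaussian noise to the parameters and see how much the loss changes.
# The toy model and random data below are placeholders for a pre-trained LLM.
import copy

import torch
import torch.nn as nn


def perturbed_loss_gap(model, loss_fn, inputs, targets, sigma=0.01, seed=0):
    """Return loss(perturbed weights) - loss(original weights) for one noise draw."""
    torch.manual_seed(seed)
    with torch.no_grad():
        base_loss = loss_fn(model(inputs), targets).item()
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(sigma * torch.randn_like(p))  # unstructured perturbation
        noisy_loss = loss_fn(noisy(inputs), targets).item()
    return noisy_loss - base_loss


# Toy stand-in: applying the same probe to two checkpoints shows which one
# is more fragile under identical perturbations.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
x, y = torch.randn(32, 16), torch.randint(0, 4, (32,))
print(perturbed_loss_gap(model, nn.CrossEntropyLoss(), x, y, sigma=0.05))
```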

The study identifies an “inflection point” in pre-training, after which additional training leads to diminishing and even negative returns when it comes to fine-tuning outcomes. For the OLMo-1B model, this threshold emerged around 2.5 trillion tokens.

A wealth of evidence

The team’s analysis spans both real-world and controlled experimental settings. They tested the phenomenon across different tasks, including instruction tuning using datasets like Anthropic-HH and TULU, as well as multimodal fine-tuning using the LLaVA framework.

The results consistently showed that models pre-trained beyond certain token budgets underperformed after fine-tuning.

Furthermore, the researchers constructed a theoretical model using linear networks to better understand why overtraining leads to increased sensitivity.

Their analysis confirmed that progressive sensitivity and catastrophic overtraining are mathematically inevitable when pre-training continues indefinitely without proper constraints.

The ultimate takeaway? Model providers and trainers must make trade-offs

The findings challenge the widespread assumption that more pre-training data is always better. Instead, the paper suggests a nuanced trade-off: while longer pre-training improves the base model’s capabilities, it also increases the risk that fine-tuning will degrade those capabilities.

In practice, attempts to mitigate this effect—such as adjusting fine-tuning learning rates or adding regularization—may delay the onset of catastrophic overtraining but cannot fully eliminate it without sacrificing downstream performance.
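
As a rough illustration of what those mitigations look like in a typical fine-tuning setup, one might lower the learning rate and add mild regularization via Hugging Face’s TrainingArguments. The values below are placeholders for illustration, not the paper’s settings, and per the study this softens rather than removes the effect.

```python
# Illustrative fine-tuning hyperparameters only; values are placeholders, not the
# paper's configuration. A smaller learning rate and some regularization can delay,
# but not eliminate, the fragility of an overtrained base model.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ft-out",
    learning_rate=1e-5,          # conservative, to limit how far weights move
    num_train_epochs=2,
    per_device_train_batch_size=8,
    weight_decay=0.01,           # mild regularization
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
)
```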

Thus, for enterprises looking to leverage LLMs to improve business workflows and outcomes by fine-tuning an open-source model, the lesson from this research is that fine-tuning a smaller model pre-trained on a more modest amount of data is likely to yield a more reliable production model than starting from the most heavily pre-trained checkpoint available.

The authors acknowledge that further research is needed to understand the factors that influence when and how catastrophic overtraining occurs. Open questions include whether the pre-training optimizer, training objective, or data distribution can impact the severity of the phenomenon.

Implications for future LLM and AI model development

The study has significant implications for how organizations and researchers design and train large language models. As the field continues to pursue larger and more capable models, this research highlights the importance of balancing pre-training duration with post-training adaptability.

Additionally, the findings may influence how model developers think about resource allocation. Rather than focusing exclusively on increasing pre-training budgets, developers may need to reassess strategies to optimize downstream performance without incurring the negative effects of catastrophic overtraining.


