
Part 1: Into the Echo Chamber

As Artificial Intelligence increasingly trains on its own output, the tech world faces a unique predicament – a potential ‘model collapse’. How severe is this, and what might the implications be?

Generative Artificial Intelligence has fired our imagination ever since OpenAI launched ChatGPT on November 30, 2022. The Large Language Models (LLMs) that power it have so far been trained on human-generated content and data. But what happens when these models begin to train on content and data generated by other Generative AI models? Researchers found that this triggers a ‘model collapse’: a degenerative process affecting successive generations of learned generative models, in which generated data ends up polluting the training set of the next generation of models; being trained on polluted data, those models then mis-perceive reality.

Like begets like, or does it?

So far, the information used to educate the LLMs and other transformer models that bolster products like ChatGPT, Stable Diffusion and Midjourney originates from human-created sources such as books, articles, pictures and the like. Naturally, all these resources were developed without the assistance of AI. However, as a rising number of individuals use AI to generate and publish content, a pertinent query emerges: what is the outcome when AI-produced content floods the internet, and AI models begin to learn from it rather than from predominantly human-created content?

The development of LLMs is quite involved and requires masses of training data. Anecdotally, some powerful recent models are trained using scrapes of much of the Internet, then further fine-tuned with reinforcement learning from human feedback, which further boosts the effective dataset size. Yet while current LLMs, including GPT-4, were trained on predominantly human-generated text, this may change in the future. If most future models’ training data is also scraped from the web, then they will inevitably come to train on data produced by their predecessors.

A team of investigators from the UK and Canada has delved into this issue, and their discoveries suggest troubling implications for existing generative AI technology and its prospects: “We discover that the utilisation of model-generated content in training leads to irreparable flaws in the ensuing models.”

The scientists considered two special cases: early model collapse and late model collapse. In early model collapse, the model begins losing information about the tails of the distribution; in late model collapse, the model entangles different modes of the original distributions and converges to a distribution that bears little resemblance to the original one, often with very small variance. This process is different from catastrophic forgetting: rather than forgetting previously learned data, the models start misinterpreting what they believe to be real, reinforcing their own beliefs.
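To see the late-collapse behaviour in miniature, consider the following toy sketch (our illustration, not code from the study): each ‘generation’ is simply a Gaussian fitted by maximum likelihood to samples drawn from the previous generation’s fit. Run repeatedly, the fitted spread tends to shrink, so later generations concentrate on an ever-narrower slice of the original distribution.

```python
# Toy sketch of recursive training (illustrative only, not the paper's code):
# each generation is a Gaussian fitted to samples from the previous generation.
import numpy as np

rng = np.random.default_rng(0)

n_samples = 100        # training samples drawn per generation
n_generations = 500    # number of model "generations"

mu, sigma = 0.0, 1.0   # generation 0: the original, human-like data distribution

for g in range(1, n_generations + 1):
    data = rng.normal(mu, sigma, n_samples)   # data produced by generation g-1
    mu, sigma = data.mean(), data.std()       # generation g re-fits on that data
    if g % 100 == 0:
        print(f"generation {g:3d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```

On a typical run the standard deviation drifts steadily downwards while the mean takes a small random step each generation, which is the ‘very small variance, little resemblance to the original’ outcome the researchers describe.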

 

The Curse of Recursion

A paper titled “The Curse of Recursion: Training on Generated Data Makes Models Forget” explores this phenomenon. It is jointly authored by six researchers from the University of Oxford, University of Cambridge, Imperial College London, University of Toronto, Vector Institute, and University of Edinburgh.

The curse, as they call it, arises when AI models are trained on data generated by their previous iterations, leading to a loss of information about the original data distribution. This phenomenon isn’t merely an academic concern; it has real-world implications that extend to areas such as content moderation, translation services, customer support, and more. When an AI system that’s supposed to moderate content or answer customer queries is predominantly trained on AI-generated content, it can lead to a decline in performance, making it less reliable and effective.

The research team delves into this complex issue, demonstrating its occurrence in various types of models, including Variational Autoencoders, Gaussian Mixture Models, and Large Language Models (LLMs). The researchers found that the use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. They refer to this effect as “model collapse”.
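The disappearance of tails can be sketched just as simply with a discrete ‘vocabulary’ (again an illustration of ours, not the paper’s experiment): each generation is trained only on a finite sample produced by the previous one, and once a rare token fails to appear in that sample it can never be generated again.

```python
# Toy sketch of tail loss (illustrative only): a Zipf-like token distribution
# re-estimated each generation from samples drawn from the previous generation.
import numpy as np

rng = np.random.default_rng(0)

vocab_size = 1000
probs = 1.0 / np.arange(1, vocab_size + 1)   # generation 0: heavy-tailed token frequencies
probs /= probs.sum()

n_samples = 5000
for g in range(1, 31):
    corpus = rng.choice(vocab_size, size=n_samples, p=probs)  # data from generation g-1
    counts = np.bincount(corpus, minlength=vocab_size)
    probs = counts / counts.sum()                             # generation g = empirical frequencies
    if g % 10 == 0:
        print(f"generation {g:2d}: {np.count_nonzero(probs)} of {vocab_size} tokens survive")
```

Tokens lost in one generation are gone for good, so the distribution’s tail erodes monotonically: a crude analogue of rare facts, styles and phrasings dropping out of successive models.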

The AI Feedback Loop

Essentially, the AI feedback loop, or the cascade effect, refers to AI models being trained on content primarily generated by other AI models. This recursive procedure carries a high risk of diluting the diversity of information the models are learning from, eventually leading to what researchers ominously term ‘model collapse’. In other words, AI starts mirroring itself rather than the richness and complexity of human inputs.

One of the authors of the paper, Ross Anderson, Professor of Security Engineering at Cambridge University and the University of Edinburgh, wrote in a blog post:

“Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah. This will make it harder to train newer models by scraping the web, giving an advantage to firms which already did that, or which control access to human interfaces at scale. Indeed, we already see AI start-ups hammering the Internet Archive for training data.”

Understanding Model Collapse

Model collapse is a degenerative process that affects generations of learned generative models. In its simplest sense, it is a form of AI myopia, where the learning apparatus narrows down to a limited spectrum, thereby reducing the breadth and depth of its knowledge base. As models are trained on data produced by their predecessors, they begin to misinterpret reality, reinforcing their own beliefs and losing sight of the original data distribution. They risk turning into veritable echo chambers – producing repetitive, nonsensical, or even harmful outputs. Further, they may lose the ability to generate unique, creative, and contextually accurate responses. This process can lead to a significant loss of information, particularly about the tails of the distribution, and can eventually result in the model converging to a distribution that bears little resemblance to the original one.
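A back-of-the-envelope calculation (ours, for the simplest single-Gaussian case, rather than anything quoted from the paper) suggests why the variance dwindles. If each generation draws N samples from the previous generation’s fitted Gaussian and re-fits mean and variance by maximum likelihood, the (slightly downward-biased) variance estimate compounds across generations:

```latex
% Heuristic single-Gaussian case; notation is ours, not the paper's.
\mathbb{E}\!\left[\hat{\sigma}^2_{g} \mid \hat{\sigma}^2_{g-1}\right]
  = \frac{N-1}{N}\,\hat{\sigma}^2_{g-1}
\qquad\Longrightarrow\qquad
\mathbb{E}\!\left[\hat{\sigma}^2_{g}\right]
  = \left(\frac{N-1}{N}\right)^{\!g}\sigma^2_{0}
```

So the expected variance decays geometrically with the number of generations, while the estimated mean takes a small random step each time: sampling noise is baked into every generation’s ‘truth’, and the fitted distribution gradually contracts around a point that need not lie anywhere near the original mean.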

[To be concluded]

 
