Making Sense of How ChatGPT Works

Its creators are unable to fully fathom how the chatbot comes up with human-like responses and shows unmistakable signs of Artificial General Intelligence

ChatGPT-4, like its predecessors, was fed massive amounts of text and code and trained to use statistical patterns in that corpus to predict the words that should be generated in reply to a piece of input text. But the system, its developers found, seemed to do much more than make statistically plausible guesses.

OpenAI, the company behind ChatGPT, is grappling with a unique challenge. The artificial intelligence (AI) chatbot, built on a family of large language models (LLMs), has caused a massive quake in the world of technology and sent shockwaves across entire industry verticals. Yet the scientists who created it are unable to fully fathom how it comes up with human-like responses and shows unmistakable signs of Artificial General Intelligence (AGI). AGI would be a machine capable of understanding the world as well as any human, with the same capacity to learn how to carry out a huge range of tasks.

Opening up the dataset

OpenAI is now enlisting the help of everyone working in the AI space to unravel this mystery. The company is open-sourcing its dataset of explanations for all the neurons in GPT-2 XL, along with the code for generating and scoring those explanations, to encourage further research into producing better explanations.

The company has come out with a paper inviting researchers to explain how ChatGPT is showing such remarkable capabilities. The paper describes how automation can be used to scale an interpretability technique to all the neurons in a large language model. The technique seeks to explain what patterns in text cause a neuron to activate. The authors use GPT-4 to define and automatically measure a quantitative notion of interpretability, which they call an “explanation score”. They found over 1,000 neurons with explanations that scored at least 0.8, meaning that, according to GPT-4, those explanations account for most of each neuron’s top-activating behaviour.
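
As a minimal sketch (not the paper’s actual metric), an explanation can be scored by how well activations simulated from it track the neuron’s real activations; a simple correlation-based version in Python might look like this, with the neuron, its activations, and its explanation all hypothetical:

```python
import numpy as np

def explanation_score(real_acts, simulated_acts):
    """Toy explanation score: how well do activations simulated from a
    natural-language explanation track the neuron's real activations?
    Here a simple Pearson correlation stands in for the paper's metric."""
    real = np.asarray(real_acts, dtype=float)
    sim = np.asarray(simulated_acts, dtype=float)
    if real.std() == 0 or sim.std() == 0:
        return 0.0
    return float(np.corrcoef(real, sim)[0, 1])

# Hypothetical neuron whose explanation is "fires on movie-related tokens"
tokens    = ["the", "film", "was", "directed", "by", "Nolan", "in", "2010"]
real_acts = [0.0, 2.3, 0.1, 1.9, 0.0, 2.7, 0.0, 0.4]   # measured in the model
simulated = [0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 0.0]   # predicted from the explanation
print(explanation_score(real_acts, simulated))          # close to 1.0, i.e. a good explanation
```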

This experiment is a step forward in understanding how ChatGPT-4 works. The authors use the technique to analyse millions of neurons in a large language model, and the resulting scores give a quantitative measure of progress towards making the computations of a neural network understandable to humans. A neuron, in this context, is a computational unit in a neural network that processes information; in a large language model such as ChatGPT-4, the neurons process textual information. Each neuron in the model is responsible for detecting certain patterns or features in the input text and producing an output signal that contributes to the overall output of the model.
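
As a rough illustration (and not OpenAI’s code), a single artificial neuron can be pictured as a weighted sum of its inputs passed through a non-linearity; the learned weights determine which input patterns make it fire strongly. All the numbers below are made up for demonstration:

```python
import numpy as np

def gelu(x):
    # Smooth non-linearity of the kind used in GPT-style MLP layers
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def neuron_activation(inputs, weights, bias):
    """One neuron: a weighted sum of its inputs plus a bias,
    passed through a non-linearity."""
    return gelu(np.dot(weights, inputs) + bias)

# Hypothetical 4-dimensional representation of an input token
token_vector = np.array([0.2, -1.3, 0.7, 0.05])
weights      = np.array([1.5, 0.0, 2.0, -0.5])   # learned during training
bias         = -0.1

print(neuron_activation(token_vector, weights, bias))
```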

Yet to understand how the models work

The authors admit, almost hesitantly, that although language models have become more capable and more widely deployed, researchers do not fully understand how such models work. Recent work has made progress on understanding a small number of circuits and narrow behaviours, but fully understanding a language model will require analysing millions of neurons. The research paper applies automation to the problem of scaling an interpretability technique to all the neurons in a large language model. In short, the complexity of the model and the sheer number of neurons make it difficult for humans to understand how it works without the aid of interpretability techniques.

The exact number of neurons in ChatGPT-4 is not disclosed in the paper, but the authors mention that the technique they developed was applied to all the neurons in a large language model, which likely runs into the millions. The task of the neurons in ChatGPT-4 is to process textual information and generate an output signal that contributes to the overall output of the model. Specifically, each neuron responds to certain patterns or features in the input text, such as references to movies or characters, and produces an output signal that reflects the presence or absence of those patterns.
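
A hands-on way to see this in an open model (GPT-4’s internals are not public, so this sketch uses GPT-2 via the Hugging Face transformers library) is to attach a forward hook and record one MLP neuron’s activations on a piece of text. The layer and neuron indices below are arbitrary choices for illustration, not entries from OpenAI’s released dataset:

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

captured = {}

def hook(module, inputs, output):
    # Shape: (batch, sequence_length, 3072) -- MLP activations after the GELU
    captured["acts"] = output.detach()

LAYER, NEURON = 5, 131   # arbitrary indices, purely for illustration
model.h[LAYER].mlp.act.register_forward_hook(hook)

text = "The film was directed by Christopher Nolan."
encoded = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    model(**encoded)

# Per-token activation of one neuron: high values mean the token matches
# whatever pattern this particular neuron has learned to respond to.
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0].tolist())
for tok, act in zip(tokens, captured["acts"][0, :, NEURON]):
    print(f"{tok:>12s}  {act.item():+.3f}")
```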

The neurons in ChatGPT-4 are developed through a process called training, in which the model is fed large amounts of text data and learns to generate text that is similar to the training data. During training, the model updates the weights and biases of its neurons based on the error between its predicted output and the actual output. This process is repeated many times until the model’s output becomes sufficiently similar to the training data, with the weights and biases adjusted throughout to optimise the model’s ability to generate coherent and meaningful text.
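
A stripped-down sketch of that update loop, using plain gradient descent on a single toy neuron rather than the actual next-token training of GPT-4, looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: inputs x and target outputs y the neuron should learn to reproduce
x = rng.normal(size=(100, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = x @ true_w + 0.3

w = np.zeros(4)   # weights, updated during training
b = 0.0           # bias
lr = 0.05         # learning rate

for step in range(500):
    pred = x @ w + b                # neuron's predicted output
    err = pred - y                  # error vs. the actual output
    grad_w = x.T @ err / len(x)     # gradient of the mean squared error
    grad_b = err.mean()
    w -= lr * grad_w                # adjust the weights ...
    b -= lr * grad_b                # ... and the bias to reduce the error

print("learned weights:", np.round(w, 2), "bias:", round(b, 2))
```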

The systems are learning on their own

The authors also suggest that these systems demonstrate an ability to reason, plan, learn from experience, and transfer concepts from one modality to another, such as from text to imagery. “Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system,” the paper states.

But what is keeping scientists and policymakers around the world awake at night is just how intelligent AI is becoming. How much to trust the increasingly common feeling that a piece of software is intelligent has become a pressing, almost panic-inducing, question.
