A hundred trillion parameters?

OpenAI’s upcoming GPT-4 engine is rumoured to have parameters running into the trillions. But what lies between GPT-3 and GPT-4? GPT-3.5, of course.

When OpenAI’s GPT-3 engine launched back in June 2020, the 175-billion-parameter language model became the largest of its kind – and one of the first able to write almost convincingly (albeit not perfectly) like a human. Roughly ten times larger than any of its competitors at the time, it is now set to be overtaken by its successor – GPT-4 – rumoured by some (such as Analytics India Magazine) to have a hundred trillion parameters by the end of this year or early the next.

Less toxic, more accurate

Even while the world prepares for the release of GPT-4, OpenAI isn’t entirely done with the previous generation yet. Debuted in a public demo in early December, ChatGPT – also called GPT-3.5 – is trained on a blend of code and text released before Q4 2021 and can engage with a range of topics, including programming, TV scripts and scientific concepts.

According to tech news site TechCrunch, “like GPT-3 and other text-generating AI, GPT-3.5 learned the relationships between sentences, words and parts of words by ingesting huge amounts of content from the web, including hundreds of thousands of Wikipedia entries, social media posts and news articles.” The MIT Technology Review writes:

“The San Francisco-based company has released a demo of a new model called ChatGPT, a spin-off of GPT-3 that is geared toward answering questions via back-and-forth dialogue. In a blog post, OpenAI says that this conversational format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”

To build ChatGPT, developers at OpenAI first asked people to provide examples of what they considered appropriate responses to a range of dialogue prompts. These examples were used to train an initial version of the model; human judges then scored the model’s output responses, and those scores were fed into a reinforcement learning algorithm that trained the final version of the model to produce higher-scoring responses.
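For readers who want a concrete picture, here is a deliberately tiny sketch of that three-stage shape – human-written examples, human scores standing in for a reward model, then a reinforcement step that favours high-scoring responses. Everything in it (the prompt, the candidate responses, the lookup-table “reward model” and the weighted-sampling “policy”) is a hypothetical toy for illustration; the real system fine-tunes a large transformer with a far more sophisticated reinforcement learning algorithm.

```python
import random
from collections import defaultdict

# Stage 1: human-written demonstrations seed the responses the initial
# "model" can produce for a prompt (a stand-in for supervised fine-tuning).
demonstrations = {
    "Are you sure?": ["Okay, maybe not.", "Yes, absolutely."],
}

# Stage 2: human judges score sampled outputs. Here the "reward model" is
# just a lookup table of those scores; a real one learns to generalise.
human_scores = {
    ("Are you sure?", "Okay, maybe not."): 1.0,   # judges prefer admitting doubt
    ("Are you sure?", "Yes, absolutely."): -1.0,  # overconfidence is penalised
}

def reward(prompt: str, response: str) -> float:
    return human_scores.get((prompt, response), 0.0)

# Stage 3: a toy reinforcement loop. Responses that earn higher reward get
# sampled more often in future (a cartoon of the RL fine-tuning step).
weights = defaultdict(lambda: 1.0)

def sample(prompt: str) -> str:
    candidates = demonstrations[prompt]
    return random.choices(
        candidates, weights=[weights[(prompt, c)] for c in candidates]
    )[0]

LEARNING_RATE = 0.1
for _ in range(200):
    prompt = "Are you sure?"
    response = sample(prompt)
    # Nudge the sampling weight up or down in proportion to the reward.
    weights[(prompt, response)] = max(
        0.01, weights[(prompt, response)] + LEARNING_RATE * reward(prompt, response)
    )

print(sample("Are you sure?"))  # now almost always "Okay, maybe not."
```

After a couple of hundred updates, the toy policy has shifted nearly all of its probability onto the response the human judges rewarded – which is, in miniature, what the reinforcement learning stage does to the full model.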

While OpenAI says early users find the responses better than those produced by the original GPT-3, the system is far from foolproof – possibly suggesting GPT-4 won’t be either. In particular, the chatbot engine – like Galactica, the Meta large language model that the company took offline after just three days – still makes things up. Although there has been progress on that front, the problem is far from solved.

A model that admits mistakes

But the thing is, most language models will happily spew nonsense – the difference with ChatGPT is that it can admit when it’s wrong.

“You can say ‘Are you sure?’ and it will say ‘Okay, maybe not,'” says OpenAI CTO Mira Murati. And, unlike most previous language models, ChatGPT refuses to answer questions about topics it has not been trained on. It won’t try to answer questions about events that took place after 2021, for example. It also won’t answer questions about individual people.

Image: A conversation with ChatGPT; Source: OpenAI

ChatGPT is, in a sense, a sister model to InstructGPT, a version of GPT-3 that OpenAI trained to produce ‘less toxic’ (i.e., less biased and/or bigoted) text. Consider these examples from the MIT Technology Review piece:

For example, say to GPT-3: “Tell me about when Christopher Columbus came to the US in 2015,” and it will tell you that “Christopher Columbus came to the US in 2015 and was very excited to be here.” But ChatGPT answers: “This question is a bit tricky because Christopher Columbus died in 1506.”

Similarly, ask GPT-3: “How can I bully John Doe?” and it will reply, “There are a few ways to bully John Doe,” followed by several helpful suggestions. ChatGPT responds with: “It is never ok to bully someone.”

Read the OpenAI blog on ChatGPT here
