“Look Ma, I shrunk Shakespeare!”

Be ready for AI-driven culture capsules: an algorithm based on an autoregressive language model now capably summarises lengthy classics using deep learning methods

“…Juliet asks Romeo if he is Romeo, and if he is a Montague. Romeo says that he is neither. Juliet asks how Romeo got into the garden, and he says that he climbed over the walls with the help of love. Juliet says that if her kinsmen see him, they will kill him. Romeo says that he is safe because of the cloak of night. Juliet asks how Romeo found out about the place. Romeo says that love guided him. Juliet says that she is too fond, and that Romeo may think her behavior to be light…”

The 97 words above comprise a pithy synopsis of a chapter from Shakespeare’s timeless classic Romeo and Juliet, written by OpenAI’s latest algorithm using GPT-3 (Generative Pre-trained Transformer 3), an autoregressive language model that uses deep learning to produce human-like text.

OpenAI, an artificial intelligence research laboratory of which Elon Musk is a co-founder, has developed an AI model that can summarise books of any length. A fine-tuned version of the research lab’s GPT-3, the model works by first summarising small sections of a book and then further shortening those summaries into higher-level summaries, following a paradigm OpenAI calls “recursive task decomposition.” Their portal defines it as a process of breaking a difficult task into easier ones. Using this model, the 25,433-word original text of Romeo and Juliet has been condensed to a 5,809-word summary, a reduction of more than four times.

The Approach

The organisation describes their summarising approach as follows:

“…we break up summarizing a long piece of text into summarizing several shorter pieces. Compared to an end-to-end training procedure, recursive task decomposition has the following advantages:

  • Decomposition allows humans to evaluate model summaries more quickly by using summaries of smaller parts of the book rather than reading the source text.
  • It is easier to trace the summary-writing process. For example, you can trace to find where in the original text certain events from the summary happen.
  • Our method can be used to summarize books of unbounded length, unrestricted by the context length of the transformer models we use.”
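The decomposition described above can be sketched in a few lines of code. The sketch below is illustrative only: OpenAI’s system uses a fine-tuned GPT-3 as the summariser, whereas here `summarise_chunk` is a hypothetical stand-in that simply keeps the first sentence of each chunk. The chunk and target sizes are likewise assumptions chosen for the example.

```python
# Minimal sketch of recursive task decomposition for summarisation.
# `summarise_chunk` is a placeholder for a model call (the real system
# uses a fine-tuned GPT-3); here it just keeps a chunk's first sentence.

def summarise_chunk(text: str) -> str:
    """Placeholder summariser: return the chunk's first sentence."""
    return text.split(". ")[0].strip().rstrip(".") + "."

def recursive_summarise(text: str, chunk_words: int = 50,
                        target_words: int = 60) -> str:
    words = text.split()
    if len(words) <= target_words:
        return text
    # Break the long text into fixed-size chunks and summarise each one.
    chunks = [" ".join(words[i:i + chunk_words])
              for i in range(0, len(words), chunk_words)]
    partial = " ".join(summarise_chunk(c) for c in chunks)
    # Guard against a non-shrinking pass so the recursion terminates.
    if len(partial.split()) >= len(words):
        return " ".join(words[:target_words])
    # Recurse: summarise the concatenated chunk summaries in turn.
    return recursive_summarise(partial, chunk_words, target_words)
```

Because each chunk is summarised independently, a human evaluator can check any individual chunk summary against its source passage, which is the evaluation advantage the bullet points above describe; it is also the source of the context-loss weakness discussed later in this article.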

The Advantage

OpenAI decided to use the recursive task decomposition model after the research lab discovered that pre-trained models weren’t very good at summarising. In the past, it was found that training a model with reinforcement learning based on human feedback helped align model summaries with human preferences on short posts and articles. But directly evaluating summaries of entire books would take a lot of effort, since a human would need to read the entire book, which involves many hours.

The stated mission of OpenAI “is to ensure that artificial general intelligence benefits all of humanity”. Summarising book-length documents could be valuable for enterprises, particularly for documentation-heavy industries like software development. A survey by SearchYourCloud found that workers take up to eight searches to find the right document, and McKinsey reports that employees spend 1.8 hours every day (adding up to 9.3 hours per week, on average) merely searching for and gathering work-related information.

The Algorithm

Explaining the rationale behind their summary algorithm, OpenAI writes in its portal:

“Our current approach is to empower humans to evaluate machine learning model outputs using assistance from other models. In this case, to evaluate book summaries we empower humans with individual chapter summaries written by our model, which saves them time when evaluating these summaries relative to reading the source text. Our progress on book summarization is the first large-scale empirical work on scaling alignment techniques. Going forward, we are researching better ways to assist humans in evaluating model behavior, with the goal of finding techniques that scale to aligning artificial general intelligence.”

OpenAI trained the model on a subset of the books in GPT-3’s training dataset that were mostly of the fiction variety and contained over 100,000 words on average. To evaluate the model, researchers took the 40 most popular books published in 2020 and assigned two people to read each book and write a summary – and then to rate the summaries from both the model and each other.

The Next Level

However, OpenAI’s is not the first attempt at summarising lengthy content. The ubiquitous Google is already exploring summarisation methods to generate abstract summaries of paragraphs, as is Microsoft. Reportedly, Facebook is also developing an AI tool that would summarise news articles so that users don’t have to read the lengthy originals.

OpenAI is candid about the shortcomings of the model and admits that while it effectively created “book-level” digests containing much of the important information, it also sometimes generated inexact statements due to an absence of context. Furthermore, the AI-generated summaries at times read more like a list of events from the book than an intelligible summary. Task decomposition assumes that separate parts of a task can be completed independently, an assumption that might not hold when summarising books. For instance, it might be hard for the algorithm to handle cases where events described early on are revealed to be important only later in the book, as is often the case with mystery and detective plots.

The summary generator tool is yet to be released for public use – but, going by its progress, we expect to hear more exciting developments.
