
PEER: a Collaborative Language Model

Most existing language models are built and trained to produce text from an initial input. They generate fluent output, but they are restricted to left-to-right language modeling: predicting the next word given what came before. They are not trained for the tasks that make up the rest of the writing process, such as updating or editing existing texts (including texts they generated themselves). Conventional models are also hard to control and cannot explain their actions. Consequently, they fall short for collaborative writing.

To address these shortcomings, Meta AI Research has introduced PEER, a language model designed to support collaborative writing by breaking the task into smaller subtasks. PEER, short for Plan, Edit, Explain, and Repeat, is trained on the entire writing process, not just the final output. It can plan drafts, offer suggestions, make edits, and explain its editing actions, giving it several advantages over standard left-to-right language models.

Training a language model on multiple subtasks like PEER's requires a dataset with an accessible history of edits, because the ability to suggest and explain edits is what sets PEER apart from other models. However, edit histories are hard to obtain via standard web crawls for most data sources, which leads to data scarcity.

To overcome this scarcity, Meta trained the model on Wikipedia edits, which provide a complete editing history with comments and citations at large scale, and then adapted it to other domains for which edit histories are unavailable.

While Wikipedia solves the data scarcity problem, it introduces others: edit comments are noisy, citations are often missing, and the data is highly specific to Wikipedia's textual content and editing conventions. To address these issues, Meta trained multiple PEER instances rather than just one, using them to generate synthetic data as a proxy for the missing pieces and to replace low-quality parts of the existing data, as sketched below.
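As a rough illustration of what this backfilling could look like, here is a minimal sketch. The helpers `is_low_quality`, `infill_plan`, and `infill_documents` are hypothetical stand-ins for the auxiliary PEER instances and filters, not Meta's actual code.

```python
# Hedged sketch: complete a Wikipedia edit with synthetic data when pieces
# are missing or low quality. All helpers below are hypothetical stand-ins.

def is_low_quality(comment):
    """Hypothetical filter, e.g. discard missing or one-word edit comments."""
    return comment is None or len(comment.split()) < 2

def infill_plan(x_t, x_next):
    """Hypothetical auxiliary PEER instance that writes a plan for a given edit."""
    return "synthetic plan describing the change from x_t to x_next"

def infill_documents(x_t, x_next):
    """Hypothetical auxiliary PEER instance that produces supporting documents."""
    return ["synthetic supporting document"]

def complete_example(x_t, x_next, plan=None, documents=None):
    """Return a fully specified (plan, x_t, documents, x_next) training example."""
    if is_low_quality(plan):                      # noisy or missing edit comment
        plan = infill_plan(x_t, x_next)
    if not documents:                             # edit cites no sources
        documents = infill_documents(x_t, x_next)
    return {"plan": plan, "x_t": x_t, "documents": documents, "x_next": x_next}
```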


Framework

The model’s core idea is to treat text editing as an iterative process that repeats until the desired output is reached. Suppose you have a text sequence xₜ and a set of documents Dₜ containing the necessary background information. Based on xₜ and Dₜ, the model formulates a plan pₜ that gives instructions for the next modification, such as “fix spelling errors” or “simplify.” It then edits the text according to the plan, producing an updated version xₜ₊₁. Finally, the model explains the intent behind the edit with a textual explanation eₜ, based on (xₜ, xₜ₊₁, Dₜ).

The entire process of planning, editing, and explaining is repeated, producing a sequence xₜ, xₜ₊₁, xₜ₊₂, and so on, until some xₙ is the same as xₙ₋₁, meaning there are no more edits to make.
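A minimal sketch of this plan-edit-explain loop follows. The three callables passed in stand in for a trained PEER-style model and are assumptions for the sketch, not Meta's actual API.

```python
# Minimal sketch of the plan-edit-explain loop described above.
# generate_plan, apply_edit and explain_edit are hypothetical model calls.

def peer_write(x0, documents, generate_plan, apply_edit, explain_edit, max_steps=10):
    """Iteratively refine x0 until the model stops changing the text."""
    x_t, history = x0, []
    for _ in range(max_steps):
        plan = generate_plan(x_t, documents)                # p_t, e.g. "fix spelling errors"
        x_next = apply_edit(x_t, plan, documents)           # x_{t+1}: the edit realizing the plan
        explanation = explain_edit(x_t, x_next, documents)  # e_t: intent behind the edit
        history.append((plan, explanation))
        if x_next == x_t:                                   # x_n equals x_{n-1}: no more edits
            break
        x_t = x_next
    return x_t, history
```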

Each of these steps makes PEER well suited to collaborative writing, where dividing the process into phases improves the quality and usefulness of the output. The planning and explanation phases may look similar, since the model explains what it (or you) planned; the difference is when they happen. Planning occurs before the model edits, while explanations are provided afterward.

Besides editing, PEER also lets you write texts from scratch by starting from an empty sequence x₀.
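Reusing the hypothetical `peer_write` sketch above, writing from scratch simply means starting the same loop with an empty string; the lambdas here are trivial stand-ins for real model calls.

```python
# Writing from scratch: start the loop above with an empty sequence x0 = "".
draft, history = peer_write(
    x0="",
    documents=["background notes on PEER"],
    generate_plan=lambda x, d: "add an introductory sentence",
    apply_edit=lambda x, p, d: x or "PEER is a collaborative language model.",
    explain_edit=lambda x, x_next, d: "started the draft from the documents",
)
```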

Meta claims that several mechanisms it has implemented enhance the quality and diversity of the plans, edits, and documents PEER generates.

For quality

PEER prepends control tokens to the output sequences and uses these tokens to guide the model’s generations; a minimal sketch follows the list below. Examples include:

  • Instruction: controls whether the generated plan is phrased as an instruction, for example beginning with a verb rather than a noun. 
  • Length: serves as a proxy for the level of detail in the generated plan or explanation.
  • Word overlap: limits overlap between the words of the plan and the words of the edit, so that plans do not trivially spell out the edit verbatim.
  • Number of words: controls the difference in the number of words between xₜ₊₁ and xₜ.
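As a rough illustration of how such control tokens might be prepended to a target sequence, here is a minimal sketch; the token names, the format, and the `add_control_tokens` helper are assumptions for the example, not PEER's actual scheme.

```python
# Illustrative only: prepend placeholder control tokens to a plan before it
# is used as a target sequence. Token names and format are assumptions.

def add_control_tokens(plan, is_instruction, n_words_changed, overlaps_edit):
    """Prepend control tokens to a target sequence (placeholder token format)."""
    controls = [
        f"<instruction:{str(is_instruction).lower()}>",  # plan phrased as an instruction?
        f"<length:{len(plan.split())}>",                 # rough proxy for level of detail
        f"<overlap:{str(overlaps_edit).lower()}>",       # does the plan copy words from the edit?
        f"<words:{n_words_changed:+d}>",                 # words added (+) or removed (-) between x_t and x_{t+1}
    ]
    return " ".join(controls) + " " + plan

# Example:
# add_control_tokens("fix spelling errors", True, -2, False)
# -> "<instruction:true> <length:3> <overlap:false> <words:-2> fix spelling errors"
```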

For diversity

To evaluate how well PEER generalizes across domains, Meta trained the model to perform edits on provided documents from multiple domains, especially ones without edit histories. For this purpose, Meta also collected naturally occurring edits to texts from Wikipedia, Wikinews, and StackExchange subforums.
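For illustration, a naturally occurring edit from one of these domains might be stored as a record like the one below; the field names and values are assumptions for this sketch, not the actual dataset schema.

```python
# Illustrative record for a naturally occurring edit; fields are assumptions.
natural_edit = {
    "domain": "wikinews",                        # wikipedia, wikinews, or a StackExchange subforum
    "x_t": "The event happend on Tuesday.",      # text before the edit
    "x_next": "The event happened on Tuesday.",  # text after the edit
    "plan": "fix spelling",                      # edit comment; may be missing outside Wikipedia
    "documents": [],                             # supporting sources, often unavailable
}
```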

Limitations 

A significant drawback is that the model generated many false claims not backed by the provided documents, and people generally rely on such outputs without explicit fact-checking. In addition, because PEER represents each edit by rewriting the entire paragraph, it cannot process lengthy documents in a time-efficient manner.

Moreover, the evaluation is limited: it compares PEER with similar models only on a small subset of data from a few domains. The collaborative potential of PEER is also explored only minimally; more extensive research on human-AI interaction, though challenging, would be needed to assess it properly.

Nevertheless, the model is a significant step forward for collaborative writing. Future work can explore better ways to evaluate texts with human assistance and improve PEER’s efficiency in processing entire documents.


