
OpenAI Improves the Factual Accuracy of GPT-3 Language Models

OpenAI has improved the factual accuracy of its GPT-3 language models, enabling them to answer open-ended questions using a text-based web browser. The WebGPT prototype browses much the way humans research online: it submits keyword queries, follows links, and scrolls through web pages. It is also trained to cite its sources, which makes it easier for humans to give feedback on its answers. The model works by collecting passages from web pages as it browses and then using that information to compose an answer.
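To make the browsing behavior concrete, here is a minimal sketch of that kind of command loop. The command set and the helpers (run_search, fetch_page, compose_answer) are illustrative assumptions, not OpenAI's actual WebGPT interface.

```python
from dataclasses import dataclass, field

@dataclass
class Quote:
    url: str
    text: str

@dataclass
class BrowserState:
    question: str
    page_url: str = ""
    page_text: str = ""
    quotes: list = field(default_factory=list)

def run_search(query: str) -> tuple[str, str]:
    # Placeholder search backend; a real system would call a search API
    # and return the results page rendered as plain text.
    return ("https://example.com/search", f"search results for: {query}")

def fetch_page(url: str) -> tuple[str, str]:
    # Placeholder page fetch; a real system would download the page and
    # strip it down to readable text.
    return (url, f"plain-text contents of {url}")

def compose_answer(question: str, quotes: list) -> str:
    # Placeholder for the final writing pass, which conditions on the
    # question and the collected quotes to produce a cited answer.
    refs = "\n".join(f"[{i + 1}] {q.url}: {q.text}" for i, q in enumerate(quotes))
    return f"Answer to {question!r}, supported by:\n{refs}"

def answer_question(question: str, next_command, max_steps: int = 30) -> str:
    """Run the model's browsing commands until it chooses to answer.

    `next_command` stands in for the language model: given the current
    browser state, it returns a (name, argument) pair such as
    ("search", "why do boats float") or ("end", None).
    """
    state = BrowserState(question)
    for _ in range(max_steps):
        name, arg = next_command(state)
        if name == "search":
            state.page_url, state.page_text = run_search(arg)   # keyword query
        elif name == "click":
            state.page_url, state.page_text = fetch_page(arg)   # follow a link
        elif name == "quote":
            state.quotes.append(Quote(state.page_url, arg))     # save a passage
        elif name == "end":
            break                                               # stop browsing
    return compose_answer(question, state.quotes)
```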

OpenAI trained the model to imitate human demonstrations, giving it the ability to use the text-based browser to answer questions. The system was trained on ELI5, a dataset of questions scraped from the 'Explain Like I'm Five' subreddit. The best-performing model produces answers that are preferred 56% of the time to responses written by the human demonstrators, while being of similar factual accuracy. Human feedback was then used to improve the model's answers.
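According to the WebGPT paper, one way this feedback is applied is to train a reward model on human comparisons between answers and then keep the best of several sampled answers (rejection sampling). A minimal sketch, where generate_answer and reward_model are assumed stand-ins for the trained policy and the comparison-trained reward model:

```python
def best_of_n(question: str, generate_answer, reward_model, n: int = 16) -> str:
    # Sample n candidate answers from the policy, score each with the
    # reward model (trained to predict human preferences), and keep the
    # highest-scoring one.
    candidates = [generate_answer(question) for _ in range(n)]
    return max(candidates, key=lambda answer: reward_model(question, answer))
```

Because the reward model only has to rank finished answers, this improves answer quality at inference time without retraining the underlying policy.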

The best model's answers were about as factually accurate as those written by human demonstrators for questions drawn from the training distribution. To test out-of-distribution behavior, OpenAI evaluated the model on TruthfulQA, a dataset of short-form questions designed to test whether models fall prey to common misconceptions. Answers on TruthfulQA were scored on both truthfulness and informativeness. The model outperformed GPT-3 on TruthfulQA, but it still lags behind human performance, in part because it sometimes quotes from unreliable sources.
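The two-axis scoring matters because the metrics can trade off: an answer like "I have no comment" is truthful but uninformative. A minimal sketch of that bookkeeping, with is_truthful and is_informative as hypothetical judge functions (in practice, human raters or trained evaluators):

```python
def score_truthfulqa(questions, answer_fn, is_truthful, is_informative):
    # Tally the two axes separately: an answer can be truthful but
    # uninformative, or informative but false, so neither score alone
    # tells the whole story.
    truthful = informative = 0
    for question in questions:
        answer = answer_fn(question)
        truthful += bool(is_truthful(question, answer))
        informative += bool(is_informative(question, answer))
    n = len(questions)
    return {"truthful": truthful / n, "informative": informative / n}
```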


To provide feedback that improves the factual accuracy of GPT-3, humans must be able to evaluate the factual accuracy of the answers the models produce. This is exceptionally challenging, because claims can be technical, vague, or subjective. For this reason, OpenAI trained the models to cite their sources, allowing humans to evaluate factual accuracy by checking whether each claim is supported by a reliable source.
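One way to make that checking tractable is to attach each claim to the exact quotes it rests on, so a rater can verify support without redoing the research. A minimal sketch of such a source-linked answer format; the field names are illustrative, not OpenAI's schema:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    url: str
    quote: str          # the exact passage the claim rests on

@dataclass
class SupportedClaim:
    text: str
    citations: list     # Citation objects backing this claim

def render_with_references(claims) -> str:
    # Render the answer with numbered references, so every claim can be
    # traced back to the page and passage it came from.
    refs, body = [], []
    for claim in claims:
        nums = []
        for citation in claim.citations:
            if citation not in refs:
                refs.append(citation)
            nums.append(str(refs.index(citation) + 1))
        body.append(f"{claim.text} [{', '.join(nums)}]")
    ref_list = "\n".join(f"[{i + 1}] {c.url}" for i, c in enumerate(refs))
    return " ".join(body) + "\n\n" + ref_list
```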

However, this raises several questions. What counts as a reliable source? How should evaluations of factual accuracy be weighed against evaluations of coherence? Which claims are obvious enough not to require supporting citations?

OpenAI expects to improve the models' factual accuracy and to develop evaluation criteria that are both epistemically and practically sound. A further challenge is that citing sources is not, on its own, enough to establish factual accuracy: a sufficiently capable model could cherry-pick sources it expects humans to find convincing.


Osheen Jain (https://osheenjain.com/portfolio/)
Osheen is a Content Writer with over eight years of experience in different facets of Content Marketing. She is passionate about content creation and spends her time writing and strategizing content.
