The BBC, the UK’s largest news organization, has joined CNN and other top media outlets in blocking OpenAI’s data scraping. As it evaluates the use of generative AI, the BBC laid out principles it plans to follow, covering research and production of journalism, personalized experiences, and archiving.
Rhodri Talfan Davies, BBC Director of Nations, stated, “We believe Gen AI could provide a significant opportunity for the BBC to deepen and amplify our mission, enabling us to deliver more value to our audiences and to society.”
The BBC’s three guiding principles are that it will always act in the public’s best interest, prioritize talent and creativity by respecting the rights of artists, and be open and transparent about AI-generated output.
The BBC said it will work with tech companies, other media organizations, and regulators to develop generative AI safely and to maintain trust in the news industry.
In August, several top news publications, including CNN, The New York Times, and the Australian Broadcasting Corporation (ABC), blocked Microsoft-backed OpenAI from accessing their content to train its AI models.
OpenAI’s web crawler, GPTBot, may scan web pages to collect data that improves its AI models. The BBC has blocked the crawlers of both OpenAI and Common Crawl, meaning neither organization can use the publication’s content to train its AI models.
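The article doesn’t specify how the block is implemented, but publishers typically enforce it through the Robots Exclusion Protocol, by listing crawlers in the site’s robots.txt file. A minimal sketch, assuming the publicly documented user-agent tokens GPTBot (OpenAI) and CCBot (Common Crawl):

```
# Hypothetical robots.txt entries opting a site out of AI training crawlers

User-agent: GPTBot   # OpenAI's crawler
Disallow: /          # "/" disallows the entire site

User-agent: CCBot    # Common Crawl's crawler
Disallow: /
```

Note that robots.txt is an honor-system convention rather than a technical barrier: it relies on crawlers such as GPTBot and CCBot voluntarily checking the file and respecting its directives before fetching pages.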
Davies emphasized that the move safeguards the interests of license fee payers and that unauthorized use of BBC data to train AI models isn’t in the public interest.
Additionally, the BBC is examining the broader implications of Gen AI for the media industry, including the proliferation of misinformation and potential effects on website traffic patterns. Davies stated, “In the next few months, we will start a number of projects that explore the use of Gen AI in both what we make and how we work – taking a targeted approach in order to better understand both the opportunities and risks.”