OpenAI has announced the establishment of a new team dedicated to addressing and minimizing the “catastrophic risks” associated with artificial intelligence.
In an update posted Thursday, OpenAI said the preparedness team's mission is to monitor, assess, predict, and safeguard against significant challenges arising from AI, including threats such as those related to nuclear technology.
The team will also work to reduce potential dangers stemming from "chemical, biological, and radiological threats," as well as to prevent "autonomous replication," in which an AI independently creates copies of itself. It will additionally address risks related to AI's capacity to deceive humans and tackle cybersecurity threats.
Aleksander Madry, currently on leave from his role as director of MIT's Center for Deployable Machine Learning, has been appointed to lead the preparedness team.
OpenAI has underscored that the team will also be responsible for crafting and maintaining a "risk-informed development policy," which will outline OpenAI's approach to evaluating and overseeing AI models.