In January, AI company OpenAI unveiled a tool meant to recognise whether a piece of content was produced with generative AI tools like ChatGPT. It could have been a massive help, or at the very least kept professors and teachers sane.
Half a year later, that tool is gone, having failed to fulfil its purpose. OpenAI, the company behind ChatGPT, quietly discontinued its AI detection tool, AI Classifier, last week because of “its low rate of accuracy,” the company said.
The explanation didn’t come in a fresh announcement. Rather, it was added as a note to the blog post that had originally introduced the tool. OpenAI no longer links to the classifier.
“We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” OpenAI said.
The near-daily arrival of new tools built on increasingly capable AI has spawned a cottage industry of AI detectors.
When OpenAI announced the AI Classifier, it said the tool could distinguish text written by a human from text written by an AI. Even then, OpenAI described the classifier as “not fully reliable,” noting that in evaluations on a challenge set of English texts it correctly labeled only 26% of AI-written text as “likely AI-written” while mistakenly flagging human-written text as AI-written 9% of the time.
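To make those two figures concrete, here is a minimal sketch, using hypothetical data rather than OpenAI’s actual evaluation, of how a 26% true-positive rate and a 9% false-positive rate would be computed on a labeled challenge set. The `detection_rates` helper and the toy sample counts are assumptions for illustration only.

```python
# Minimal sketch (hypothetical data): how the reported 26% / 9% figures map
# onto standard detection metrics — the share of AI-written samples flagged
# as "likely AI-written" (true-positive rate) and the share of human-written
# samples incorrectly flagged (false-positive rate).

def detection_rates(labels, predictions):
    """labels: True = AI-written; predictions: True = flagged as AI-written."""
    ai_total = sum(1 for l in labels if l)
    human_total = sum(1 for l in labels if not l)
    true_pos = sum(1 for l, p in zip(labels, predictions) if l and p)
    false_pos = sum(1 for l, p in zip(labels, predictions) if not l and p)
    return true_pos / ai_total, false_pos / human_total

# Toy challenge set: 100 AI-written and 100 human-written texts, with the
# classifier catching 26 of the former and wrongly flagging 9 of the latter.
labels = [True] * 100 + [False] * 100
predictions = [True] * 26 + [False] * 74 + [True] * 9 + [False] * 91

tpr, fpr = detection_rates(labels, predictions)
print(f"True-positive rate: {tpr:.0%}, false-positive rate: {fpr:.0%}")
# -> True-positive rate: 26%, false-positive rate: 9%
```

In other words, on such a set the classifier would miss roughly three out of four AI-written texts while still accusing about one in eleven human writers, which is why OpenAI itself called it “not fully reliable.”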