Thursday, January 15, 2026

OpenAI Shuts Down Its AI Detection Tool AI Classifier

A tool that could recognise whether a piece of content was produced using generative AI tools like ChatGPT was unveiled by AI company OpenAI in January. It could have been a massive help, or at the very least kept professors and teachers sane.

That tool is no longer available, having failed to fulfill its purpose half a year later. OpenAI, the company behind ChatGPT, quietly discontinued its AI detection tool, AI Classifier, last week because of “its low rate of accuracy,” the company said.

The justification wasn’t included in a fresh announcement. Rather, it was added as a note to the blog post that had originally introduced the tool. OpenAI no longer links to the classifier.

Read More: Vimeo Introduces AI-Powered Script Generator And Text-Based Video Editor

“We have committed to developing and deploying methods that enable consumers to determine whether audio or visual content is AI-generated, and we are trying to incorporate feedback and investigate more efficient provenance ways for text,” said OpenAI.

The near-daily arrival of new tools built on increasingly capable AI has spawned a cottage industry of AI detectors.

When OpenAI announced the release of its AI Classifier, it said the tool could tell text written by a human apart from text written by an AI. Even then, OpenAI described the classifier as “not fully reliable,” noting that in evaluations on a challenge set of English texts it correctly labeled only 26% of AI-written text as “likely AI-written,” while mistakenly classifying human-written text as AI-written 9% of the time.
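
To make those figures concrete: the 26% and 9% are, respectively, a detector’s true-positive and false-positive rates. A minimal illustrative sketch (not OpenAI’s code; the function and data here are hypothetical) of computing both from labeled predictions:

```python
def detector_rates(labels, predictions):
    """Compute true-positive and false-positive rates for an AI-text detector.

    labels: the actual origin of each text, "ai" or "human"
    predictions: the detector's verdict for each text, "ai" or "human"
    """
    tp = sum(1 for l, p in zip(labels, predictions) if l == "ai" and p == "ai")
    fp = sum(1 for l, p in zip(labels, predictions) if l == "human" and p == "ai")
    ai_total = sum(1 for l in labels if l == "ai")
    human_total = sum(1 for l in labels if l == "human")
    return tp / ai_total, fp / human_total

# A toy evaluation set mirroring the reported numbers:
# 13 of 50 AI texts flagged (26%), 9 of 100 human texts wrongly flagged (9%).
labels = ["ai"] * 50 + ["human"] * 100
predictions = ["ai"] * 13 + ["human"] * 37 + ["ai"] * 9 + ["human"] * 91
tpr, fpr = detector_rates(labels, predictions)
print(tpr, fpr)  # 0.26 0.09
```

A detector that flags barely a quarter of AI-written text while misfiring on nearly one in ten human texts illustrates why OpenAI judged the accuracy too low to keep the tool live.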


OpenAI’s Head of Trust and Safety Dave Willner Resigns

Image Credits: Deccan Herald

Dave Willner, the head of trust and safety at OpenAI, has resigned from his post to spend more time with his family. In a LinkedIn post, Willner said he had some “bittersweet news” to share: he will no longer be working at OpenAI and will instead transition into an advisory role.

He expressed pride in his team and how far it has come since he joined, and said that he and his wife, who is employed with OpenAI, had made it quite clear that nothing comes before family.

Willner noted that with the additional workload following the ChatGPT launch, he has had trouble holding up his end of that commitment. As for his next move, he said he will advise early-stage businesses, which will let him spend more time with his family.


When discussing his desire to put his family first, Willner said, “OpenAI is going through a high-intensity phase in its development and so are our kids. I want to prioritize my kids over work. That tension can be felt by anyone who has small children and a demanding career, and for me, it became clear over the past few months that I would have to choose between the two.”

Willner is not the only technologist who values work-life balance over a big salary. According to a survey by the anonymous professional network Blind, tech workers are willing to accept lower pay in exchange for a better quality of life.


DishBrain Receives $600,000 Research Grant for AI and Human Brain Cells Fusion 

Image Credits: DHL

Australia’s Department of Defence and the Office of National Intelligence (ONI) have jointly awarded a $600,000 grant for research on fusing artificial intelligence with human brain cells.

“The work merges the fields of artificial intelligence and synthetic biology to create programmable biological computing platforms,” said associate professor Adeel Razi from Monash University’s Turner Institute for Brain and Mental Health.

DishBrain, a culture of brain cells capable of playing the classic video game Pong, was developed by a research team drawn from Monash University and Cortical Labs. Hundreds of thousands of living, lab-grown brain cells are trained to perform various activities, including playing Pong. Electrical activity in a multi-electrode array tells the cells when the paddle is striking the ball and feeds the outcome back to them.


The goal of the DishBrain study is to understand the biological processes behind continuous learning. With the help of this grant from the National Intelligence and Security Discovery Research Grants program, Razi said, the team will build better AI systems that mimic the learning capabilities of these biological neural networks.

In a paper published in the journal Neuron, the researchers said that synthetic biological intelligence, previously confined to the realm of science fiction, may now be feasible.

According to Razi, the researchers were awarded the National Security Science and Technology Centre grant because a new kind of machine intelligence that can learn throughout its lifetime is needed now more than ever. Such intelligence, he said, will advance machine learning for technologies such as autonomous drones, delivery robots, and self-driving automobiles.


Zoho CEO Says AI Will Replace Roles, Not Employees

Image Credits: CIO

Sridhar Vembu, co-founder and CEO of Indian SaaS giant Zoho, stated on 21 July that artificial intelligence will only replace roles, not employees, amid discussions about how it will replace jobs.

During his keynote address to CIOs from various industries at an event organized by ManageEngine and Zoho in Chennai, Vembu said, “Language models are generating human-sounding, plausible text but it can be a fiction and it is a problem. At Zoho, we believe that AI can only replace roles but people will still matter. It reflects an organization’s philosophy.”  

This comes at a time when businesses are increasingly attempting to automate services using AI. Suumit Shah, a co-founder of Dukaan, recently said that the e-commerce SaaS company had let go of more than 90% of its customer care staff.


Vembu from Zoho also mentioned that the initial AI frenzy has subsided and that the technology is now developing to be more advantageous for businesses. “The crest of the current wave has been reached. We are currently engaged in a grueling, lengthy process to make technology helpful for businesses,” he said.

Vembu also discussed a poll that was carried out among roughly 8,000 people who are a part of the Zoho ecosystem. 

“We observed that around 52% of users use ChatGPT regularly, 30% of users used it in the past, although utilization is declining, and 15% of users have completely ceased using it. I fall under the bracket of 30%,” he said. More significant use cases are starting to emerge as the initial usage frenzy declines, according to the CEO.


Seven AI Tech Companies Agree to Safeguards in the US

Image Credits: MSNBC

The White House stated on Friday that seven major US AI companies have voluntarily agreed to safeguards on the technology’s development. The companies also promised to manage the dangers of the new capabilities even as they compete over AI’s potential.

At a meeting with President Joe Biden at the White House on Friday afternoon, the seven tech giants—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—formally declared their commitment to new standards in the areas of safety, security, and trust.

The announcement comes as the businesses compete to develop the most advanced versions of AI that provide potent novel techniques to create writing, images, music, and video without the involvement of humans. 


However, as these systems grow more capable, there are growing concerns about the spread of misinformation, alongside apocalyptic warnings about a “risk of extinction” resulting from technological advances.

As Washington and governments around the world scramble to put legal and regulatory frameworks for AI in place, the voluntary safeguards are only a first, tentative step. Having largely failed to regulate social media and other technologies, the Biden administration and lawmakers now face pressure to respond to this rapidly evolving technology.


Singapore Releases Draft Guidelines on Using Personal Data for AI Training

Image Credits: TVX

Singapore recently published draft guidelines on safeguarding personal data when it is used to develop artificial intelligence models and systems. The document describes how the nation’s Personal Data Protection Act (PDPA) will apply in these circumstances.

The guidelines aim to ensure accountability and transparency in the use of personal data for AI training. They offer firms best practices for being transparent about how AI systems use personal data to make judgements, predictions, and recommendations.

According to the guidelines, when gathering personal data for AI systems, the principles cover consent, accountability, and notification responsibilities. They also identify two exceptions: business development and research.


The guidelines advise performing a data protection impact assessment for AI systems that use personal information. This assessment evaluates the effectiveness of risk-mitigation and data-remediation measures.

When developing, training, and maintaining AI systems, organizations should implement the necessary technical measures and legal safeguards to keep data safe. They should also practise data minimisation, using only the personal data features that are essential for training and improving the AI system.
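
To illustrate the data-minimisation practice described above (a hypothetical sketch, not taken from the PDPC document; the field names are invented): keep only the features the model actually needs, and replace direct identifiers with a one-way pseudonym before training.

```python
import hashlib

# Features assumed (for this example only) to be essential for model training.
ESSENTIAL_FEATURES = {"age_band", "purchase_category", "region"}

def minimise_record(record):
    """Reduce a raw customer record to the essential training features,
    swapping the direct identifier for a one-way pseudonym."""
    minimised = {k: v for k, v in record.items() if k in ESSENTIAL_FEATURES}
    digest = hashlib.sha256(record["email"].encode("utf-8")).hexdigest()
    minimised["user_pseudonym"] = digest[:12]
    return minimised

raw = {
    "email": "jane@example.com",
    "full_name": "Jane Tan",
    "age_band": "30-39",
    "purchase_category": "groceries",
    "region": "central",
}
clean = minimise_record(raw)
# The name and raw email address never reach the training set.
```

In practice a keyed hash (HMAC with a secret key) would be safer than a plain digest, since common identifiers such as email addresses can be brute-forced from an unkeyed hash.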

The Personal Data Protection Commission (PDPC) is inviting public suggestions and comments on the draft guidelines until August 31.


GitHub Makes Copilot Chat Feature Available in Public Beta

Image Credits: GitHub

Copilot Chat, a ChatGPT-like tool designed to assist developers with coding, has been made accessible as a limited public beta for enterprise businesses and organizations, according to GitHub. The Copilot Chat beta, according to GitHub, will be accessible to “all business users” via the Visual Studio and Visual Studio Code apps from Microsoft.

The chatbot was first introduced back in March as the primary feature of GitHub’s Copilot X programme, an expansion of its original Copilot code-completion tool built on OpenAI’s GPT-4 model. The tool aims to save developers time by letting them carry out some of their most complex tasks with simple prompts.

According to GitHub, Copilot Chat can provide the most pertinent assistance in a developer-specific environment since it is contextually aware of the code being put into the code editor and any error alerts. GitHub Copilot Chat’s key features include “simple troubleshooting” to find possible problems, real-time help suited to particular coding projects, and coding analysis that clarifies code suggestions and difficult coding ideas.


Mario Rodriguez, GitHub’s vice president of product, said the programme will increase productivity tenfold and enable even inexperienced coders to build entire applications or debug vast arrays of code in minutes instead of days. “This means 10 days of work, done in one day,” Rodriguez remarked.

The new Copilot X system from GitHub is being built with more features than just Copilot Chat. For instance, the business is working to integrate its “Hey, GitHub!” voice-to-code interactions into the programme. The release schedule for the other Copilot X features has not been disclosed, according to GitHub. 


OpenAI Introduces Custom Instructions Feature for Enhanced ChatGPT Experience 

Image credits: AD

To provide users more control over ChatGPT’s responses, OpenAI has unveiled a new feature dubbed “custom instructions.” Users will be able to interact with ChatGPT more effectively and easily by using fewer prompts, thanks to the addition of custom instructions. 

ChatGPT will now be able to remember your conversation context based on your chosen preferences, enabling a more customized and tailored AI interaction experience.

For instance, while asking for suggestions, a developer may describe their preferred programming language or a teacher might mention that they are teaching maths to sixth graders. Users can also choose the size of their family, which enables ChatGPT to offer apt suggestions for meals, grocery shopping, and travel arrangements that are catered to their particular requirements.
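
In effect, custom instructions behave like a standing preamble applied to every new conversation. A rough sketch of the equivalent effect using a chat-style message list (a hypothetical helper, not OpenAI’s implementation):

```python
def with_custom_instructions(about_me, how_to_respond, user_message):
    """Build a chat message list that mimics ChatGPT's custom instructions:
    the stored preferences are prepended once as a system message, so the
    user never has to repeat that context in each prompt."""
    system_prompt = (
        f"About the user: {about_me}\n"
        f"How to respond: {how_to_respond}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message},
    ]

messages = with_custom_instructions(
    about_me="A teacher preparing maths lessons for sixth graders.",
    how_to_respond="Keep explanations simple and suggest classroom activities.",
    user_message="Suggest a warm-up exercise on fractions.",
)
```

The point of the design is that the preferences are stated once and persist, rather than being restated in every prompt.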


Plus plan users get access to the beta version of the feature first, and it will gradually be made available to all users over the coming weeks.

The update coincides with OpenAI’s decision to double the number of messages ChatGPT Plus subscribers can send to GPT-4 at any given time: starting next week, users will be able to send up to 50 messages every three hours, substantially expanding the scope for AI-powered conversations.

Notably, the company said it would use custom instructions to improve the model’s user experience. However, “you can disable this via data controls”, the company added.


Google Tests AI Tool Genesis That Can Write News Articles

Image credits: AD

Google is building a tool that can compose news articles, the latest example of how artificial intelligence could reshape white-collar work. The tool was pitched as a “helpmate” to News Corp, the Wall Street Journal, the New York Times, and the Washington Post.

Google said it was in the early stages of exploring the technology, which it claimed could offer journalists suggestions for headlines or alternative writing approaches. It emphasized that the tool is not meant to replace journalists.

It stated, “These tools are not meant to take the position of the reporting, article development, and fact-checking functions performed by journalists. Our intention is to provide journalists with the option to utilize these cutting-edge tools in a way that improves their output, much as how we already provide people with assistive capabilities like Gmail and Google Docs.”


When they read the pitch, two New York Times executives complained that it “seemed to take for granted the effort that went into producing accurate and artistic news stories.”

According to a source familiar with the initiative, Google regarded the tool as a chance to steer the publishing industry away from the pitfalls of generative AI and that it would act as a personal assistant for journalists by automating some activities.

The news follows an agreement between OpenAI and the Associated Press that lets the ChatGPT maker train its AI models, which need significant amounts of data to generate credible responses, on the news organization’s archive of stories.


Chinese OpenAI Competitor Zhipu AI Gets Meituan Funding

Image Credits: Zhipu

Meituan, a major Chinese food delivery company with a market cap of almost $100 billion as of this writing, has invested in Zhipu AI, one of the most promising OpenAI rivals in China.

According to business filings, a Meituan subsidiary that was already a shareholder in a Zhipu AI affiliate was recently registered as a direct shareholder and now holds 10% of the company.

The company hasn’t specified how much money it has raised to date. All that is known is that it raised “hundreds of millions of yuan” in a Series B investment last September. Qiming Venture Partners, Legend Capital, and Tsinghua Holdings are among its backers.


Many Chinese businesses are striving to create large language models (LLMs) that might compete with their Western counterparts. One such business, Zhipu AI, is academic in origin, having been established by the renowned Tsinghua University in China. Tang Jie, a professor in the university’s Department of Computer Science and Technology, is the startup’s founder and CEO.

Recently, Zhipu open-sourced ChatGLM-6B, its bilingual (Chinese and English) conversational AI model. The model has six billion parameters and can run LLM inference on a single consumer-grade graphics card.

The company has also previously open-sourced GLM-130B, a more capable, general-purpose variant with 130 billion parameters. A closed beta of ChatGLM, its user-facing chatbot application, is currently available, aimed initially at academic and enterprise users.
