OpenAI is currently investigating reports of user dissatisfaction with the latest iteration of ChatGPT, based on the GPT-4 model. Users have complained that the chatbot appears uncooperative and lazy when addressing their queries.
For example, when asked for a piece of code, ChatGPT might provide only minimal information and prompt users to complete the task themselves. Some users found its responses rather sassy, with ChatGPT implying that they were fully capable of finishing the work on their own.
As complaints mounted across social media platforms, OpenAI responded via the ChatGPT account on X (formerly Twitter): “We’ve heard all your feedback about GPT4 getting lazier! we haven’t updated the model since Nov 11th, and this certainly isn’t intentional. Model behavior can be unpredictable, and we’re looking into fixing it.”
The response didn’t convince everyone. Replying in the same thread, one X user raised a pointed question: if the model hasn’t been updated, how can it change or “get lazy” when it’s just a static file?
Although OpenAI did not pinpoint the cause of ChatGPT’s laziness, they did reply to the user, saying, “To be clear, the idea is not that the model has somehow changed itself since Nov 11th. It’s just that differences in model behavior can be subtle — only a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns.”
User complaints about ChatGPT’s declining quality aren’t new, however. Reports of it becoming less capable and less accurate date back as far as six months, though OpenAI refuted those earlier claims, unlike this time.
What happens next remains to be seen. As the AI race heats up, with highly capable LLMs like Grok and the Gemini-powered Bard catching up to ChatGPT, even a slight lapse in focus from OpenAI could cost them users, given the abundance of alternatives.