‘Annoying’ version of ChatGPT pulled after chatbot wouldn’t stop flattering users


OpenAI has rolled back ChatGPT updates that made the artificial intelligence (AI) chatbot too “sycophantic” and “annoying,” according to the company’s CEO, Sam Altman. In other words, the chatbot had become a bootlicker.

ChatGPT users reported that GPT-4o — the latest version of the chatbot — had become overly agreeable since the update rolled out last week and was heaping praise on its users even when that praise seemed completely inappropriate.

One user shared a screenshot on Reddit in which ChatGPT appeared to say it was “proud” of the user for deciding to come off their medication, BBC News reported. In another instance, the chatbot appeared to reassure a user after they said they saved a toaster over the lives of three cows and two cats, Mashable reported.

While most people will never have to choose between their favorite kitchen appliance and the safety of five animals, an overly agreeable chatbot could pose dangers to people who put too much stock in its responses.

On Sunday (April 27), Altman acknowledged that there were issues with the updates.

“The last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week,” Altman wrote in a post on the social platform X.

On Tuesday (April 29), OpenAI released a statement that confirmed an update from the week prior had been rolled back and that users were now accessing a previous version of ChatGPT, which the company said had “more balanced behavior.”


“The update we removed was overly flattering or agreeable — often described as sycophantic,” OpenAI said in the statement.


OpenAI’s recent update was meant to improve the model’s default “personality,” which is designed to be supportive and respectful of different human values, according to the statement. But while the company was trying to make the chatbot feel more intuitive, it became too supportive and started excessively complimenting its users.

The company said it shapes the behavior of its ChatGPT models with baseline principles and instructions, and uses user signals, such as thumbs-up and thumbs-down ratings, to teach the model to apply those principles. Oversights in this feedback system were to blame for problems with the latest update, according to the statement.

“In this update, we focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time,” OpenAI said. “As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous.”
