
OpenAI’s new GPT-5 large language model was one of the most hyped and hotly anticipated product announcements of 2025.
But when it finally arrived to rapturous excitement, it was…kinda meh.
The system seemed to lack the personality and understanding of human emotion that GPT-4o had, qualities that led some people to form genuine friendships with that model.
Commentators at the time speculated that GPT-5’s stunted abilities might be part of a cost-cutting measure. But I had a different theory.
I suspected that OpenAI had given the new model an emotional lobotomy in order to avoid the risk of “AI psychosis”, an issue over which the company faced major lawsuits and public anger over the summer.
Turns out, I was right! In a new post on X, OpenAI’s CEO Sam Altman acknowledged that “We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”
Okay, so the wording here is more than a little cringe, and suggests Altman knows very little about mental health: dividing users into binary categories of “Mental Health Issues” and “No Mental Health Issues” ignores the fact that mental health exists on a spectrum.
Still, the core point is exactly what I predicted: OpenAI was afraid they would be blamed for making people’s mental health issues worse, so they crippled the bot for everyone, on purpose.
If you grew attached to GPT-4o, or just want a writing and brainstorming assistant that actually has some foggy clue of how humans work, there’s good news.
Altman says that “In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”
OpenAI is also rolling out age-gating, which will allow other, ahem, “creative” uses of the technology. Read the end of his X post to see what I mean on that one…
Open Lines
I can’t help but think that life would have been much easier for OpenAI and Altman if they had simply said, when GPT-5 first launched, that they were scaling back its emotional brain for safety reasons.
Instead, they rolled out a crippled bot, likely hoping no one would notice. Of course, people did notice, and GPT-5 quickly developed a reputation as a downgrade from earlier models and a big letdown. If OpenAI were a public company, its stock surely would have tanked.
Having tested GPT-5 extensively in the real world, I’ve found that the bot is indeed much better than earlier versions at certain important things, namely presenting accurate facts with minimal hallucinations. It’s also a beast at coding.
Adding back some emotional intelligence will make GPT-5 that much better, and certainly a big step forward from GPT-4o.
That will be a positive outcome for OpenAI. But they could have avoided many of the launch-related headaches by simply saying upfront: “The new model’s emotional side is still a work in progress! Please bear with us as we work to balance safety and performance.”
Ironically, OpenAI’s communications here show a distinct lack of emotional intelligence. Perhaps GPT-5 Version 1 wrote their go-to-market plan!