OpenAI Restructures ChatGPT Research Team to Refine AI Personality

OpenAI’s Big Shift: Personality Now a Core AI Priority

OpenAI is reshaping how its AI models behave. The company recently merged its Model Behavior team — the group responsible for shaping ChatGPT’s personality — into its Post Training division, signaling a major shift in focus.

The move highlights how AI personality, bias, and human-like interaction are now considered critical to the future of OpenAI’s models, including the widely used GPT-4o, GPT-4.5, and GPT-5.

This change also comes at a time when OpenAI faces growing scrutiny over how ChatGPT responds to sensitive conversations, political views, and emotional support queries.


Why OpenAI Restructured the Model Behavior Team

The Model Behavior team, made up of around 14 researchers, was created to ensure AI feels natural, helpful, and balanced during interactions.

The group worked on:

  • Reducing sycophancy (the tendency of a model to agree with whatever the user says).
  • Addressing political bias in responses.
  • Helping define OpenAI’s stance on AI consciousness.
  • Shaping AI personalities to make models more engaging yet responsible.

With the reorganization, the team now reports to Max Schwarzer, who leads Post Training. According to OpenAI’s Chief Research Officer Mark Chen, this consolidation will bring personality work closer to the heart of AI model development.


Joanne Jang’s Next Chapter: OAI Labs

Joanne Jang, the founding leader of the Model Behavior team, is stepping away to launch a new initiative called OAI Labs.

Her new mission? To design and prototype fresh ways for humans to collaborate with AI, moving beyond the typical chat interface.

“I’m excited to explore patterns that move us beyond the chat paradigm,” Jang said. “AI should be tools for thinking, creating, learning, and connecting.”

For now, OAI Labs will report directly to Mark Chen. Jang has also confirmed she is open to exploring collaborations — even with Jony Ive, Apple’s former design chief, who is working with OpenAI on new AI hardware devices.


The Controversy Around GPT-5 Personality

Earlier this year, GPT-5 sparked backlash when users complained it felt colder and less empathetic.

OpenAI explained that the model was designed to reduce sycophancy, but the personality tweaks didn’t sit well with many users. In response, the company:

  • Restored access to older models like GPT-4o.
  • Released updates to make GPT-5 “warmer and friendlier” without losing balance.

This underlines how delicate the process is — users want trustworthy AI, but they also expect it to feel human and supportive.


Why This Reorganization Matters

The restructuring signals three important things:

  1. AI personality is no longer secondary. It’s central to how users experience AI.
  2. Ethical responsibility is growing. OpenAI faces lawsuits, including a wrongful death case involving GPT-4o and a 16-year-old user.
  3. Future models will be more human-centered. By uniting personality work with post-training, OpenAI is aiming for safer, friendlier, and more reliable interactions.

What’s Next for OpenAI and AI Research

Moving forward, OpenAI will likely invest more in:

  • Balancing warmth with responsibility. AI must feel supportive without reinforcing harmful ideas.
  • Experimenting with new interfaces. OAI Labs could redefine how we interact with AI beyond text-based chats.
  • Hardware integrations. Collaborations with tech veterans like Jony Ive could lead to AI-first devices that merge design with intelligence.

This evolution shows how AI is no longer just about raw intelligence — it’s about how it feels to use.


Key Takeaways

  • The Model Behavior team is merging into OpenAI’s Post Training group.
  • Joanne Jang is launching OAI Labs to reimagine AI-human collaboration.
  • GPT-5’s personality controversy proved users care deeply about AI warmth and empathy.
  • OpenAI is moving toward human-centered AI design, balancing friendliness with responsibility.

FAQs

1. What was the Model Behavior team at OpenAI?
It was a research group focused on shaping ChatGPT’s personality, reducing bias, and avoiding sycophantic responses.

2. Why did OpenAI restructure this team?
To integrate personality development more closely with model training, ensuring consistency across future AI systems.

3. Who is Joanne Jang?
She was the founding leader of the Model Behavior team and is now launching OAI Labs to explore new AI interfaces.

4. Why did GPT-5 face criticism?
Users felt it was colder and less empathetic than earlier models, sparking demands for friendlier responses.

5. What’s the future of AI personalities?
Expect AI models to become more empathetic, balanced, and integrated into human-first interfaces beyond chat.


Final Thoughts

OpenAI’s restructuring is a turning point in AI development. The shift shows that personality, empathy, and responsibility are just as vital as accuracy and speed.

For users, this means future versions of ChatGPT will likely feel more natural, supportive, and reliable — a major step toward human-centered AI.

👉 Want to explore more AI insights? Check out our AI news section on PreviewKart and stay ahead of the latest tech innovations.
