OpenAI Making ChatGPT More Sensitive To Mental Health

The tool will now be better at detecting signs of distress so ChatGPT can respond more appropriately.

OpenAI is seeking to make ChatGPT more sensitive to users’ mental health.

Starting this week, the chatbot app will prompt users to take breaks from lengthy conversations. The tool will also refrain from giving direct advice regarding personal challenges. Instead, it will help users decide for themselves by asking questions or weighing pros and cons.

OpenAI said the move comes after situations in which the chatbot fell short in “recognizing signs of delusion or emotional dependency.”

The tool will now be better at detecting signs of mental or emotional distress so ChatGPT can respond more appropriately and point people to evidence-based resources when needed.

The updates appear to be a continuation of OpenAI’s effort to keep users, particularly those who treat ChatGPT as a therapist or friend, from becoming too reliant on the emotionally validating responses the tool has gained a reputation for.