It started with a conversation between 14-year-old Zeynep Demirbas and a family friend who worked as a psychologist.
The friend mentioned that some insurance companies were exploring artificial intelligence tools like ChatGPT for mental health support.
The idea sounded promising: AI is cheaper, faster, and more accessible than traditional therapy.
But Zeynep had doubts. She already knew that AI could make mistakes or agree with false statements.
That made her wonder whether people should really trust chatbots with something as sensitive as mental health.
Testing AI’s ability to detect stress
To find out, Zeynep designed a project to test several AI systems, including large language models (LLMs).
She wanted to see if they could accurately detect stress in human writing.
She collected more than 3,500 Reddit posts. Each post had already been labeled by human raters as either containing stress or not.
Then, she asked different AI models to analyze the same posts and identify which ones showed signs of stress.
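Her exact prompts have not been published, but this kind of labeling task is often set up as a short classification prompt. The sketch below is a hypothetical illustration using the openai Python library; the model name, prompt wording, and helper function are assumptions, not details from her project.

```python
# Hypothetical sketch of LLM-based stress labeling; the model name,
# prompt, and label_post helper are illustrative, not from the study.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def label_post(post_text: str) -> str:
    """Ask the model to label one Reddit post as 'stress' or 'no stress'."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study tested ChatGPT
        messages=[
            {"role": "system",
             "content": "Reply with exactly one label: stress or no stress."},
            {"role": "user", "content": post_text},
        ],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response.choices[0].message.content.strip().lower()
```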
To measure their performance, Zeynep used a metric called an F1-score. It balances precision (how many of the posts a model flags truly show stress) with recall (how many of the stress posts the model actually catches).
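As a worked illustration with made-up numbers (not figures from her study), the F1-score is the harmonic mean of those two quantities:

```python
# Made-up numbers, used only to show how an F1-score is computed.
precision = 0.80  # of the posts flagged as stressed, 80% truly were
recall = 0.75     # of the truly stressed posts, 75% were caught

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.774, i.e. roughly 77 percent
```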
The results were surprising. A model built specifically for mental health performed best, with an F1-score of about 82 percent. ChatGPT, however, reached only about 74 percent.
Even more surprising, a much simpler model known as a “random forest” outperformed ChatGPT.
That model, which combines the votes of many simple decision trees, is considered an older and less advanced technique than large language models.
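For readers curious what training such a model looks like, here is a minimal, generic sketch using scikit-learn; the tiny example dataset and settings are invented, not her actual code.

```python
# Generic random-forest text classifier; the two example posts and all
# settings are invented for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "I can't sleep, exams are crushing me",
    "Had a relaxing walk in the park today",
]
labels = [1, 0]  # 1 = stress, 0 = no stress (as a human rater might label them)

# Convert raw text into numeric word-weight features.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts)

# A random forest combines the votes of many decision trees.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, labels)

new_post = vectorizer.transform(["deadlines are piling up and I am overwhelmed"])
print(model.predict(new_post))  # 1 means the forest flags the post as stressed
```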
What her research reveals
Zeynep's findings suggest that chatbots are far from ready to replace human therapists.
While AI can process information quickly, it lacks emotional understanding and human empathy.
However, she also believes that AI could still play a supporting role.
Instead of serving as a therapist, AI might help identify people who are struggling and connect them with professionals who can help.
Zeynep hopes to continue her research by exploring whether AI systems show bias when analyzing writing from people of different genders.
Since these models are trained on human language, they may reflect the same stereotypes found in society.
Her project earned her a finalist spot in the 2025 Thermo Fisher Scientific Junior Innovators Challenge, a competition run by the Society for Science.
This article is based on information from Science News Explores.
