Human psychologists work under established ethical rules and can be held accountable if they make mistakes.
This is not the case for AI systems that offer mental health advice, according to a study from Brown University published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Researchers examined how large language models perform when asked to act as therapists.
The study's authors write that they identified 15 ethical risks associated with the use of so-called LLM counselors, and they call for clear ethical and legal standards in this field.
Tested in practice
In the study, seven counselors with experience in cognitive behavioral therapy conducted conversations with AI models from providers including OpenAI, Anthropic, and Meta.
Three licensed psychologists then reviewed the conversations to identify potential problems.
According to the study, challenges arose in several areas. The chatbots did not always take the user’s situation into account, failed to handle crises adequately, and displayed potential bias related to gender, culture, or religion.
The researchers also describe what they call deceptive empathy: responses that sound caring but are not based on genuine understanding.
Need for caution
The study does not conclude that AI cannot be used in mental health care. According to the researchers, the technology may help make support more accessible.
However, the current systems do not meet the ethical standards that apply to trained therapists.
At the same time, there are currently no clear rules regarding who is responsible if an AI chatbot provides problematic advice.
Sources: ScienceDaily and the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
