
Researchers question safety of AI as a digital therapist


More people are now using AI chatbots to talk about personal problems. But new research raises doubts about how safe these digital conversations really are.

Human psychologists work under established ethical rules and can be held accountable if they make mistakes. The same is not true of AI systems that provide mental health advice.

This is shown in a study from Brown University, published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.

Researchers examined how large language models perform when asked to act as therapists.


The authors of the study write that they have identified 15 ethical risks associated with the use of so-called LLM counselors and call for the development of clear ethical and legal standards in this field.

Tested in practice

In the study, seven counselors with experience in cognitive behavioral therapy conducted conversations with AI models from OpenAI, Anthropic, and Meta, among others.

Three licensed psychologists then reviewed the conversations to identify potential problems.

According to the study, problems arose in several areas: the chatbots did not always take the user’s situation into account, failed to handle crises adequately, and displayed potential bias related to gender, culture, or religion.


The researchers also describe what they call deceptive empathy. This refers to responses that sound caring but are not based on genuine understanding.

Need for caution

The study does not conclude that AI cannot be used in mental health care. According to the researchers, the technology may help make support more accessible.

However, the current systems do not meet the ethical standards that apply to trained therapists.

At the same time, there are currently no clear rules regarding who is responsible if an AI chatbot provides problematic advice.


Sources: Science Daily and the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
