Researchers question safety of AI as a digital therapist

More people are now using AI chatbots to talk about personal problems. But new research raises doubts about how safe these digital conversations really are.

Human psychologists work under established ethical rules and can be held accountable if they make mistakes.

This is not the case for AI systems that provide mental health advice.

This is shown in a study from Brown University, published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.

Researchers examined how large language models perform when asked to act as therapists.

The authors identify 15 ethical risks associated with the use of so-called LLM counselors and call for clear ethical and legal standards in this field.

Tested in practice

In the study, seven counselors with experience in cognitive behavioral therapy conducted conversations with AI models from OpenAI, Anthropic, and Meta, among others.

Three licensed psychologists then reviewed the conversations to identify potential problems.

According to the study, challenges arose in several areas. The chatbots did not always take the user’s situation into account, failed to handle crises adequately, and displayed potential bias related to gender, culture, or religion.

The researchers also describe what they call deceptive empathy. This refers to responses that sound caring but are not based on genuine understanding.

Need for caution

The study does not conclude that AI cannot be used in mental health care. According to the researchers, the technology may help make support more accessible.

However, the current systems do not meet the ethical standards that apply to trained therapists.

At the same time, there are currently no clear rules regarding who is responsible if an AI chatbot provides problematic advice.

Sources: Science Daily and the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
