
AI concluded that X-rays of knees could reveal whether you drank beer

A new study shows that artificial intelligence can deliver highly accurate answers even when those answers make no scientific sense. The finding is raising concerns in health research.

Researchers asked an artificial intelligence system to determine whether people drank beer or ate bean mash based on X-ray images of their knees.

The task sounds obviously meaningless. Nevertheless, the model managed to deliver surprisingly precise results.

This very outcome forms the basis of a new study from Dartmouth Health, published in Scientific Reports.

What went wrong?

The study is based on more than 25,000 knee X-rays from the National Institutes of Health’s Osteoarthritis Initiative.

The researchers wanted to examine how medical AI models learn to identify supposedly hidden lifestyle patterns.

According to the researchers, the models rely on what is known as algorithmic shortcut learning.

This means that the systems identify easily recognizable patterns that have nothing to do with health or biology.

In this case, these included differences in X-ray equipment, the year the images were taken, and where the images were captured, according to Scientific Reports.
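The mechanism the researchers describe can be illustrated with a deliberately artificial toy sketch (every name and number below is invented for illustration, not taken from the study): a classifier that never looks at anatomy can still score well if an acquisition artifact, such as scanner brightness, happens to track the label in the dataset.

```python
import random

random.seed(0)

# Hypothetical toy example of "algorithmic shortcut learning".
# Each record stands in for an X-ray summarized by two features:
#   knee_signal: carries no information about beer drinking
#   scanner_brightness: an equipment/site artifact that, in this
#     invented dataset, is confounded with the label
def make_record(drinks_beer):
    knee_signal = random.gauss(0.0, 1.0)
    site_offset = 1.0 if drinks_beer else 0.0  # confounded acquisition site
    scanner_brightness = site_offset + random.gauss(0.0, 0.2)
    return (knee_signal, scanner_brightness, drinks_beer)

data = [make_record(i % 2 == 0) for i in range(1000)]

# A "model" that learned nothing about knees: it simply thresholds
# the equipment artifact.
def predict(record):
    _, scanner_brightness, _ = record
    return scanner_brightness > 0.5

accuracy = sum(predict(r) == r[2] for r in data) / len(data)
print(f"Accuracy from the shortcut alone: {accuracy:.2f}")
```

The accuracy comes out high even though the only "biological" feature is pure noise, which is the false reassurance the researchers warn about: closing off one such artifact does not stop a flexible model from latching onto another.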

Accurate but misleading

Peter Schilling, an orthopedic surgeon and co-author of the study, explains that AI can detect patterns that humans overlook.

The problem is that these patterns are not necessarily relevant.

According to Popular Science, the researchers warn that high accuracy can create a false sense that the results are reliable.

Attempts to eliminate this type of error were only partially successful. When one shortcut was closed off, the model simply found another.

More oversight

The study shows that artificial intelligence can appear convincing without understanding the context behind its answers.

According to co-author Brandon Hill, this could lead doctors and researchers to place too much trust in AI-generated results.

The researchers therefore argue that requirements for oversight and documentation should be far stricter before the technology is widely used in the healthcare system.

Sources: Popular Science and Scientific Reports.
