Artificial intelligence helps doctors predict patients’ risk of dying, study finds: ‘Sense of urgency’

With research showing that only 22% of Americans keep a written record of their end-of-life wishes, a team at OSF HealthCare in Illinois is using artificial intelligence to help physicians determine which patients have a higher chance of dying during their hospital stay.

The team developed an AI model that is designed to predict a patient’s risk of death within five to 90 days after admission to the hospital, according to a press release from OSF. 

The goal is to give clinicians time to have important end-of-life discussions with these patients.

“It’s a goal of our organization that every single patient we serve would have advance care planning discussions documented, so we could deliver the care that they wish — especially at a sensitive time like the end of their life, when they may not be able to communicate with us because of their clinical situation,” said lead study author Dr. Jonathan Handler, OSF HealthCare senior fellow of innovation, in an interview with Fox News Digital.

If patients get to the point where they are unconscious or on a ventilator, for example, it may be too late for them to convey their preferences.

Lead study author Dr. Jonathan Handler is senior fellow of innovation with OSF HealthCare in Illinois. His team developed an AI model that’s designed to predict a patient’s risk of death within five to 90 days after admission to the hospital. (OSF HealthCare)

Ideally, the mortality predictor would prevent situations in which patients die without the full benefit of the hospice care they could have received if their plans had been documented sooner, Handler said.

Given that a typical hospital stay lasts about four days, the researchers set the prediction window to begin at five days, ending it at 90 days to create a “sense of urgency,” Handler noted.

The AI model was tested on a dataset of more than 75,000 patients spanning a range of races, ethnicities, genders and socioeconomic backgrounds.

The research, recently published in the Journal of Medical Systems, showed that among all patients, the mortality rate was one in 12 people.

But for those the AI model flagged as more likely to die during their hospital stay, the mortality rate rose to one in four, three times the overall rate.
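
As a quick check of those figures (using only the rates reported here, not the study’s underlying data), the ratio works out to exactly three:

```python
# Back-of-the-envelope check of the rates reported in the article.
overall_rate = 1 / 12   # mortality among all patients in the test set
flagged_rate = 1 / 4    # mortality among patients the model flagged as high risk

lift = flagged_rate / overall_rate
print(f"Overall mortality rate: {overall_rate:.1%}")        # ~8.3%
print(f"Flagged-group mortality rate: {flagged_rate:.1%}")  # 25.0%
print(f"Ratio over the overall rate: {lift:.1f}x")          # 3.0x
```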

A team at OSF HealthCare in Illinois (shown here) is using artificial intelligence to help physicians determine which patients have a higher chance of dying during their hospital stay. (OSF HealthCare)

The model was tested both before and during the COVID-19 pandemic, with nearly identical results, the research team said.

The patient mortality predictor was trained on 13 different types of patient information, said Handler. 

“That included clinical trends, like how patients’ organs are functioning, along with how often they’ve had to visit the health care system, the intensity of those visits, and other information like their age,” he said. 

“Then the artificial intelligence uses that information to make a prediction about the likelihood that the patient will die within the next five to 90 days.”
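
The study does not disclose the algorithm or the exact feature definitions, so the sketch below is only an illustration of what a classifier of this general shape might look like: a gradient-boosted model trained on 13 admission-level features and read out as a five-to-90-day risk of death. The synthetic data, model choice and 25% flagging threshold are assumptions for illustration, not details from the OSF work.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs: each row is one admission, each column one of the
# kinds of signals the article mentions (organ-function trends, prior visit
# frequency and intensity, age, ...). Real inputs would come from the EHR.
n_patients, n_features = 5_000, 13
X = rng.normal(size=(n_patients, n_features))

# Synthetic label: 1 if the patient died within 5 to 90 days of admission.
# The outcome here is simulated from a couple of features purely for demo.
logits = 0.8 * X[:, 0] + 0.5 * X[:, 3] - 2.5
y = rng.random(n_patients) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# The model outputs a probability of death in the 5-to-90-day window,
# which a clinician could read as a "confidence level."
risk = model.predict_proba(X_test)[:, 1]
print(f"Mean predicted risk: {risk.mean():.3f}")
print(f"Patients flagged above a 25% risk threshold: {(risk > 0.25).sum()}")
```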

The model provides a physician with a probability, or “confidence level,” as well as an explanation as to why the patient has a higher than normal risk of death, Handler said.

“At the end of the day, the AI takes a bunch of information that would take a long time for a clinician to gather, analyze and summarize on their own — and then presents that information along with the prediction to allow the clinician to make a decision,” he said.
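
OSF has not published how that explanation is generated. One simple way to pair a risk probability with a “why,” shown below with a logistic regression and hypothetical feature names, is to report the features that push an individual patient’s score upward; this is an illustration of the idea, not the OSF model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical, human-readable stand-ins for a few of the model's inputs;
# the real feature set and algorithm have not been made public.
feature_names = ["age", "creatinine_trend", "oxygen_requirement",
                 "admissions_last_year", "ed_visit_intensity"]

X = rng.normal(size=(2_000, len(feature_names)))
y = rng.random(2_000) < 1 / (1 + np.exp(-(1.2 * X[:, 1] + 0.9 * X[:, 2] - 2.0)))

clf = LogisticRegression().fit(X, y)

# For one patient, report the predicted risk plus the features that push the
# score upward, approximated here by each coefficient * feature value.
patient = X[0]
risk = clf.predict_proba(patient.reshape(1, -1))[0, 1]
contributions = clf.coef_[0] * patient
top = np.argsort(contributions)[::-1][:3]

print(f"Predicted 5-to-90-day mortality risk: {risk:.1%}")
for i in top:
    print(f"  {feature_names[i]}: contribution {contributions[i]:+.2f}")
```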

A life flight heads to Saint Francis Medical Center, part of OSF HealthCare. (OSF HealthCare)

The OSF researchers were inspired by a similar AI model built at NYU Langone, Handler said.

“They had created a 60-day mortality predictor, which we attempted to replicate,” he said. 

“We think we have a very different population than they do, so we used a new kind of predictor to get the performance that we were looking for, and we were successful in that.”

The predictor “isn’t perfect,” Handler acknowledged; an elevated risk prediction doesn’t mean the patient will die.

“But at the end of the day, even if the predictor is wrong, the goal is to stimulate the clinician to have a conversation,” he said.

“Ultimately, we want to meet the patients’ wishes and provide them with the end-of-life care that best meets their needs,” Handler added.

The goal is for the clinicians to have enough time to have important end-of-life discussions with those patients, researchers said. (iStock)

The AI tool is currently in use at OSF. Handler noted that the health care system “attempted to integrate this as seamlessly as possible into the clinicians’ workflow in a way that supports them.”

“We are now in the process of optimizing the tool to ensure that it has the greatest impact, and that it supports a deep, meaningful and thoughtful patient-clinician interaction,” Handler said. 

AI expert points out potential limitations

Dr. Harvey Castro, a Dallas, Texas-based board-certified emergency medicine physician and national speaker on artificial intelligence in health care, said he recognizes the potential benefits of OSF’s model, but pointed out that it may have some risks and limitations.

One is the potential for false positives. “If the AI model incorrectly predicts a high risk of mortality for a patient who is not actually at such risk, it could lead to unnecessary distress for the patient and their family,” Castro said.

False negatives present another risk, Castro pointed out. 

“If the AI model fails to identify a patient who is at high risk of mortality, crucial end-of-life discussions might be delayed or never take place,” he said. “This could result in the patient not receiving the care they would have wished for in their final days.”

“Ethical exploration of AI’s role in health care is paramount, especially when dealing with life and death predictions,” Castro said. (iStock)

Additional potential risks include an over-reliance on AI, data privacy concerns, and possible bias if the model is trained on a limited dataset, which could lead to disparities in care recommendations for other patient groups, Castro warned.

These types of models should be paired with human interaction, the expert noted.

“End-of-life discussions are sensitive and can have profound psychological effects on a patient,” he said. “Health care providers should combine AI predictions with a compassionate human touch.”

Continuous monitoring and feedback are crucial to ensure that such models remain accurate and beneficial in real-world scenarios, the expert added.

“Ethical exploration of AI’s role in health care is paramount, especially when dealing with life and death predictions.”
