ICU Prognosis is Easy, Except When It's Not
How accurate are predictions of mortality by ICU team members? For 560 consecutive patients admitted to a single MICU at the University of Chicago, the authors polled the patient’s attending, fellow, resident, and nurse privately each day, asking simply: “Will this patient survive to discharge?” They collected a total of >6,000 predictions on >2,000 patient-days, and the results were rich and fascinating.
What they found was really a tale of two ICUs, with 72 hours as the turning point. After 3 days, only 20% of the original sample remained in the ICU; the rest had transferred out (the vast majority) or had died.
Of the 433 survivors:
- >75% made it out of the ICU by day 3. Only ~100 in the surviving cohort remained in the ICU after 72 hours.
- 77% had unanimous predictions of survival on all days.
- However, 99 (23% of survivors) had at least one prediction of death.
- 15 patients (3% of survivors) lived despite one or more days of unanimous predictions of death by their care team. And of those unanimously predicted to die on 3+ days, 12% survived.
Of the 127 patients who died:
- ~50% had died by ICU day 3. After 72 hours, only ~60 of the doomed patients remained.
- 72 (57%) were unanimously predicted to die on each and every day they were in the ICU.
- Of the 55 others, 35 (27% of those who died) were unanimously predicted to survive on at least one day.
- One day of unanimous prediction of death had about an 84% positive predictive value for death prior to discharge. When there was any disagreement, predictions of death were correct only 52-66% of the time (depending on the number of dissenters).
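To make the tradeoff behind these numbers concrete, here is a minimal Python sketch of how positive predictive value and sensitivity move in opposite directions as a prediction rule is made stricter. This is not the paper's analysis; the counts are hypothetical round numbers loosely inspired by the figures above, and only the standard formulas (PPV = TP/(TP+FP), sensitivity = TP/(TP+FN)) are assumed.

```python
def ppv(true_pos, false_pos):
    """Positive predictive value: of those predicted to die, the fraction who did."""
    return true_pos / (true_pos + false_pos)

def sensitivity(true_pos, false_neg):
    """Of those who actually died, the fraction the rule flagged."""
    return true_pos / (true_pos + false_neg)

# Hypothetical confusion-matrix counts, NOT figures from the study.
# "Loose" rule: any single prediction of death counts as a flag.
loose = (ppv(80, 40), sensitivity(80, 47))

# "Strict" rule: unanimous prediction of death required -- fewer false
# positives (higher PPV), but more deaths go unflagged (lower sensitivity).
strict = (ppv(72, 15), sensitivity(72, 55))

print(f"loose rule:  PPV={loose[0]:.2f}, sensitivity={loose[1]:.2f}")
print(f"strict rule: PPV={strict[0]:.2f}, sensitivity={strict[1]:.2f}")
```

This is the pattern Meadow describes in his response below: requiring serial days or unanimity raises PPV but lowers sensitivity, so overall discrimination doesn't improve much.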
Patients were by and large African-American, of low socioeconomic status, and relatively young (median age 55). The minority with DNR status (25%) accounted for 77% of deaths; only 7% of those with "full code" status died.
Uncertainty and risk predominated after a few days in the ICU. At 72 hours, the remaining MICU patients had an eventual mortality of 50%, and this coin-flip prognosis held steady at least through ICU day 10 (when 42 of the original patients were still alive in the ICU). The treating team members’ predictions reflected this uncertainty, as they were discordant on 75% of the patients with >3 day ICU stays.
Though limited, this was a remarkable and informative longitudinal study of MICU survival that revealed some of the limits of prognosis. Unfortunately, the authors did not specify which team members disagreed, or whether predictive accuracy increased with experience. This was a single MICU with mostly impoverished urban African-American patients, so results may not be generalizable. Richer outcomes data would have been valuable: we don't know how debilitated or neurologically impaired the survivors were, or how many died in a nursing home or LTAC weeks or months later.
Lead author William Meadow, MD, PhD, responds:
Predictions in the MICU of survival vs. death are imperfect, and if we try to make them more accurate (either by requiring serial days of consistent predictions, or unanimity by multiple observers) the positive predictive value goes up, but the sensitivity goes down. The area under the curve doesn't seem to get any higher. In the end, we're wrong on about half of all patients predicted by any one observer.
The discordant predictions data are pretty interesting. Attendings were more optimistic (i.e., when team members disagreed, attendings were more likely to predict survival); nurses were more likely to predict death when there was discordance. Since, overall, roughly half of the patients with discordant predictions lived, nurses and attendings were roughly equally accurate. Even more surprisingly, experience among team members didn't seem to matter much.
No question, MICU follow-up is crucially important for the next study. We need to know how functional these people are who've been predicted to die, yet survive to discharge. Imagine if all of them went to hospice and only lived another month: the predictions that seemed "wrong" would be viewed differently with a different outcome variable.