Artificial intelligence determines who gets care: Algorithmic bias in post-acute care decision-making

Artificial intelligence-driven decision-making tools increasingly determine which post-acute care services patients receive and which they do not. As a health tech CEO working with hospitals, skilled nursing facilities (SNFs), and accountable care organizations (ACOs) across the country, I’ve watched algorithms recommend against, or cut short, needed services in ways that raise red flags. In one high-profile case, an insurance company's software predicted that an 85-year-old patient would recover from a serious injury in 16.6 days. On day 17, her nursing home recovery payments were cut off, even though she was still in pain and unable to dress or walk on her own. A judge later rejected the decision as “speculative,” but by then she had exhausted her savings paying out of pocket for the care she still needed. Unfortunately, this is not an isolated incident. It highlights how algorithmic bias and heavy automation shape coverage determinations for home health aides, medical devices, rehabilitation stays and respite care.
Researchers have found that some healthcare algorithms inadvertently replicate human biases. One widely used program for identifying high-risk patients was shown to systematically favor less ill white patients over more seriously ill Black patients because it used medical spending as a proxy for medical need. Because fewer dollars had historically been spent on Black patients with the same conditions, the algorithm underestimated their risk, effectively denying many Black patients access to additional care management until the bias was discovered. The same dynamic can easily carry over into coverage approvals whenever an algorithm leans on demographic or socioeconomic data.
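To make that mechanism concrete, here is a minimal sketch in Python on synthetic data. The numbers, the "access gap" factor, and the referral cutoff are illustrative assumptions with no relation to the actual commercial tool; the sketch only shows how a model that treats spending as the label for need will rank an underspent group as lower risk.

```python
# Minimal illustration (synthetic data) of proxy-label bias:
# group "B" incurs lower spending than group "A" at the same level of
# illness, mirroring documented access gaps. All numbers are made up.
import random

random.seed(0)

def simulate_patient(group):
    illness = random.uniform(0, 10)              # true medical need
    access_gap = 0.7 if group == "B" else 1.0    # less is spent on group B
    spending = illness * 1000 * access_gap       # the label the model learns
    return {"group": group, "illness": illness, "spending": spending}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# "Risk score" = predicted spending (the proxy). Patients above the
# cutoff (top 20% of spending) are referred to extra care management.
cutoff = sorted(p["spending"] for p in patients)[int(0.8 * len(patients))]

for group in ("A", "B"):
    flagged = sum(1 for p in patients
                  if p["group"] == group and p["spending"] >= cutoff)
    high_need = sum(1 for p in patients
                    if p["group"] == group and p["illness"] >= 8)
    print(f"Group {group}: {flagged} referred, {high_need} truly high-need")
```

With the access gap in place, group B receives only a fraction of the referrals that group A does, even though both groups are equally ill, which is essentially the pattern the researchers documented.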
I have observed AI-based underwriting tools weigh non-clinical variables such as a patient’s age, zip code, or “living situation,” and that can create problems. Incorporating social determinants into algorithms is a double-edged sword: in theory it can improve care, but experts warn it often widens disparities. Using zip code or income data, for example, can reduce access to services for poor patients if it is not handled carefully. In practice, I've seen patients from underserved communities receive fewer approved hours of home-based care, as if the software assumed those communities could make do with less. The bias may not be intentional, but when an algorithm's design or its data reflect systemic inequities, disadvantaged groups pay the price.
Flawed assumptions in discharge planning
Another subtle form of bias comes from flawed assumptions in discharge planning tools. Some hospital case management systems now use AI predictions to recommend post-discharge care plans, but they don’t always properly account for human factors.
A common problem with AI-driven decisions about discharge planning, respite care, and medical equipment is that the algorithms make assumptions about home care and family support. In theory, knowing that a patient has family at home should help ensure support. In practice, these systems have no idea whether relatives are able or willing to provide care. We had a case where discharge software flagged an elderly stroke patient as low risk because he lived with his adult son, on the assumption that someone would be home to help. What the algorithm didn’t know was that the son worked two jobs and was away from home most of the time. The tool nearly sent this patient home with minimal home health support, which could have ended in a crisis or an emergency hospital visit had our team not intervened. This is not a hypothetical concern: even federal nursing guidance warns never to assume that a family member present in the hospital will become a caregiver at home. Yet artificial intelligence misses this nuance.
These tools lack the human context of family dynamics; they cannot tell a willing, capable caregiver from one who is absent, elderly, or overwhelmed. Clinicians can pick up on that difference; computers usually can't. The result is that some patients end up without the services they truly need.
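A simplified, hypothetical scoring rule shows how this failure mode arises in practice. The feature names, weights, and the caregiver_hours_per_day field below are my own illustrative assumptions, not any vendor's actual model.

```python
# Hypothetical sketch of a discharge-planning score that treats
# "lives with family" as a blanket proxy for available home support.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    diagnosis: str
    lives_with_family: bool
    caregiver_hours_per_day: float  # what the naive rule never asks about

def naive_discharge_risk(p: Patient) -> float:
    """Higher score = more post-discharge support recommended."""
    score = 0.5
    if p.age >= 75:
        score += 0.2
    if p.diagnosis == "stroke":
        score += 0.2
    if p.lives_with_family:
        score -= 0.3  # assumes a co-resident relative equals a caregiver
    return score

def caregiver_aware_risk(p: Patient) -> float:
    """Same score, but only credits family support that actually exists."""
    score = naive_discharge_risk(p)
    if p.lives_with_family and p.caregiver_hours_per_day < 2:
        score += 0.3  # undo the credit when the relative is rarely home
    return score

# The stroke patient whose son works two jobs and is rarely home.
patient = Patient(age=82, diagnosis="stroke", lives_with_family=True,
                  caregiver_hours_per_day=1.0)
print("Naive score:          ", round(naive_discharge_risk(patient), 2))   # looks low risk
print("Caregiver-aware score:", round(caregiver_aware_risk(patient), 2))   # flags real need
```

The fix is not exotic: the second function simply asks the question a discharge planner would ask. The problem is that many deployed tools never collect that answer.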
Steps to Correct Errors in Algorithmic Care
As advanced technologies spread across the healthcare continuum, especially in post-acute care, mistakes like the ones described above are bound to occur. The difference is that the impact of these errors falls hardest on vulnerable and diverse patient populations who already face significant challenges. Nonwhite patients are often at higher risk of readmission, a risk compounded by low income and lack of insurance.
If there's a silver lining, it's that the healthcare industry is starting to grapple with these issues. The exposure of biased and opaque AI tools has sparked calls for change, and concrete steps are following. Regulators, for example, are already getting involved. The Centers for Medicare and Medicaid Services recently proposed new rules that would limit the use of black-box algorithms in Medicare Advantage coverage decisions. If finalized, the rules would require insurers, starting next year, to ensure that prediction tools account for each patient's individual circumstances rather than blindly applying a general formula, and would require qualified clinicians to review AI-recommended denials to confirm they are consistent with medical reality. These proposals echo what frontline experts have long advocated: algorithms should assist, not override, sound clinical judgment. That is a welcome step toward correcting the mistakes made so far, although execution will be key. We can and must do better to ensure that our smart new tools actually see individual patients, making them as transparent, impartial, and compassionate as the caregivers we would want for our own families. Ultimately, using AI to reimagine post-acute care should be about improving outcomes and equity, not saving money at the expense of vulnerable patients.
Dr. Afzal is a visionary in healthcare innovation who has been advancing value-based care models for more than a decade. As co-founder and CEO of Puzzle Healthcare, he leads a nationally recognized company specializing in post-acute care coordination and readmission reduction. Under his leadership, Puzzle Healthcare has received accolades from many of the nation's top healthcare systems and ACOs for superior patient outcomes, improved care delivery, and effective reductions in readmissions.



