The Impact of Biased Machine Learning in Healthcare: A Closer Look

Health insurers rely on predictive models to identify high-risk members and to develop strategies for member outreach and intervention. But what if those models are biased? A growing body of research shows that existing healthcare disparities are perpetuated and amplified by thoughtless or excessive reliance on AI.

Misrepresentation of Data

The underlying assumption behind medical algorithms is that they draw accurate conclusions from objective data. In reality, doctors are fallible and their decisions can be biased, which produces deeply flawed data for any machine learning model trained on it. Even with the best intentions, thoughtless or excessive reliance on automated technologies can amplify existing healthcare disparities. ML is also vulnerable to the same biases found in observational studies, as well as to biases inherent to the technology itself or introduced during its design and development. Examples include anchoring bias (relying on information that is already known), confirmation bias (favoring data that aligns with currently held beliefs or hypotheses), and availability bias (relying only on the information that happens to be available).
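
To make that last failure mode concrete, here is a minimal Python sketch of availability bias at work. The data is entirely synthetic and the missingness rates are assumptions for illustration: when lab results are missing more often for patients with poor access to care, the routine step of training only on "complete" records quietly drops those patients.

```python
import numpy as np
import pandas as pd

# Availability bias in miniature: if lab results are missing more often
# for patients with poor access to care, training only on "complete"
# records quietly filters those patients out. All data is synthetic and
# the missingness rates are assumptions for illustration.
rng = np.random.default_rng(2)
n = 10_000
df = pd.DataFrame({
    "group": rng.integers(0, 2, n),        # 1 = patients with worse access
    "lab_value": rng.normal(1.0, 0.2, n),  # a feature the model would use
})

# Worse access -> labs ordered less often -> more missing values.
miss_rate = np.where(df["group"] == 1, 0.45, 0.05)
df.loc[rng.random(n) < miss_rate, "lab_value"] = np.nan

# The usual "rely on the information that is available" step.
complete = df.dropna()

print(f"group 1 share of all patients:   {(df['group'] == 1).mean():.1%}")
print(f"group 1 share of training data:  {(complete['group'] == 1).mean():.1%}")
# The model never sees the patients most affected by access barriers.
```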

Moreover, disadvantaged populations often receive care across multiple health systems, which can leave gaps in their data or weight it heavily toward the most prevalent data set. For example, a widely cited 2019 study found that an algorithm marketed and sold to hospitals to predict which patients need additional healthcare management performed poorly for Black patients. The reason is that the algorithm was anchored to past spending data, which did not account for the unequal access and treatment that cause Black patients to incur lower healthcare costs than white patients with the same level of need.
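
A minimal sketch of that spending-proxy failure mode, on synthetic data (the severity distribution and access-barrier multiplier are assumptions for illustration, not figures from the study):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic illustration, not real patient data: two groups share the
# same distribution of true illness severity, but group 1 faces access
# barriers that suppress both its past and future healthcare spending.
group = rng.integers(0, 2, n)
severity = rng.normal(5.0, 1.0, n)       # true health need, identical by group
access = np.where(group == 1, 0.6, 1.0)  # access-barrier multiplier (assumed)

past_spend = severity * access + rng.normal(0, 0.3, n)
future_spend = severity * access + rng.normal(0, 0.3, n)

# Spending-proxy model: predict future spending from past utilization,
# then treat the prediction as a "health need" score.
model = LinearRegression().fit(past_spend.reshape(-1, 1), future_spend)
score = model.predict(past_spend.reshape(-1, 1))

# Flag the top 10% of scores for extra care management, then check how
# often truly high-need patients in each group actually get flagged.
cutoff = np.quantile(score, 0.90)
high_need = severity > 6.5
for g in (0, 1):
    mask = high_need & (group == g)
    print(f"group {g}, truly high-need patients flagged: "
          f"{(score[mask] >= cutoff).mean():.1%}")
# Equal true need, unequal flagging: the score inherits the access gap.
```

Because the score faithfully predicts spending rather than need, the disparity is invisible to standard accuracy metrics; it only shows up when flagging rates are compared at equal true need.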

Lack of Oversight

While AI is making its way into healthcare, many of the ML algorithms healthcare systems rely on have not been properly evaluated and validated. As a result, they may contain hidden biases that unintentionally harm patients. Biased machine learning also puts your company's reputation at risk, especially in healthcare, where the stakes are higher. For example, the 2019 study mentioned above found that an algorithm many hospitals use to decide which patients should receive extra care displayed racial bias: because it used past healthcare spending as a proxy for health status, it incorrectly scored Black patients as healthier than equally sick white patients.

Algorithms that are not regularly reviewed or validated can amplify existing healthcare disparities. This can happen in various ways, including the underrepresentation of certain populations in digital datasets or the failure to validate models externally. A lack of oversight can also lead to the systematic misrepresentation of patients and communities by ML algorithms. In a healthcare setting, that can mean a patient receiving less effective care or being denied access to treatment; ultimately, these errors can have life-threatening consequences.

This is why it is so important for companies to incorporate known methods of evaluating and addressing algorithmic bias into their work. It is also critical for healthcare institutions to create a culture of inclusivity and equity when leveraging new tools like AI and ML.
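
One such known method is a routine subgroup audit: before a model ships, compare its error rates per demographic group on held-out data rather than reporting a single overall number. A minimal sketch, on synthetic data in which one group's labels are deliberately made noisier:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Subgroup audit sketch: report sensitivity (recall) per demographic
# group, not just one overall number. Everything here is synthetic; in
# practice X, y, and group would come from a held-out validation set.
rng = np.random.default_rng(1)
n = 5_000
X = rng.normal(size=(n, 4))
group = rng.integers(0, 2, n)

# Group 1's labels are deliberately noisier, standing in for the poorer
# data quality underrepresented populations often get (an assumption).
noise = rng.normal(0, 1, n) * np.where(group == 1, 2.0, 0.5)
y = (X[:, 0] + 0.5 * X[:, 1] + noise > 0.8).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
pred = model.predict(X_te)

overall = recall_score(y_te, pred)
print(f"overall recall (sensitivity): {overall:.2f}")
for g in (0, 1):
    mask = g_te == g
    r = recall_score(y_te[mask], pred[mask])
    print(f"group {g}: recall {r:.2f} (gap vs overall {r - overall:+.2f})")
# A large negative gap means the model systematically misses high-risk
# patients in that group and should not be deployed as-is.
```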

Inconsistency

As AI grows, it becomes increasingly important to recognize and mitigate biases at each step of the process: data collection, model training, and evaluation. Bias can arise when the data used to train an algorithm differs from the population it will serve, or when insufficient steps are taken to ensure the model is accurate and fair. Annotation bias can also occur when the humans who label data for ML models let their own biases and perspectives influence the annotations. In a study of dermatological ML algorithms, for example, the authors found that annotation bias led to significant health inequities.

Unless we can ensure that ML models are trained on data sets that represent all countries and clinical specialties equally, or are externally validated on diverse patient populations, AI technology could magnify existing healthcare inequity and deepen the chasm of disparities in global healthcare. This requires a holistic approach that is as much about society as it is about algorithms. Addressing these concerns isn't as simple as "just making sure an algorithm doesn't have racial bias." The root causes are complex and often intertwined with anti-minority culture and discrimination, which must be addressed to achieve true equity.
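
The societal roots aside, the narrow technical checks are straightforward to run. One simple safeguard against the train/deployment mismatch described above is to compare the demographic mix of the training data with the population the model will serve, for example with a chi-square goodness-of-fit test. The group names and all proportions in this sketch are illustrative assumptions, not real census or registry figures:

```python
import numpy as np
from scipy.stats import chisquare

# Representativeness check before training: compare the demographic mix
# of the training data with the population the model will serve. The
# group names and all proportions below are illustrative assumptions.
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}
training_counts = {"group_a": 8_200, "group_b": 1_400, "group_c": 400}

n = sum(training_counts.values())
observed = np.array([training_counts[k] for k in population_share])
expected = np.array([population_share[k] * n for k in population_share])

stat, p = chisquare(observed, expected)
print(f"chi-square = {stat:.1f}, p = {p:.3g}")
for k in population_share:
    print(f"{k}: {training_counts[k] / n:.1%} of training data vs "
          f"{population_share[k]:.1%} of target population")
# Large share gaps (and a vanishing p-value) flag the dataset for
# re-weighting, re-sampling, or targeted data collection before training.
```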

Lack of Diversity

Medical ML teams must take a more diverse approach to their work, incorporating experts with different clinical backgrounds and perspectives into research, data collection, model development, and implementation. Bias can creep in anywhere in the process, from the earliest steps of an algorithm's creation, such as study design and data collection, to later ones, such as data cleaning and model selection. Diversity among healthcare providers matters too, as it helps mitigate racial and other algorithmic biases. Even so, a health system's patient population may not represent everyone in its geographic region, leaving its algorithms unable to predict or detect problems for the missing groups. For example, a recent study found that sepsis detection algorithms trained on hospital data missed cases of sepsis in Hispanic children, in part because doctors took Hispanic children's symptoms less seriously and that pattern carried into the training data. Bias in machine learning can have societal, legal, and financial implications.

Moreover, it can exacerbate existing inequalities and amplify discriminatory practices already embedded in the healthcare system. Attempting to fix biased algorithms without understanding the deeper roots of these issues is like trying to cure a disease by treating only one organ in the body. The whole system must be addressed to prevent the continued marginalization of disadvantaged populations in healthcare.
