AI Bias in Healthtech

By Ellie Passmore
4 minute read
  • Healthcare data used to train AI models is biased, resulting in biased outcomes
  • Even when data is scrubbed to ensure neutrality, things slip through the cracks
  • Bias can lead to incorrect AI learning, misdiagnosis, and even death
  • Addressing bias starts with assessing bias in the data, design, and experience of AI systems

AI Bias in Healthcare

It has been well documented by scholars and laypeople alike that people of color (POC), women, gender-nonconforming people, and non-heterosexual people face greater difficulty accessing care and receiving the same level of care as their white, heterosexual, cisgender male counterparts. These difficulties are further compounded when socioeconomic status and other factors, such as obesity and physical or mental disabilities, are considered. When the bias and discrimination built into the healthcare system and health data are then used to train AI, the learning model replicates and can even amplify these systemic issues.

Understanding AI bias in healthcare is crucial to ensure fair and accurate outcomes. AI systems learn from vast amounts of data, but if that data is biased or incomplete, it can result in biased decision-making. For example, if an AI algorithm is trained on a dataset that predominantly includes data from a certain demographic group, it may not accurately represent the broader population, leading to biased predictions or recommendations.
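One way to catch this kind of imbalance before training is a simple representation audit: compare each demographic group's share of the training data to its share of the population the model will serve. The sketch below is a minimal illustration in Python; the group labels, record counts, and reference shares are all hypothetical, and real audits would draw reference shares from census or patient-population data.

```python
from collections import Counter

def representation_gaps(records, group_key, reference):
    """Compare each group's share of the dataset to a reference
    population share. Returns dataset_share - reference_share per
    group: negative values mean the group is underrepresented."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - ref for g, ref in reference.items()}

# Hypothetical training records and reference population shares
records = ([{"race": "white"}] * 80
           + [{"race": "black"}] * 12
           + [{"race": "hispanic"}] * 8)
reference = {"white": 0.60, "black": 0.13, "hispanic": 0.19}

gaps = representation_gaps(records, "race", reference)
# gaps["hispanic"] is negative here: Hispanic patients make up 8% of
# the dataset but 19% of the reference population.
```

A check like this does not fix bias on its own, but it surfaces the skew described above before a model learns from it.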

Moreover, AI bias in healthcare can manifest in various ways. It can occur in diagnostic algorithms, treatment recommendations, patient triage, and even resource allocation. These biases can disproportionately affect marginalized communities, exacerbating existing health disparities and perpetuating discrimination.

To address AI bias in healthcare, it is important to first understand the ethical implications it presents and the potential consequences it can have on patient care and outcomes.

The Ethical Dilemma

The ethical dilemma surrounding AI bias in healthcare is multifaceted. On one hand, AI has the potential to improve healthcare access, quality, and efficiency. It can assist in diagnosing rare diseases, predicting patient outcomes, and optimizing treatment plans. However, if these AI systems are biased, they may perpetuate existing biases and contribute to health disparities.

Health disparities kill, from an inability to recognize rashes and cancers on Black and brown skin (Cassata 2020) to women being less likely to receive CPR in public due to CPR training manikins being overwhelmingly male (Treisman 2019). Language and cultural differences, too, cause problems, though these are perhaps more easily remedied by engaging in culturally sensitive and relevant practices and ensuring there are medical translators on hand who are versed in the languages of the community they are serving.

The Causes of AI Bias

To effectively address AI bias in healthcare, it is crucial to understand its underlying causes. Several factors contribute to the emergence of AI bias, including biased training data, algorithmic design choices, and societal biases reflected in the data.

As AI becomes more common in healthcare settings and research, healthcare providers, researchers, and developers must account for these biases and social determinants. Non-white populations are underrepresented in deep-learning training data (Levi & Gorenstein 2023), leading to cases where Black and brown patients must be sicker than their white counterparts before an AI model assigns them the same level of risk (Obermeyer et al., 2019).

Even when data is scrubbed to ensure neutrality, things can slip through the cracks. In 2019, a team of researchers at Duke University Hospital developed an algorithm to predict the risk of sepsis in pediatric patients in the emergency department (Levi & Gorenstein 2023). Despite careful quality control of the data and the algorithm, it took nearly three years before a health fellow on the team noticed that doctors at Duke took longer to order diagnostic blood tests for Hispanic children eventually diagnosed with sepsis than for white children. Because the AI was trained on hospital data, it risked learning that Hispanic children develop sepsis more slowly than white children (Levi & Gorenstein 2023), a false algorithmic rule that could prove fatal.

These examples show that deploying AI and deep-learning models in healthcare settings requires careful consideration of the data the algorithm is trained on, the social determinants of health that may need additional focus in the algorithm, and any potential bias in both the training data and the everyday practices underlying that data. AI currently stands to be a changemaker in healthcare, not just in technology but also in equity. With conscientious training and deployment of AI models, healthcare providers, researchers, and developers can change the ‘standard patient’ from one who is white, male, heterosexual, able-bodied, and cisgender to a multitude that represents all patients, not just a few.

Biased training data is a major cause of AI bias. If the training data predominantly includes certain demographic groups or lacks diversity, the resulting AI algorithms may not accurately represent the broader population, leading to biased predictions or recommendations. Algorithmic design choices, such as feature selection or weighting, can also introduce bias if not carefully considered and validated.

Furthermore, societal biases and prejudices can be reflected in the data used to train AI algorithms. Historical discrimination and disparities in healthcare can be inadvertently perpetuated if the data used to train AI systems reflects these biases. It is essential to critically examine the data sources and ensure diversity and fairness in the training process to mitigate AI bias.

Unraveling the causes of AI bias is crucial for developing effective strategies to mitigate bias and promote fairness in healthcare AI systems.

Consequences of AI Bias

The consequences of AI bias in healthcare can have severe implications for patient outcomes and healthcare equity. Biased AI algorithms can lead to incorrect diagnoses, delayed treatments, and inadequate care, compromising patient safety and well-being. These consequences can disproportionately affect marginalized communities, exacerbating existing health disparities.

Moreover, AI bias can perpetuate discrimination and inequality in healthcare resource allocation. Biased algorithms may allocate resources based on biased criteria, leading to unequal access to healthcare services and exacerbating health inequities. This not only violates the principles of fairness and justice but also undermines trust in healthcare systems.

The consequences of AI bias in healthcare highlight the urgent need to develop robust strategies and ethical guidelines to prevent and mitigate bias, ensuring equitable access to healthcare for all individuals.

Ethical Solutions

One ethical solution is to score AI applications for bias, safety, privacy, risk, and inclusion and to promote the transparency of those scores and explainability of AI algorithms. Healthcare organizations and AI developers should strive to make their algorithms and decision-making processes transparent, allowing for scrutiny and identification of potential biases. Additionally, AI systems should provide explanations for their recommendations or predictions, enabling healthcare providers to understand the underlying reasoning so that humans can readily assess potential biases.
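A bias score like the one described above can be as simple as comparing a clinical performance metric across demographic groups. The sketch below computes the true-positive rate (sensitivity) per group and the gap between the best- and worst-served groups, a form of the equal-opportunity difference used in fairness auditing. It is a minimal illustration, not a production scoring system; the labels and group names are toy data.

```python
def tpr_by_group(y_true, y_pred, groups):
    """True-positive rate (sensitivity) per demographic group:
    of the patients who truly had the condition, what fraction
    did the model flag?"""
    stats = {}
    for g in set(groups):
        positives = [i for i, grp in enumerate(groups)
                     if grp == g and y_true[i] == 1]
        hits = sum(1 for i in positives if y_pred[i] == 1)
        stats[g] = hits / len(positives) if positives else None
    return stats

def equal_opportunity_gap(stats):
    """Spread between the best- and worst-served groups."""
    rates = [r for r in stats.values() if r is not None]
    return max(rates) - min(rates)

# Toy example: every patient truly has the condition, but the model
# misses one of group B's cases.
y_true = [1, 1, 1, 1]
y_pred = [1, 1, 1, 0]
groups = ["A", "A", "B", "B"]

stats = tpr_by_group(y_true, y_pred, groups)
gap = equal_opportunity_gap(stats)  # group B is detected less often
```

Publishing per-group numbers like these, rather than a single aggregate accuracy, is one concrete form the transparency described above can take.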

Another ethical solution is to diversify the development and validation of AI algorithms. By involving a diverse group of experts and stakeholders, biases can be identified and addressed more effectively. This can help ensure that AI systems are representative and fair for all patient populations.

Furthermore, ongoing monitoring and evaluation of AI systems are essential to identify and rectify any emerging biases. Regular audits and assessments can help healthcare organizations identify biases in real-world settings and take corrective actions.
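An ongoing audit of this kind can be sketched as a drift check: record a per-group metric (say, sensitivity) at deployment, then periodically recompute it and flag any group whose performance has shifted beyond a tolerance. The numbers and tolerance below are purely illustrative assumptions.

```python
def audit_drift(baseline, current, tolerance=0.05):
    """Flag groups whose current metric differs from the baseline
    by more than the tolerance. Returns {group: (baseline, current)}
    for each flagged group."""
    flagged = {}
    for group, base in baseline.items():
        cur = current.get(group)
        if cur is not None and abs(cur - base) > tolerance:
            flagged[group] = (base, cur)
    return flagged

# Illustrative per-group sensitivities at launch vs. the latest audit
baseline = {"white": 0.82, "black": 0.80, "hispanic": 0.81}
current  = {"white": 0.83, "black": 0.71, "hispanic": 0.80}

flagged = audit_drift(baseline, current)
# Only the group whose performance dropped sharply is flagged,
# prompting a human review before the model keeps running.
```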

The ethical implications of AI bias in healthcare demand urgent attention and action to ensure equitable and unbiased healthcare for all individuals, regardless of their demographics or backgrounds.


Ellie Passmore

Author