Artificial intelligence (AI) has the potential to dramatically improve patient outcomes. AI uses algorithms to evaluate data from around the world, represent that data, and draw inferences from it. From handling administrative tasks to proactively diagnosing diseases, AI can make treatment faster and more effective in clinical settings, especially as the technology continues to improve.
However, AI can suffer from bias, which has significant implications for healthcare. The term "algorithmic bias" describes this problem. It was first defined by the co-directors of the AI for Healthcare: Concepts and Applications and Innovation with AI in Healthcare programs at the Harvard School of Public Health: Trishan Panch, a primary care physician, HSPH Alumni Association president-elect, and co-founder of the digital health company Wellframe, and Heather Mattie, a lecturer in biostatistics and co-director of the Health Data Science master's program.
In a 2019 Journal of Global Health paper, "Artificial Intelligence and Algorithmic Bias: Implications for Health Systems," Panch, Mattie, and Rifat Atun examine algorithmic bias, which they define as the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation, and amplifies inequities in health systems.
In other words, healthcare technology algorithms don’t just reflect social inequalities, they can ultimately exacerbate them. What does this actually mean, how does it manifest itself, and how can it be addressed?
How does algorithmic bias occur in healthcare, and why does it cause so much harm to patients?
Algorithmic bias is not a new problem, nor is it unique to AI. In reality, an algorithm is just a series of steps: recipes and exercise plans are algorithms as much as complex models are. At the heart of all health system challenges, including algorithmic bias, lie questions of values: What medical outcomes are socially important, and why? How much money should go into healthcare? And who benefits from improved outcomes? "This is as much a societal problem as it is an algorithmic problem," says Panch.
"Looking at algorithmic bias as just a technical problem leads to engineering solutions; for example, restricting certain fields of data, such as race or gender. But if the world looks a certain way, that will be reflected, directly or through proxies, in the data and therefore in decision-making."
In fact, even the simple predictive rules for heart disease used in routine medical practice in developed countries for decades have been shown to be biased. The Framingham Heart Study cardiovascular risk score performed well for white patients but poorly for African American patients, which means care can be distributed unequally and imprecisely. In genomics and genetics, an estimated 80 percent of the data collected comes from white participants, so research findings may apply more readily to that group than to underrepresented ones.
Brian Powers, a faculty member of the Applied Artificial Intelligence for Health Care program, co-wrote a groundbreaking 2019 paper in Science showing that an algorithm commonly used by prominent health systems today is racially biased. This has direct and potentially harmful effects on patients, because medical professionals use its output to recommend care to specific people.
What can data science teams do to prevent and reduce algorithmic bias in healthcare?
According to Mattie, "Bias can creep in anywhere in the process of creating an algorithm: from the beginning, with study design and data collection, to data entry and cleaning, to algorithm and model selection, to implementation and dissemination of results. Bias has a trickle-down effect and must be addressed at every step of the process."
Combating algorithmic bias therefore means that data science teams need to include experts with diverse backgrounds and perspectives, not just data scientists with a technical understanding of AI. In their Journal of Global Health paper, Panch and Mattie suggest that clinicians should also be part of these teams, since a deeper understanding of the clinical context improves modeling.
"There is a trade-off between algorithmic performance and bias," says Panch. "The inequalities that underpin bias already exist in society, and there will probably always be some degree of bias, because society affects who has the opportunity to build algorithms and for what purposes. Addressing this will require significant action and collaboration between the private sector, governments, academia, and civil society."
That takes time. In the meantime, we need to pay attention to specific disadvantaged groups and try to "protect" them so that they have access to better care. This is often done by setting artificial thresholds in algorithms that give extra weight to these groups and less to others. This is technically difficult and unproven, but pioneering research is being done in the area.
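One simple way to give extra weight to an underrepresented group, as described above, is to weight each record inversely to its group's share of the training data. The sketch below is a minimal illustration of that idea, not the method used in any of the cited studies; the group labels and the weighting rule are hypothetical:

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each sample inversely to its group's frequency, so
    underrepresented groups contribute more to the training loss."""
    counts = Counter(groups)
    n = len(groups)
    # A group making up share p of the data gets weight 1/p,
    # so the majority group receives the smallest weight.
    return [n / counts[g] for g in groups]

# Hypothetical cohort: group "A" is heavily overrepresented.
groups = ["A"] * 8 + ["B"] * 2
weights = inverse_frequency_weights(groups)
print(weights[0], weights[-1])  # "A" samples weigh 1.25, "B" samples 5.0
```

Weights like these can typically be passed to a model's training routine (many libraries accept a per-sample weight argument); deciding how strongly to reweight is exactly the kind of value judgment, rather than purely technical choice, that Panch describes.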
"Increasing accuracy for one group of people can reduce overall accuracy. People try to make overall accuracy as high as possible, so a lot of bias creeps in there. And if you are fair to only one group, you will have problems down the road," says Mattie.
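Mattie's point can be made concrete with a small audit: compute accuracy overall and within each group, then compare. The helper below is a minimal sketch; the group names and example predictions are invented for illustration:

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Return overall accuracy plus accuracy within each group."""
    overall = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    per_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        per_group[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return overall, per_group

# A model can look strong overall while failing a minority group:
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
groups = ["A"] * 8 + ["B"] * 2
overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall, per_group)  # 0.8 overall, but 0.0 for group "B"
```

Optimizing only the headline number would leave group "B" entirely unserved, which is why disaggregated evaluation belongs in the checklists and safeguards Mattie calls for.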
This further highlights the need for broader and more diverse teams to collect, analyze, and research data. "Getting as many eyes and evaluations into the process as possible is a really good start, and eventually checklists and safeguards may be put in place along the way," says Mattie.
How can algorithmic bias be minimized at scale so that the power of AI is harnessed effectively?
Today, more medical professionals are at least aware of algorithmic bias, and many companies are taking proactive steps to promote diversity, equity, and inclusion (DEI) within their teams. "Without this work, it will be impossible to address implicit and explicit bias in the people who develop the algorithms and the data-generating processes they study," says Panch.
There are currently two approaches attempting to combat algorithmic bias in healthcare systems on an industry-wide scale.
- Aligning incentives: If researchers and other experts can prove that a data analysis is biased, affected parties can invoke the law through class-action lawsuits. This encourages private companies to change, or preemptively address, bias before it causes harm.
- Formal legislation: No such legal action has been taken yet. Current law protects certain variables by removing attributes that may lead to unfair judgments, such as race, gender, socioeconomic background, and disability. But medical algorithms need those attributes precisely so that these groups can receive appropriate care, and current legislation does not yet account for this.
Additionally, researchers continue to refine the algorithm-development process so that it not only optimizes performance but also minimizes bias. Ideally, a system of checks and balances will help catch errors more regularly and ensure that health gains are sustained over time.
“There’s no silver bullet for this,” Mattie says. “But we can take steps to minimize bias as much as possible.”
The Harvard T.H. Chan School of Public Health offers AI for Healthcare: Concepts and Applications, Innovation with AI in Healthcare, and Implementing AI in Healthcare into Clinical Practice, programs that provide foundational concepts about AI in healthcare and explore how AI can support your organization's strategy and better serve patients.