Artificial intelligence (AI) is advancing rapidly and will likely become an important tool to support clinical care. Early research indicates AI algorithms can accurately detect melanomas and predict future breast cancers.

However, before AI becomes routine in the clinic, the challenge of algorithmic bias must be addressed. AI systems may carry inherent biases that lead to discrimination and privacy breaches. They may also make decisions without the necessary human oversight.


An example of AI’s potentially harmful effects comes from an international project using AI to develop breakthrough treatments. In an experiment, the team inverted their “good” AI model to generate chemical warfare agents, many of them more toxic than existing agents. Though an extreme case, it’s a wake-up call to evaluate AI’s ethical consequences, both known and unknowable.

In medicine we handle private data and life-altering choices, so robust AI ethics frameworks are crucial. The Australian Epilepsy Project seeks to improve lives and expand care access. Using advanced brain imaging and data from thousands with epilepsy, we plan for AI to answer currently unanswerable questions about seizures, medicines, and surgery.


The main worry is that AI is advancing rapidly while regulatory oversight lags. That’s why we recently established an ethical framework for using AI as a clinical aid. It intends to ensure our AI is open, safe, trustworthy, inclusive and fair.

So how do we implement ethical medical AI to reduce bias and keep humans in control of algorithms? The principle of “garbage in, garbage out” applies: small or biased datasets produce biased, non-replicable algorithms.
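To make “garbage in, garbage out” concrete, here is a minimal sketch using entirely hypothetical data. A naive model trained on a sample that under-represents one group simply reproduces the skew of its training set, performing well for the majority group and systematically failing the minority one:

```python
# Toy illustration with hypothetical data: a frequency-based "model"
# trained on a skewed sample reproduces that skew in its predictions.
from collections import Counter

# Hypothetical training set: condition X presents differently in
# group A vs group B, but group B is badly under-sampled.
train = [
    ("A", "typical"), ("A", "typical"), ("A", "typical"), ("A", "typical"),
    ("A", "typical"), ("A", "typical"), ("A", "typical"), ("A", "typical"),
    ("B", "atypical"), ("B", "atypical"),
]

# "Model": always predict the presentation most common in training,
# ignoring group membership -- a common failure mode with skewed data.
majority = Counter(presentation for _, presentation in train).most_common(1)[0][0]

def predict(group):
    return majority  # same answer regardless of group

print(predict("A"))  # "typical"  - correct for the over-sampled group A
print(predict("B"))  # "typical"  - wrong: group B presents atypically
```

The point is not the trivial model but the data: no amount of algorithmic sophistication downstream can recover information that was never collected, which is why diverse, representative datasets matter.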

Bias abounds in popular AI systems like ChatGPT and Stable Diffusion. Simple prompts generate stereotyped images: prompting for an image of a doctor produces mostly male faces, even though roughly half of doctors are female.

Solutions to bias aren’t simple. Promoting health equality and diversity in studies helps combat medical AI bias. The FDA’s proposed mandate for diverse clinical trials moves towards less biased, community-based research.

AI needs substantial data, so more funding mechanisms are crucial for gathering appropriate clinical data. We should also understand how AI reaches conclusions, referred to as “explainability”. Humans and machines must collaborate for optimal results, so we prefer “augmented” over “artificial” intelligence. Algorithms should aid, not control, decision-making.
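One simple form of explainability, sketched below with hypothetical features and weights, is a model whose output can be decomposed into per-input contributions, so a clinician can see which factors pushed a score up or down rather than receiving an opaque number:

```python
# Minimal sketch of explainability for a hypothetical linear risk score.
# Feature names and weights are illustrative only, not a clinical model.
weights = {"age": 0.02, "prior_seizures": 0.5, "med_adherence": -0.3}

def explain(patient):
    """Return the overall score and each feature's contribution to it."""
    contributions = {feature: weights[feature] * value
                     for feature, value in patient.items()}
    return sum(contributions.values()), contributions

score, why = explain({"age": 40, "prior_seizures": 3, "med_adherence": 1})
print(score)  # 2.0
print(why)    # e.g. prior_seizures contributed +1.5, adherence -0.3
```

Deep-learning models need heavier machinery (such as post-hoc attribution methods) to approach this level of transparency, which is one reason explainability must be designed in from the start rather than bolted on.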

In addition to explainable algorithms, we support open and transparent science. Researchers should publish AI model details to enhance reproducibility.

