This image shows two labelled swab tubes with red caps and a glass jar full of blue and white pills, floating against a grey background and refracted in different ways by a fragmented glass grid. The grid is a visual metaphor for the way that new artificial intelligence (AI) and machine learning technologies can extract and analyse medical data in innovative ways. Some of the grid squares reveal graphical interpretations of the objects that exceed the capabilities of human vision, indicating how cutting-edge technologies can augment traditional human understandings of complex phenomena. A neural network diagram is overlaid, familiarising the viewer with the formal architecture of AI systems.

I agree with this article in The Guardian by Charlotte Blease, a researcher at Harvard Medical School and author of the forthcoming book Dr Bot: Why Doctors Can Fail Us – and How AI Could Save Lives. Her point is that while AI might not be perfect, neither are doctors, and it can certainly help speed up and improve diagnoses.

Personally, I’m now 7.5 months into trying to get a diagnosis for symptoms I started experiencing on January 15th of this year. It’s the nature of medicine to run tests and rule things out, but a combination of NHS resourcing and (some) health professionals' attitudes has made the experience sub-optimal.

For example, when presenting at A&E thinking I was having a heart attack, I had an echocardiogram followed by a doctor taking me into a side room and somewhat aggressively asking me why I was there. When a GP found out that I’m vegetarian, he suggested I start eating meat — even though it had nothing to do with the issue at hand. I’ve been waiting almost six weeks for a urine test result.

So, I’ve been supplementing the information I get from health professionals and via the NHS app by asking LLMs (usually Perplexity or Lumo) what my test results might mean, and what else might be causing the symptoms. It’s given me a list of things to ask doctors, nurses, and those performing tests, and it’s helped me know what kinds of things I should be avoiding — other medications, supplements, food, drinks, activities, etc.

Obviously, this needs to be done with substantial safeguards and guardrails in place. But to refuse to use technology that may prove useful at scale? I think that’s irresponsible.

Given that patient care is medicine’s core purpose, the question is who, or what, is best placed to deliver it? AI may still spark suspicion, but research increasingly shows how it could help fix some of the most persistent problems and overlooked failures – from misdiagnosis and error to unequal access to care.

As patients, each of us will face at least one diagnostic error in our lifetimes. In England, conservative estimates suggest that about 5% of primary care visits result in a failure to properly diagnose, putting millions of patients in danger. In the US, diagnostic errors cause death or permanent injury to almost 800,000 people annually. Misdiagnosis is a greater risk if you’re among the one in 10 people worldwide with a rare disease.

[…]

Medical knowledge also moves faster than doctors can keep up. By graduation, half of what medical students learn is already outdated. It takes an average of 17 years for research to reach clinical practice, and with a new biomedical article published every 39 seconds, even skimming the abstracts would take about 22 hours a day. There are more than 7,000 rare diseases, with 250 more identified each year.

In contrast, AI devours medical data at lightning speed, 24/7, with no sleep and no bathroom breaks. Where doctors vary in unwanted ways, AI is consistent. And while these tools make errors too, it would be churlish to deny how impressive the latest models are, with some studies showing they vastly outperform human doctors in clinical reasoning, including for complex medical conditions.

Source: The Guardian

Image: Alan Warburton, Better Images of AI