In his latest blog, Bobby Edwards, BridgeHead’s Principal Solutions Consultant for HealthStore® in North America, takes a look at AI following a recent article he read. It prompted him (pardon the pun) to explore the thorny subject of whether we can, and should, trust AI in healthcare. But he does this through a unique lens that references The Terminator and Chicken Little (as only Bobby can). You’ll have to read on to find out more.

“AI has a trust problem”

These are not my words. That credit goes to an article I read in the Boston Globe this week titled “AI-based Mammography is Here, and it Has a Trust Problem” by Katie Palmer. It’s an interesting read. Beyond some concerning statistics about the ever-shrinking pool of physicians and the growing patient population, it highlights a significant bias against computer-aided diagnostics (CAD) and artificial intelligence (AI).

In the article, one study compared a radiologist-AI paired workflow with a two-radiologist workflow, and the results differed markedly. Interestingly, patient follow-up call-backs dropped by 18% when a radiologist-AI pair flagged a case, yet rose dramatically to 57% when two radiologists flagged one. What does this tell us? That there is an apparent, innate distrust of AI-generated results!

And, of course, this immediately made me think of Skynet (yes, the fictional artificial neural network and superintelligence system that serves as the main antagonist of the Terminator film franchise) – and, for some reason, Chicken Little (yes, that children’s animated character courtesy of Walt Disney)!

So, in an ironic twist, I decided to conduct a little experiment… I asked ChatGPT to “create a story about AI in medicine in the style of Chicken Little”. Admittedly, I have tweaked the output slightly, but this is more or less what was generated from my prompt… and I think it’s interesting reading.

“The Skynet is Coming! The Skynet is Coming!”

[To get the full effect of this story, you have to imagine the following narration in Chicken Little’s voice]

Oh dear! Oh dear! Have you heard the latest, friends? The machines are coming! The algorithms are learning! The Skynet is Coming and medicine will never be the same!

It all started with a humble chatbot, you see. Just a few polite words of advice here and there. “What symptoms do you have?” it asked sweetly. “Let me tell you what Dr. Watson has to say about that,” it chirped. But now – oh, now – there are AI algorithms reading X-rays faster than radiologists, predicting strokes before they happen, and (some say) planning to take over the entire healthcare system!

Doctors beware! Nurses, hide your charts!

The AI is coming to steal your job, your stethoscope, and perhaps even your coffee break! They call it “machine learning,” but I call it “Skynet rising!” They say these AIs can see patterns we mortals can’t, but what if they see too much? What if they start diagnosing everything as “Patient: Terminated!”?

Oh, the marvels of these robot doctors! They can scan gigabytes of data in seconds, spot a malignant mole from a mile away, and even predict when the next flu season will hit. But what about compassion? What about the warm bedside manner of Patch Adams? Will these AI bots offer a reassuring pat on the shoulder when you’re nervous about your colonoscopy?

The Skynet is coming! And it’s falling hard!

Every week, there’s a new study showing how AI is better at this or faster at that. They promise to reduce errors and make healthcare more efficient – like an army of tireless nurses who never call in sick. But I can’t help but squawk and flutter. After all, wasn’t it the promise of “efficiency” that first gave us Skynet in the movies?

Will the sky actually fall?

Well, maybe not. Maybe, just maybe, these algorithms will become our allies, not our overlords. They’ll handle the boring stuff – sorting images, analyzing data, and alerting doctors to subtle signals – while we humans do what we do best: caring, comforting, and connecting.

But I’m still watching the sky, friends. I’m still listening for the whir of the robot’s gears in the hallway. Because today it’s AI-assisted charting, tomorrow it’s AI-powered diagnosis, and the day after? Who knows – maybe it will be Skynet.

Stay alert! Stay vigilant! And above all – don’t let the robots take your stethoscope without a fight.

I think ChatGPT did a pretty good job, don’t you?

Should we trust AI in healthcare?

So, full disclosure, I too have an AI trust issue. Anytime I ask AI for something, I make it a point to review it. We’ve all seen it make mistakes, just like any of us. But 90% of the time, it’s fairly accurate. In medicine, that’s not good enough! 90% accuracy still means 10% wrong – and that 10% could translate to a missed diagnosis with potentially serious consequences.

That said, the sheer volume of data an AI algorithm can process, and the level of detail it can detect, are simply unmatched by humans. I’m not disparaging radiologists – on the contrary, I have tremendous respect for their ability to detect subtle nuances in images, and I’ve witnessed it firsthand. But where AI works with the actual digital pixel values, a human works from the video representation on a monitor – and I can’t help but consider the impact of scaling and interpolation on a human’s ability to detect subtle changes.

Consider what a single MR image looks like when displayed on a 5 megapixel monitor. A 512 x 512 image has 262,144 pixels – just over a quarter of a megapixel. To fill a 5 megapixel display, that pixel count must be scaled up roughly 20-fold, and that’s a lot of interpolated data! An overmagnified image can even fail a quality assurance test. Consider the implications… your scanners are taken offline, resulting in lost revenue and, most importantly, an impact on your patients! And if you think that doesn’t happen, I’ve seen it… and I’ve had to demonstrate it to the person validating an MRI against a routine phantom test.
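To put rough numbers on this, here’s a quick back-of-the-envelope sketch in Python. The 2560 x 2048 resolution is my assumption for a typical 5 megapixel diagnostic display (actual panels vary); the point is simply to show how much of a full-screen image is software-generated:

```python
# Back-of-the-envelope: how much of a full-screen MR image on a 5 MP
# monitor is interpolated rather than acquired? (2560 x 2048 is an
# assumed 5 MP panel resolution; actual displays vary.)

acquired_px = 512 * 512        # native MR matrix: 262,144 pixels
display_px = 2560 * 2048       # assumed 5 MP monitor: 5,242,880 pixels

scale = display_px / acquired_px            # ~20x more display pixels
interpolated_px = display_px - acquired_px  # pixels the software invents

print(f"Pixel-count scale factor: {scale:.1f}x")              # 20.0x
print(f"Interpolated pixels: {interpolated_px:,} "
      f"({interpolated_px / display_px:.0%} of the screen)")  # 95%
```

In other words, when a single 512 x 512 image fills a 5 megapixel display, roughly 95% of the pixels the viewer sees were generated by the display software rather than the scanner – which is exactly why an algorithm working on the original pixel values has an edge.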

Now, to be clear, this doesn’t mean I want AI making diagnoses on its own. What I’m getting at is this: AI is a powerful tool. Assistive AI tools in medicine are tightly controlled and highly specific in their functions, but their results still need to be verified by a human. Like any specialized tool, when used properly, AI can save time and improve outcomes.

Reconciling the AI trust issue

I spoke to a doctor recently who said their biggest concern is the potential for users to rubber-stamp AI findings without proper verification. I agree completely – it’s a real concern. AI should absolutely become part of the diagnostic process but, in my opinion, it must be overseen by a human. With the decreasing number of physicians, anything that improves diagnostic accuracy and frees up more time for patient care or personal well-being benefits everyone.

Spoiler alert: as Chicken Little found, the sky was not in fact falling – everything turned out just fine. When used properly, AI can be a game-changer in healthcare (as many will already attest), with the potential to streamline imaging operations and free radiologists to focus on complex cases – all in the service of better patient outcomes.

[Photo: Bobby Edwards, Principal Solutions Consultant – HealthStore®, BridgeHead Software]


Bobby Edwards joined BridgeHead Software in October 2011 and brings more than 25 years of experience in healthcare and data management. In his current role as Principal Solutions Consultant – HealthStore, he actively engages with hospitals, listens to their unique challenges, and devises innovative solutions to complex data management issues. His goal is to enhance healthcare delivery and positively impact people’s lives through his work.


Before joining BridgeHead Software, Bobby held senior positions at prominent technology and development organizations, including eMed Technology and Iron Mountain.


Interested in learning how a future-proof Clinical Data Repository can help pave the way for a smarter, data-driven approach to care?