Saturday, August 30, 2025
Medicine, AI and You – Risks and Benefits
Studies have shown that over-reliance on AI – increasingly blind reliance on AI solutions – produces inferior results, while the combination of AI with intelligent human interaction shows how well the process can work. But remember, AI is only as good as the algorithms and data fed into the system, plus whatever that system is allowed to access “out there,” if permitted to explore. Adding to this complexity, AI can be very self-directing, which leads many to conclude that we do not completely understand the universe we are creating. It’s more than the old “garbage in, garbage out”: if you are not completely aware of what goes in, you cannot fully gauge the risks of generative AI – risks that are particularly worrying in medical diagnosis and treatment.
But for the medical community, living without AI can generate second-rate medical practices. AI’s ability to find pathways to new treatments and vaccines – conducting virtual experiments across thousands, even millions, of models without first risking those solutions in the real world – is amazing. Its ability to analyze complex data (e.g., CT scans, MRI results, even simple x-rays) and compare the input to millions of comparable screenings where specific diseases or anomalies have been identified (both in living scans and post-mortem analyses) allows the tiniest trace of an identifying characteristic, one that even a seasoned radiologist might easily miss, to be caught at the earliest, most treatable phase. In short, AI can save a lot of time… avoid dead-end experiments and “possible, best guess” treatments that go nowhere… and hence save lives. Robotic surgery (I’ve had several) increases accuracy, reduces the size of incisions and allows for instant assessment of observations along the way… and AI only makes those processes safer and more efficient.
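To make that “compare against a labeled library” idea concrete, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: the random feature vectors, the labels, and the classify_scan function are stand-ins, and a nearest-neighbor vote is a crude proxy for the trained deep-learning models real diagnostic systems actually use.

```python
import numpy as np

# Toy illustration only. Real systems learn from actual imaging data;
# here the "scans," labels, and classifier are all synthetic stand-ins.

rng = np.random.default_rng(0)

# Pretend each scan has been reduced to a 64-number "feature vector."
# The reference library holds scans already labeled by pathologists.
n_reference = 10_000
features = rng.normal(size=(n_reference, 64))
labels = rng.choice(["healthy", "anomaly"], size=n_reference, p=[0.95, 0.05])

def classify_scan(new_scan: np.ndarray, k: int = 25) -> str:
    """Label a new scan by majority vote among its k most similar
    reference scans (a crude stand-in for a trained model)."""
    distances = np.linalg.norm(features - new_scan, axis=1)
    nearest = labels[np.argsort(distances)[:k]]
    votes = (nearest == "anomaly").sum()
    return "flag for radiologist review" if votes > k // 2 else "no flag"

print(classify_scan(rng.normal(size=64)))
```

Even in this toy, the “garbage in, garbage out” point from above holds: if the reference library’s labels are wrong or unrepresentative, the vote is confidently wrong, which is exactly why the human sign-off discussed later in this piece matters.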
The aggregation of data, the creation of “basic” software that can teach itself from there, the sophisticated computing power needed to process it all, and the human training required to use this technology – maximizing both accuracy and treatment solutions – are very expensive. Tech players in this universe are spending billions of dollars to make it all work. That only the wildly well funded can buy and/or create such technology puts our future in the hands of the billionaires and mega-corporations able to afford this escalating investment. Combine this with the GOP agenda of defunding government-supported research – pushing our future into the hands of profit-seekers and those attempting to define and dominate the marketplace, often in defiance of antitrust laws – and we should all be worried. As Trump’s Big Beautiful Bill attests, our current governmental vectors favor the mega-rich (individuals and corporations) at the expense of everyone else. When this applies to AI and medicine, we should be suspicious.
The mergers and acquisitions in the medical AI space track the unbelievably massive valuations and dollars changing hands among the biggest of the big boyz. Notably, when smaller companies create more effective interfaces, software and database aggregation, the behemoths in medical AI step in to buy them, often taking these new capabilities out of the competitive marketplace.
One of the primary writers in this space is Ian Krietzberg, who writes for Puck.com. In his July 16th contribution, he notes that not only are sophisticated researchers embracing AI, but too many amateurs are engaging in self-diagnosis – a practice destined to increase as Medicaid and other programs face cutbacks, as small clinics and hospitals close, and as medical costs continue to skyrocket given our current political push toward greater corporate profitability:
“Nearly half of clinicians are now using A.I. for their work. Patients are turning to ChatGPT to self-diagnose mysterious ailments. And everyone from the chief innovation officer of Boston Children’s Hospital to R.F.K. Jr. is excited about the revolution unfolding in plain sight. What could go wrong? … Long before ChatGPT infiltrated classrooms and became an obsession at cocktail parties, Boston Children’s Hospital embarked on what Dr. John Brownstein, its chief innovation officer, described as an ‘A.I. journey.’ For years, Brownstein told me, the hospital had been using machine learning in data-rich environments—like radiology, pathology, or the intensive care unit—to generate ‘predictions’ about patient outcomes. Then came the generative A.I. explosion. Now, Brownstein said, his team is anticipating that A.I. is ‘going to be part of the fabric of almost all the technologies we use in the hospital.’ For many people in the A.I. field, the integration with medicine represents a potential holy grail.
“Obviously, these technologies are still error-prone, and the stakes are much higher when you’re incorporating A.I. into potentially life-or-death healthcare decisions, rather than, say, enabling Gemini in your Gmail. But physicians are finding early success with A.I. tools, and the rate of adoption is steadily ticking up: According to Elsevier’s fourth-annual ‘Clinician of the Future’ report, which was released today [7/16], 48 percent of clinicians had used A.I. for work in 2025, nearly double the 26 percent reported the year before, and more than triple the figure from the year before that. The 2,000 or so physicians who responded to the survey described their primary use cases for A.I. as identifying drug interactions, analyzing medical images, and providing a patient’s medication summary.
“This rapid adoption curve, Brownstein said, can be attributed in part to the industry’s seeming openness to this technology. At Boston Children’s, 30 percent of the hospital’s workforce has already started using A.I., although mostly via ‘low-risk’ applications, like administrative tools. The hospital was also one of the earlier adopters of (controversial) ambient listening tools, which use A.I. to auto-transcribe patient-doctor visits, and has partnered with OpenAI to advance their work on the diagnosis of rare diseases. ‘We’ve been very careful about the deployment of these tools, recognizing that some come with more risk than others,’ Brownstein said, adding that the hospital has also started using physician-facing tools, at least in part, for care guidance”—a step toward wide-scale, predictive, personalized healthcare.
“Still, plenty of doctors remain cautious. In Elsevier’s 2024 survey, 85 percent of clinicians said that A.I. could cause critical errors, and 93 percent were worried about misinformation. In this year’s survey, only 40 percent of clinicians claimed that A.I. could be trusted to assist with clinical decision-making, and only 30 percent said their institutions were providing adequate training—an issue that Brownstein acknowledged as an impediment to adoption. ‘At the end of the day, whoever’s using them has to sign off and take responsibility for whatever the output is,’ he told me. ‘It still resides with the clinician to provide that consideration. Yes, there’s a future world where a lot of patients are going to turn directly to these tools, but that’s not where we are.’”
In the end, there is a pile of medical, political and economic risks that appear unavoidable. Compounding this maze of issues are the raw complexity of AI, the “unknowability quotient,” and the proclivity of most Americans to outsource their opinions, when they do not understand the variables, to politicians who may be equally uninformed but are most willing to use the confusion to manipulate their constituents.
I’m Peter Dekom, and Americans do not need to master the fine technical details of AI to make informed decisions; there is enough basic information available to demystify the process to an understandable level.