Informed i’s Weekly Business Insights
FREE weekly newsletter | sharing knowledge briefs from TOP TEN BUSINESS MAGAZINES, to keep you ‘relevant’…| Since 2017 | Week 449 | April 24-30, 2026 | Archive

Health-care AI is here. We don’t know if it actually helps patients.
By Jessica Hamzelou | MIT Technology Review | April 24, 2026
2 key takeaways from the article
- AI is being used, increasingly, in hospitals. A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients? We don’t yet have a good answer. That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto argue in a paper published in the journal Nature Medicine this week.
- The problem is that many providers aren’t rigorously assessing how well they actually work. But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest x-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately, what will this mean for those patients? Like many other areas AI is shaping, we have more questions than answers.
(Copyright lies with the publisher)
Topics: AI and Health Care
Extractive Summary of the Article
The author doesn’t need to tell you that AI is everywhere, or that it is being used, increasingly, in hospitals. Doctors are using AI to help them with note-taking. AI-based tools are trawling through patient records, flagging people who may require certain support or treatments. They are also used to interpret medical exam results and x-rays.
A growing number of studies suggest that many of these tools can deliver accurate results. But there’s a bigger question here: Does using them actually translate into better health outcomes for patients? We don’t yet have a good answer.
That’s what Jenna Wiens, a computer scientist at the University of Michigan, and Anna Goldenberg of the University of Toronto argue in a paper published in the journal Nature Medicine this week. Over the last few years, says Wiens, it’s as though “a switch flipped.” Health-care providers not only appear much more interested in the promise of these technologies but have also begun rapidly deploying them.
The problem is that many providers aren’t rigorously assessing how well they actually work. Take “ambient AI” tools, for example. Also known as AI scribes, they “listen” to conversations between doctors and patients and go on to transcribe and summarize them. Multiple tools are available, and they are already being widely adopted by health-care providers. That’s all well and good. But what about patient health outcomes? “[Researchers] have evaluated provider or clinician and patient satisfaction, but not really how these tools are affecting clinical decision-making,” says Wiens. “We just don’t know.” The same holds true for other AI-based technologies used in health-care settings. Some are used to predict patients’ health trajectories, others to recommend treatments. They are designed to make health care more effective and efficient.
But even a tool that is “accurate” won’t necessarily improve health outcomes. AI might speed up the interpretation of a chest x-ray, for example. But how much will a doctor rely on its analysis? How will that tool affect the way a doctor interacts with patients or recommends treatment? And ultimately, what will this mean for those patients? The answers to those questions might vary between hospitals or departments and could depend on clinical workflows, says Wiens. They might also differ between doctors at various stages of their careers. “We like things that save us time, but we have to think about the unintended consequences of this,” she says.
“I do believe in the potential of AI to really improve clinical care,” says Wiens, who stresses that she doesn’t want to stop the adoption of AI tools in health care. She just wants more information about how they are affecting people. “I have to believe that in the future it’s not all AI or no AI,” she says. “It’s somewhere in between.”