AI Just Diagnosed You—Now What? The Rise of Superhuman Doctors Made of Code

A New Kind of Second Opinion

A boy in the U.S. suffered chronic pain for three years. He saw 17 different doctors and still had no diagnosis. In desperation, his mother typed his symptoms into an AI chatbot. The reply came in seconds: spina bifida occulta – a rare spinal condition. A specialist later confirmed this was the correct diagnosis nihrecord.nih.gov. It sounds like science fiction: an artificial intelligence (AI) succeeding where human experts failed. But this real-world case is a striking sign of how AI is beginning to transform medical diagnostics. From reading X-rays and CT scans to spotting elusive rare diseases, these “superhuman doctors made of code” are rapidly becoming trusted partners in healthcare.

Healthcare AI is advancing on multiple fronts at once. Powerful algorithms can now flag tumors on medical images that radiologists overlook nihrecord.nih.gov. Chatbots armed with vast medical knowledge can draft answers to patient questions that are clearer and even more empathetic than those from physicians, according to one study health.harvard.edu. In a hospital in Taiwan, an AI system monitoring routine heart tests saved lives by alerting doctors to subtle danger signs that would normally be missed heart.org heart.org. Each month brings new headlines of algorithms matching or beating human clinicians at diagnostic tasks once thought uniquely human. The current state of AI in medicine is both exciting and nuanced: remarkable successes tempered by cautionary tales and practical challenges. Let’s explore how AI is diagnosing illness today, the latest breakthroughs pushing it forward, and what it all means for patients, doctors, and the future of healthcare.

AI Enters the Exam Room: From Radiology to Dermatology

In 2025, AI has quietly woven itself into many areas of medicine – especially fields that rely on analyzing images and patterns. Radiology is a prime example. Every year, radiologists in the U.S. interpret over 40 million mammograms. Increasingly, they’re getting help from algorithms trained to detect cancers. The Food and Drug Administration (FDA) has cleared several AI systems to analyze mammograms, and large radiology networks are deploying them at scale. For instance, RadNet, a company with 400+ imaging centers, runs its AI on 600,000 mammograms annually and recently acquired another AI firm whose tools reach 17% of U.S. radiology practices statnews.com. Another vendor, Lunit, reports its AI was used in over one million breast exams worldwide last year statnews.com. These tools act as tireless second readers – highlighting suspicious areas for doctors to review. Early trials in Europe suggest some AI can spot more breast tumors than radiologists alone, prompting hopes of improved cancer detection statnews.com. The UK’s National Health Service is even testing whether an AI can safely replace one of the two human readers traditionally required for mammogram screening, to ease workforce shortages thelancet.com statnews.com.

Other imaging fields tell a similar story. In pathology, where doctors examine tissue slides for cancer and diseases, AI is accelerating a shift to digital diagnostics. PathAI – a Boston-based pioneer in AI-powered pathology – entered a major partnership with Quest Diagnostics in 2024. Quest is acquiring PathAI’s laboratory and integrating AI across its national network of pathology labs fiercebiotech.com fiercebiotech.com. “This transaction will enable Quest to dramatically ramp our capabilities in AI and digital pathology,” said a Quest executive, calling it a “major turning point for digital pathology adoption in the U.S.” fiercebiotech.com fiercebiotech.com. With AI, digitized biopsy slides can be analyzed in seconds for cancer markers, allowing pathologists to catch more details and even get remote second opinions. Similarly, in ophthalmology, AI systems are screening for eye diseases like diabetic retinopathy by examining retinal photos – often in primary care clinics or via smartphone apps – a task once requiring specialist doctors. The first autonomous AI diagnostic device approved by the FDA was for diabetic eye screening, and such tools are now used in clinics to prevent blindness by catching early disease. In dermatology, algorithms can classify skin lesions from photos with impressive accuracy. Tech giants and startups alike have rolled out dermatology apps: Google, for example, piloted an AI “Derm Assist” tool that can recognize hundreds of skin conditions from images, earning a CE medical device mark in Europe forbes.com onlinelibrary.wiley.com. While regulators have been cautious about direct-to-consumer skin diagnosis, dermatologists are starting to use these assistants to improve telemedicine triage of rashes and moles.

Beneath the hype, many of these AI systems are narrow specialists – excellent at a specific task under certain conditions. An AI might read a chest X-ray or flag a pneumonia on a CT scan, but it doesn’t “think” broadly about the patient’s overall condition. Still, their performance in narrow domains is reaching or exceeding human levels. Dr. Eric Topol, a leading cardiologist and AI scholar, notes that “every type of medical scan could get an AI interpretation that would be quite good, comparable and complementary to those from clinicians.” nihrecord.nih.gov In fact, machine vision is revealing new diagnostic clues that doctors never knew to look for. “Machine eyes will see things that humans will never see. It’s actually quite extraordinary,” Topol said nihrecord.nih.gov. For example, researchers found that AI analysis of routine retinal photographs can predict a patient’s risk of heart disease, kidney disease, and even neurological illnesses – simply by detecting subtle vascular patterns in the eye nihrecord.nih.gov. “Who would’ve ever thought that through the retina – the gateway to the body – we could detect all of these things,” Topol marveled nihrecord.nih.gov. AI is turning out to be a super-observant diagnostician, finding signals in medical data that human doctors have overlooked.

When Algorithms Outsmart the Experts

Recent developments in 2024–2025 have supercharged the notion that AI might outperform human doctors in some diagnostic tasks. Nowhere was this more dramatically shown than in a study published on a preprint server by a team at Microsoft. They introduced an experimental AI system called “AI Diagnostic Orchestrator (MAI-DxO)”, which tackles complex medical cases by using multiple AI agents that debate and reason through a case – much like a panel of expert doctors brainstorming a diagnosis. Microsoft tested MAI-DxO on nearly 300 challenging clinical case studies from the New England Journal of Medicine and compared its accuracy to 21 experienced physicians. The result was astounding: the AI got the right diagnosis 85% of the time, whereas the doctors did so only 20% of the time time.com time.com. In other words, the AI was four times more accurate than the human physicians on these tough cases. “The four-fold increase in accuracy was more than previous studies have shown,” observed Dr. Eric Topol, who was not involved in the work but reviewed the results. He called the jump in performance “really big” time.com. Just as impressively, the AI reached correct diagnoses while ordering 20% fewer tests on average, potentially reducing costs time.com.
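
For the technically curious, the "panel of AI doctors" idea can be sketched in a few lines of code. The example below is a simplified, hypothetical orchestration loop: the role prompts and the query_llm placeholder are illustrative assumptions, not Microsoft's actual MAI-DxO design.

```python
# Hypothetical sketch of a multi-agent diagnostic orchestrator.
# Roles, prompts, and the debate loop are illustrative assumptions,
# not Microsoft's actual MAI-DxO design.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to whatever LLM backend is available."""
    return f"[model response to: {prompt[:60]}...]"

ROLES = {
    "hypothesis_agent": "Propose the most likely diagnoses for this case.",
    "test_agent": "Suggest the single most informative next test, and explain why.",
    "skeptic_agent": "Argue against the current leading diagnosis.",
}

def orchestrate(case_summary: str, rounds: int = 3) -> str:
    transcript = [f"CASE: {case_summary}"]
    for _ in range(rounds):
        for role, instruction in ROLES.items():
            context = "\n".join(transcript)
            reply = query_llm(f"{instruction}\n\nDiscussion so far:\n{context}")
            transcript.append(f"{role}: {reply}")
    # A final "chair" agent reads the whole debate and commits to one answer.
    return query_llm("Given this discussion, state the final diagnosis:\n"
                     + "\n".join(transcript))

if __name__ == "__main__":
    print(orchestrate("3 weeks of fever, night sweats, and a new heart murmur"))
```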

Microsoft’s AI isn’t the only one hitting diagnostic milestones. Google’s DeepMind unit has been working on its own AI doctor system. In early tests, a prototype of Google’s conversational diagnostic AI outperformed physicians in diagnosing hypothetical primary care cases, scoring ~59% accuracy vs. 33% for human doctors in one trial time.com. And large language models (LLMs) like GPT-4 have shown they can pass medical licensing exams and answer clinical questions at a high level. In a 2024 clinical trial published in JAMA, researchers found that a GPT-4 based AI could diagnose difficult medical cases more accurately than physicians working alone news-medical.net news-medical.net. The study pitted groups of doctors against the AI on a set of challenging patient scenarios. Remarkably, the AI by itself achieved higher diagnostic accuracy than the physicians, even though those doctors had access to medical resources and, in some groups, the AI’s assistance news-medical.net. The takeaway was not that AI should replace doctors, but that current-generation LLMs have reached a level where – under certain conditions – they can simulate expert diagnostic reasoning very effectively news-medical.net news-medical.net.

These breakthroughs have made headlines, yet they come with important caveats. Microsoft’s impressive MAI-DxO system, for example, is still a research project not deployed clinically time.com. Its dazzling performance was achieved by combining multiple AI models and carefully structuring the problem – something not easily replicated in a real emergency room without further testing time.com. And the JAMA study noted that giving doctors access to an AI tool didn’t improve their performance; interestingly, the AI did best on its own news-medical.net news-medical.net. This suggests we still need to learn how doctors and AI can work together optimally. Indeed, a Harvard-led study in 2024 found that when radiologists use AI assistance for reading X-rays, some doctors get better but others get worse hms.harvard.edu hms.harvard.edu. “We find that different radiologists react differently to AI assistance – some are helped while others are hurt by it,” said Dr. Pranav Rajpurkar, the study’s co-senior author hms.harvard.edu. Less experienced radiologists didn’t uniformly benefit from AI, and even seasoned doctors were sometimes misled by AI suggestions. This variability underscores that AI’s superhuman abilities can amplify human performance or errors, depending on how it’s applied. As one researcher put it, we need “carefully calibrated approaches” so that these tools boost clinicians rather than inadvertently undermine them hms.harvard.edu hms.harvard.edu.

Nonetheless, the trajectory of improvement is clear. AI diagnostics are rapidly moving from narrow tasks to broader clinical reasoning. “Today, these models are having fluent conversations at very high quality, asking the right questions…suggesting the right tests at the right time,” noted Mustafa Suleyman, CEO of Microsoft AI, referring to the new generation of medical AI like MAI-DxO time.com time.com. His enthusiasm comes with a bold vision: “It gives us a clear line of sight to making the very best expert diagnostics available to everybody in the world at an unbelievably affordable price point.” time.com In other words, if an AI doctor can achieve top-tier diagnostic skill and run cheaply in the cloud, it could become like a universally accessible “Dr. House” – solving medical mysteries for patients anywhere. That vision may still be on the horizon, but the recent breakthroughs of 2024–25 have brought it significantly closer.

New Tools of the Trade: Meet the AI Doctor’s Kit

The rise of AI in diagnostics isn’t due to a single magic machine, but a convergence of tools and platforms each tackling different medical puzzles. It’s worth getting to know some of the major players:

  • Medical LLMs (Chatbot Doctors): One category is large language models tuned for medicine. Google’s Med-PaLM 2 and OpenAI’s various GPT-4-based medical models (often dubbed “Med-GPT”) are examples. These are AI chatbots that have digested medical textbooks, journal papers, and clinical guidelines. They can answer questions posed by doctors or patients, summarize medical literature, and even suggest possible diagnoses from a description of symptoms. Google’s Med-PaLM, for instance, was trained on a blend of medical exam questions and consumer health queries. In testing, it achieved physician-level scores on medical exam benchmarks pharmaphorum.com pharmaphorum.com. It’s designed to generate “safe and helpful answers” for healthcare professionals and patients alike pharmaphorum.com pharmaphorum.com. However, these chatbots still make mistakes. An evaluation found Med-PaLM sometimes retrieved incorrect facts or reasoned wrongly more often than human doctors would pharmaphorum.com pharmaphorum.com. The upside is that with continued refinement – such as better prompt tuning and the use of retrieval tools – their accuracy is improving rapidly pharmaphorum.com. Microsoft, meanwhile, is exploring “chain-of-thought” LLMs (like the MAI-DxO) that break a problem into steps and even use multiple AI agents that consult each other, mimicking how a team of doctors might confer crescendo.ai crescendo.ai. These generative AI tools are not yet autonomous diagnosticians on their own, but they are already being deployed in pilot programs to support doctors with decision-making and paperwork (drafting notes, letters, etc.). For example, clinicians at the Mayo Clinic have tested GPT-based assistants to summarize patient visits and draft responses to patient messages, saving significant time on documentation epicshare.org epicshare.org.
  • Imaging AI (Eyes that never blink): In radiology, numerous specialized AI platforms have emerged. Aidoc and Viz.ai are two leading examples already in use at hospitals. Aidoc’s software scans medical images (like CT scans) in real time and alerts radiologists if it finds a suspected abnormality – say a brain bleed or pulmonary embolism – ensuring urgent cases aren’t overlooked in a busy ER. Viz.ai similarly analyzes CT scans for stroke, flagging images of blocked blood vessels so that patients can get faster treatment. These tools have earned FDA clearance and are being adopted as “safety nets” in emergency and trauma care. Annalise.ai is another platform (used in the UK and elsewhere) that can detect over 100 findings on chest X-rays – from lung nodules to broken ribs – acting as an ever-vigilant radiologist’s assistant annalise.ai theguardian.com. In pathology, we have Paige AI and PathAI developing algorithms that can grade tumors on slides or find tiny cancer foci that a microscope exam might miss. Dermatology AIs – such as algorithms by Stanford researchers or Google’s Derm Assist – can classify skin lesion images as accurately as dermatologists for the most common cancers forbes.com onlinelibrary.wiley.com. There are even smartphone apps that allow a user to photograph a mole and get an instant AI risk assessment (though regulators urge caution – these apps are not perfect and a doctor’s confirmation is still a must). A minimal sketch of this flag-and-prioritize pattern appears just after this list.
  • Specialty AI devices: Some AI tools are essentially medical devices with AI brains. In ophthalmology, for example, IDx-DR was the first autonomous AI approved to diagnose diabetic retinopathy from an eye scan without a human specialist research.google. Now, multiple systems can examine retinal images and output a diagnosis on the spot, which is invaluable in clinics where an ophthalmologist isn’t present. In cardiology, a compelling new tool is an AI-enhanced ECG (electrocardiogram). Researchers in Taiwan developed an AI that analyzes the squiggly lines of an ECG to predict if a patient is at high risk of dying in the next month heart.org heart.org. When this AI was tested in a clinical trial with 16,000 patients, doctors who received its alerts were able to intervene and reduce mortality by 31% among high-risk patients heart.org heart.org. The system caught subtle signs of trouble (like early heart failure or arrhythmias) that would not be obvious to a human reading the same ECG. That AI is now being deployed in 14 hospitals heart.org. We’re also seeing AI in medical devices like smart stethoscopes that can detect heart murmurs or algorithms that interpret ultrasound scans on the fly.
  • Multi-modal “AI Doctors”: The holy grail is an AI that can synthesize different data – images, lab results, patient history – the way a physician does. Early steps in this direction are being taken by companies like Google’s DeepMind with a system code-named Med-PaLM Multimodal or “Med-Gemini.” In late 2024, Google announced that its next-gen model can handle not just text but also medical images and even biosignals research.google blog.google. It showed this AI chest X-rays or retinal scans combined with patient questions, and the AI provided answers and recommendations, demonstrating a capacity to both see and talk. This points toward a future “AI physician assistant” that you could feed a mix of clinical data and it would output an assessment. Likewise, startups are developing AI that listens to doctor-patient conversations (audio) while looking at the patient’s medical record (text) and then produces suggestions or fills out paperwork. It’s a mix of speech recognition, language understanding, and medical knowledge – but the pieces are coming together.
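
As mentioned in the imaging entry above, most of these radiology tools follow the same basic flag-and-prioritize pattern: the model scores each incoming scan, anything above a tuned threshold fires an alert, and flagged studies jump to the top of the reading worklist. The sketch below is a generic illustration of that workflow; the score_scan placeholder, the threshold value, and the data fields are assumptions, not any vendor's actual interface.

```python
# Generic sketch of an AI "second reader" triage loop; field names,
# threshold, and the score_scan placeholder are illustrative, not a vendor API.
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    priority: float                      # lower value sorts first
    accession: str = field(compare=False)
    note: str = field(compare=False, default="")

def score_scan(pixels) -> float:
    """Stand-in for an imaging model returning P(critical finding)."""
    return 0.0  # plug a real model in here

ALERT_THRESHOLD = 0.8                    # illustrative; real systems tune this per site

def triage(new_studies, worklist):
    for accession, pixels in new_studies:
        p = score_scan(pixels)
        if p >= ALERT_THRESHOLD:
            # Urgent: notify the on-call radiologist and jump the queue.
            worklist.append(Study(priority=-p, accession=accession,
                                  note=f"suspected critical finding (p={p:.2f})"))
        else:
            worklist.append(Study(priority=1.0 - p, accession=accession))
    worklist.sort()                      # AI-flagged cases float to the top
    return worklist
```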

From DeepMind’s Med-PaLM to PathAI’s pathology platform to Qure.ai’s chest X-ray tool, these technologies form an expanding kit of AI diagnostic instruments. Importantly, most of them are designed to assist, not replace, the human clinician. They often function in the background – scanning images or data and highlighting issues – while the doctor remains the decision-maker who interprets the AI’s findings in context. As one radiologist quipped, AI won’t take your job, but a radiologist using AI might take the job of one who doesn’t. This sentiment was echoed by Dr. Jesse Ehrenfeld, President of the American Medical Association, who said in 2024: “AI is not going to replace doctors – but doctors who use AI will replace those who don’t.” ama-assn.org. In other words, proficiency with these new AI tools is becoming part of the modern medical bag.

Successes and Setbacks: Real-World Outcomes

For all the promise of AI in diagnosis, how is it actually performing in real hospitals and clinics? There have been some remarkable success stories. We’ve already mentioned the Taiwan trial where an AI-guided ECG alert prevented deaths heart.org. Another concrete win is in breast cancer screening: a large study in Sweden with over 80,000 women found that radiologists aided by AI detected more cancers and did so faster, compared to either alone nihrecord.nih.gov. In that trial, AI was used as a second reader for mammograms, and it helped catch cancers that one human reader might miss while also reducing the workload (fewer images required double-checking) nihrecord.nih.gov. This is a template for AI-human collaboration that improves care. Similarly, the UK’s NICE (National Institute for Health and Care Excellence) recently evaluated AI tools for reading X-rays and gave the green light for the NHS to deploy several for fracture detection. Overlooked fractures are among the most common errors in emergency departments – up to 10% of breaks are initially missed theguardian.com. NICE found that using AI as an “add-on” to review X-rays improves fracture detection without increasing misdiagnoses theguardian.com. Mark Chapman, NICE’s HealthTech director, noted the AI could “spot fractures which humans might miss given the pressure and demands [on] these professionals” theguardian.com. Four AI platforms (with names like BoneView and Rayvolve) are now approved to help UK doctors avoid missing broken bones, at an estimated cost of just £1 per scan theguardian.com theguardian.com. In resource-strapped health systems, such efficiency gains are extremely attractive.

AI has also proven its worth in some high-stakes emergency diagnoses. Stroke, for example, requires rapid detection of brain clots. AI software that flags stroke on brain scans has cut treatment times in some hospitals by alerting the stroke team as soon as the patient is scanned, shaving precious minutes when “time is brain.” In one UK trial, an AI was twice as accurate as junior doctors at interpreting stroke CT scans and could even estimate when the stroke occurred by the brain changes – critical for deciding treatments weforum.org weforum.org. In colon cancer prevention, AIs used during colonoscopy can highlight polyps (tiny precancerous growths) that physicians might otherwise overlook. These polyp-detecting AIs, now approved in the US, have been shown to modestly increase polyp detection rates, potentially heading off cancers. And in fields like neurology, AI is aiding the detection of subtle findings – such as epilepsy-causing brain lesions – that neuroradiologists sometimes miss. A recent UK study reported an AI tool found 64% of hidden epilepsy lesions that had been missed on MRIs, though it also noted that combining AI findings with expert human review was the best approach weforum.org weforum.org. As the lead researcher explained, finding these tiny abnormalities can be “like finding one character on five pages of solid black text” – AI can find many that humans miss, “but a third are still really difficult to find.” weforum.org weforum.org. This reinforces that AI + human together often outperform either alone.

Yet, there have been failures and limitations that offer a reality check. The most infamous might be IBM’s Watson for Oncology – an early attempt (circa 2015–2017) to apply AI to cancer treatment recommendations. Touted as a “Dr. Watson” supercomputer, it fell far short. Internal audits revealed Watson often gave “unsafe and incorrect treatment recommendations” for cancer patients statnews.com. The problem: it wasn’t actually analyzing real patient data but rather using a small set of hypothetical cases and expert rules, which proved inadequate statnews.com. By 2018, major hospitals like MD Anderson quietly shelved Watson due to poor results. The episode was a $4 billion lesson in overhyping AI without evidence henricodolfing.com. More recently, some direct-to-consumer apps have raised concern. For example, smartphone dermatology apps claim to identify skin cancer from a photo, but independent tests found some missed serious melanomas or wrongly labeled benign lesions as dangerous dermatologytimes.com. Regulators have warned that without robust clinical validation, such apps could do more harm than good dermatologytimes.com dermatologytimes.com. Even highly accurate AI can run into issues if used in the wrong context. An algorithm trained to interpret scans at one hospital may perform poorly at another if the equipment or patient population differs – a challenge of generalizability. Bias is another worry: if an AI learns from data that mostly comes from one demographic group, it might not work well for others. A well-publicized example was an AI used by some U.S. hospitals to prioritize access to care; it was found to systematically underestimate the illness of Black patients relative to white patients, because it used healthcare spending as a proxy for need (and historically less was spent on Black patients) learn.hms.harvard.edu nature.com. The result was a racial bias in care decisions, prompting an overhaul of that system.

False positives and “hallucinations” also pose limits. AI might flag disease where there is none, leading to unnecessary follow-ups. Or a chatbot might fabricate a medical reference or confidently give wrong advice – the phenomenon of AI hallucination. Doctors using generative AI must be on guard for answers that sound authoritative but are actually incorrect. “These models can generate incorrect or misleading results…called AI hallucinations or confabulations,” Dr. Topol cautioned in a lecture nihrecord.nih.gov. He also noted they can mirror human biases present in training data nihrecord.nih.gov. The medical community is therefore moving carefully: encouraging innovation, but also demanding rigorous validation. No patient wants to be misdiagnosed by a black-box algorithm, and no clinician wants to base a treatment on flawed AI output.

The sweet spot seems to be using AI as an assistive tool under human oversight. In successful deployments, AI takes over certain tedious or highly technical tasks – reading thousands of scans, monitoring vitals 24/7, sifting old records – and alerts the human clinicians to what needs their attention. The doctor, with their broader context and experience, makes the final call. For instance, in the heart ECG trial, the AI didn’t treat anyone itself; it alerted doctors, who then decided how to investigate and intervene heart.org heart.org. Those doctors were able to act faster and on patients they might have otherwise deemed low-risk, thanks to the AI’s heads-up. Many experts stress that this human-AI teamwork is the model to strive for. “Combining AI’s findings with human oversight and expertise has the potential to speed up both diagnosis and cure,” as the epilepsy researchers wrote weforum.org. In practice, this could mean AI handles the rote work and preliminary analysis, and physicians focus on verification, patient communication, and the complex decision-making that algorithms aren’t ready to handle alone.

Ethics, Bias and the Legal Maze

The spread of AI in diagnostics brings a host of ethical and regulatory questions. Medical decisions can be life-and-death – so who is responsible if an AI makes a mistake? Can a patient sue an algorithm manufacturer for a missed diagnosis, or is the liability on the doctor who relied on the tool? Regulators around the world are grappling with these issues. In the United States, the FDA regulates many diagnostic AIs as medical devices. By late 2024, the FDA had authorized over 1,000 AI-enabled medical devices for marketing statnews.com statnews.com – a staggering number that reflects how ubiquitous AI has quietly become inside medical software. The vast majority are for radiology imaging analysis news.mit.edu. Each of these had to demonstrate safety and effectiveness, usually by showing it performs as well as an existing standard of care. But newer AI models – especially adaptive algorithms that learn and change, or broad-use chatbots – don’t fit the old frameworks well. As of October 2024, no generative AI system (like ChatGPT) had been fully approved as a medical device hai.stanford.edu medicalfuturist.com. The FDA has been studying how to regulate “learning” AI and has issued draft guidelines, but it’s a moving target. Europe, through its upcoming AI Act, is also poised to impose requirements on high-risk medical AI, including transparency about how they work.

Explainability is a key principle regulators and ethicists discuss. Doctors are trained to “show their work” in reasoning through a diagnosis – shouldn’t AI be held to the same standard? If an algorithm flags a tumor on a scan, ideally it should highlight the area and perhaps indicate why it’s suspicious (its shape, texture, etc.). For simple image classifiers, this is feasible – e.g. heatmaps that show which pixels influenced the AI’s decision. But for complex neural networks and chatbots, their reasoning is often a black box even to their creators. This opaqueness can erode trust. It’s notable that Microsoft’s new diagnostic AI was designed with a feature to show its chain-of-thought and allow oversight. “MAI-DxO doesn’t just spit out an answer. It shows its work… available for real-time oversight by the human clinician,” explained Mustafa Suleyman time.com. This kind of transparency – if done well – could make it easier for doctors to trust AI recommendations and also catch errors (just as a physician double-checks a trainee’s reasoning). Explainable AI is an active area of research: from simplified surrogate models that approximate the AI’s logic, to forcing the AI to provide natural language justifications. In high-stakes use, some level of explanation or at least auditability will likely be demanded by regulators.
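
For a simple image classifier, one common way to produce such a heatmap is occlusion sensitivity: blank out one patch of the image at a time and measure how much the model's confidence drops. The sketch below is a bare-bones illustration of that idea, with a dummy predict function standing in for a real trained model; it is not any product's explainability feature.

```python
# Minimal occlusion-sensitivity heatmap: how much does masking each patch
# change the model's output? A generic illustration, not a product feature.
import numpy as np

def predict(image: np.ndarray) -> float:
    """Stand-in for a trained classifier returning P(finding present)."""
    return float(image.mean())  # dummy model so the sketch runs end to end

def occlusion_heatmap(image: np.ndarray, patch: int = 16) -> np.ndarray:
    baseline = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0   # blank out one patch
            # A large drop in confidence means this region mattered to the model.
            heat[i // patch, j // patch] = baseline - predict(masked)
    return heat

if __name__ == "__main__":
    xray = np.random.rand(128, 128)          # stand-in grayscale image
    print(occlusion_heatmap(xray).round(3))
```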

Bias and fairness in AI diagnostics remain a top ethical concern. As mentioned, AI can inadvertently perpetuate healthcare disparities if not carefully designed. If an AI has lower accuracy on women or on minority populations because of skewed training data, that’s unacceptable in medicine. Researchers are now testing AI algorithms on more diverse datasets and using techniques to mitigate bias – for example, balancing training data or explicitly encoding fairness constraints. One positive development: an AI model doesn’t inherently know a patient’s race or gender unless that information is in the data, so if properly tuned, it could actually act as an equalizer by focusing only on the relevant features of a disease. But in practice, things are tricky. A bizarre discovery in 2021 was that certain medical imaging AIs could accurately predict a patient’s race from an X-ray – something even radiologists cannot do by eye – and nobody is quite sure how the AI does it. This raised alarms that imaging AIs might pick up on race in subtle tissue patterns and then use race (perhaps as a proxy for other factors) in its predictions, potentially importing societal biases (like differing disease prevalence or treatment access) into its outputs. Ongoing research aims to ensure AI tools don’t disfavor any group. In the UK, an early study on an ambulance-triage AI specifically noted it made predictions “without bias” in who needs hospital transfer weforum.org weforum.org, a reassuring data point that was highlighted in reporting.
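
One practical way developers look for this kind of problem is a subgroup audit: compute the model's sensitivity and specificity separately for each demographic group and flag large gaps. Here is a minimal sketch of such an audit; the group labels and records are purely illustrative.

```python
# Hypothetical subgroup audit: does the model perform equally well
# across demographic groups? Data and group labels are illustrative.
from collections import defaultdict

def subgroup_metrics(records):
    """records: iterable of (group, true_label, predicted_label), labels in {0, 1}."""
    counts = defaultdict(lambda: {"tp": 0, "fn": 0, "tn": 0, "fp": 0})
    for group, y, yhat in records:
        c = counts[group]
        if y == 1:
            c["tp" if yhat == 1 else "fn"] += 1
        else:
            c["tn" if yhat == 0 else "fp"] += 1
    report = {}
    for group, c in counts.items():
        sens = c["tp"] / max(c["tp"] + c["fn"], 1)   # did we catch the disease?
        spec = c["tn"] / max(c["tn"] + c["fp"], 1)   # did we avoid false alarms?
        report[group] = (round(sens, 2), round(spec, 2))
    return report

example = [("group_a", 1, 1), ("group_a", 0, 0), ("group_b", 1, 0), ("group_b", 0, 0)]
print(subgroup_metrics(example))   # large gaps between groups are a red flag
```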

Privacy is another concern: AI models are data-hungry. Patient records and scans used to train AIs must be handled with care to protect confidentiality. There’s also the question of consent – should patients be informed or asked permission if an AI will assist in their care? Many argue transparency with patients is important to maintain trust. If an AI is used, say, to read a biopsy, the patient might want to know and even know the AI’s track record. On legal liability, so far no clear precedent has been set. Likely, if a physician relies on an AI and a misdiagnosis occurs, the physician (and hospital) would still bear responsibility under current law – much as if a lab test was faulty, the doctor is still expected to use judgment. This may evolve if AI becomes more autonomous. One can imagine future regulations requiring a human “final check” on any AI-derived diagnosis precisely to maintain accountability.

Lastly, approval and oversight processes are struggling to keep pace. As of mid-2025, the FDA’s list of AI devices had not been updated for months due to internal transitions statnews.com statnews.com, even though authorizations continued. Some experts worry about a regulatory gap: dozens of AI tools on the market, but no central public tracking of their performance issues or updates (some AI models get software updates that might alter behavior). The U.S. FDA has floated the idea of requiring algorithm manufacturers to monitor real-world performance and report if, say, accuracy drifts over time – a concept akin to post-market surveillance. The World Health Organization has urged a cautious approach, saying we need robust evidence and oversight frameworks before unleashing AI broadly in healthcare. Meanwhile, an Alliance for AI in Healthcare is being convened to create guardrails and best practices for responsible AI, bringing together experts, industry, and policymakers weforum.org weforum.org. Everyone recognizes the potential benefits are huge, but so are the stakes if something goes wrong. As Dr. Katherine Halliday, President of the UK Royal College of Radiologists, summed up: “Successfully harnessing AI will be crucial to tackle pressures on the health service… Our research shows that people trust doctors and want them to oversee the use of AI in healthcare.” rcr.ac.uk rcr.ac.uk In short, keep the human in the loop, even as the machines get smarter.

Doctors and AI: Partners in Training and Practice

What does the rise of AI diagnostics mean for medical professionals? For many doctors, especially those in training, it’s both exhilarating and anxiety-inducing. On one hand, AI promises to relieve burdens – automating drudge work like writing chart notes or sifting through hundreds of images, thereby cutting burnout and freeing up time for patient care. (In fact, a survey by the American Medical Association found reducing administrative burden is one of physicians’ top hopes for AI ama-assn.org.) On the other hand, some fear becoming overly dependent on AI or even being displaced in certain roles. The consensus among experts, however, is that doctors’ roles will evolve, not evaporate. AI may change how doctors work, but it won’t obviate the need for human judgment, empathy, and the “art” of medicine.

Medical schools and training programs are beginning to adapt to this new reality. Some are introducing coursework on AI basics, so that future clinicians understand what algorithms can and cannot do. Trainees may soon practice reading scans with an AI assistant by their side, learning to cross-check and interpret its outputs. Rather than memorizing every obscure syndrome (which an AI can do better), the emphasis could shift to critical thinking, communication, and ethics in an AI-enabled environment. In essence, doctors will need to become proficient at “AI literacy” – knowing when to trust the tool, when to be skeptical, and how to integrate AI findings into holistic patient care. As one popular adage (attributed to professor Avi Goldfarb) puts it: “AI is a prediction machine – it can tell you what might be going on, but it can’t tell you what to do about it.” The latter part – deciding on treatment, understanding patient preferences, managing trade-offs – remains squarely in the human domain for now.

In day-to-day practice, doctors are already learning to work alongside AI in small ways. Take radiologists: in some hospitals, their workflow includes an AI that pre-screens today’s scans and triages them. A radiologist might come into work to find that the AI has flagged five chest CTs as high priority (possible pulmonary embolism) – those go to the top of the pile. They read those first, confirming the AI’s findings or correcting them, then move on. Radiologists must be cautious not to develop “automation bias” – blindly trusting the AI. Training is focusing on maintaining vigilance and always performing one’s own review. Interestingly, studies have found that the best outcomes occur when AI and physician disagree – that’s when the second set of eyes is most valuable in catching an error hms.harvard.edu hms.harvard.edu. So rather than rubber-stamping the AI’s opinion, doctors are taught to especially double-check cases where their gut and the algorithm differ.
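
That habit can even be built into the workflow: when the radiologist's preliminary read and the AI's output disagree, the case is routed for an extra review rather than quietly resolved in either direction. A minimal, hypothetical sketch of such a reconciliation step:

```python
# Hypothetical sketch: route AI-radiologist disagreements to a second review
# instead of letting either opinion silently win. Field names are illustrative.

def reconcile(case_id: str, radiologist_read: str, ai_read: str, review_queue: list) -> str:
    if radiologist_read == ai_read:
        return f"{case_id}: concordant ({ai_read}), sign off as usual"
    # Discordant reads are where a second set of eyes adds the most value.
    review_queue.append((case_id, radiologist_read, ai_read))
    return f"{case_id}: DISCORDANT (doctor: {radiologist_read}, AI: {ai_read}) -> second review"

queue: list = []
print(reconcile("CT-1042", "no acute findings", "possible pulmonary embolism", queue))
print(reconcile("CT-1043", "rib fracture", "rib fracture", queue))
```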

In fields like pathology, some pathologists now use digital slides with AI overlays that point out regions likely containing cancer. The pathologist reviews those regions more closely (saving time scanning the whole slide manually) and then makes the diagnosis. This means pathologists may handle more cases in a day with AI’s help, alleviating backlogs. But they also must guard against the risk of missing something the AI missed – hence they still do an overall scan of the slide, just in a more targeted way. Clinical decision support AIs – like diagnostic chatbots or symptom-checkers – are also finding a place. It’s becoming more common for a doctor to input a complex case’s details into an app to see what differential diagnoses the AI suggests, as a “safety net” for clinical reasoning. It’s akin to consulting a digital colleague. The doctor might use that to ensure they haven’t overlooked a rare possibility. Ultimately, the physician synthesizes the AI’s input with the patient’s unique context.

Medical professional organizations are pushing to ensure AI implementation enhances rather than diminishes the doctor-patient relationship. They emphasize that compassion, intuition, and ethical judgment are qualities of human clinicians that AI lacks. As Dr. Topol noted, because doctors today are so overburdened with data entry and bureaucracy, they often have little time for patients: “We want to have presence, trust and a bond… during a visit,” he said, lamenting that current practice doesn’t always allow it nihrecord.nih.gov nihrecord.nih.gov. By delegating scut work to AI (like taking notes or processing routine diagnostics), doctors could reclaim that face-to-face time. In fact, one of the greatest opportunities of AI is to “restore the precious time” between doctors and patients – a point Topol has written about at length. Ironically, the best use of ultra-high-tech AI might be to make medicine more humane: let the machine crunch numbers and recognize patterns, so the doctor can focus on listening, comforting, and thinking creatively about a person’s care. This vision only works, however, if clinicians trust the tools and patients trust that their doctor is still in charge.

Thus, medical societies are developing guidelines and training around AI. The AMA, for example, published a comprehensive report on “augmented intelligence” in health care, stressing that solutions must be physician-led and patient-centered ama-assn.org ama-assn.org. They call for doctors to be involved in AI design, and for proper training so that practitioners understand an algorithm’s limits. New specialties may even emerge – some hospitals now have “clinical AI officers” or committees to evaluate and monitor AI tools. Just as evidence-based medicine became a part of training decades ago, AI literacy and data science fluency might become core competencies for doctors in the coming decade.

Do Patients Trust Dr. AI?

No matter how advanced the technology, its impact depends on whether the public accepts it. So, do patients trust an AI to diagnose them? Surveys suggest mixed feelings that are evolving as people become more familiar with AI in daily life. A UK poll in 2025 found that less than half (46%) of Britons even realized AI was being used in healthcare rcr.ac.uk. Yet when told about specific applications, most supported them. In that survey, a striking 4 in 5 people said they approve of AI helping in radiology – for example, analyzing X-rays – even though only 2 in 5 initially felt “comfortable” with AI in healthcare generally rcr.ac.uk. The difference came down to understanding: the more people knew about AI, the more they trusted it rcr.ac.uk. Among respondents who were very familiar with AI, 76% were comfortable with it being used by doctors rcr.ac.uk.

Crucially, the public does not want AI to replace human doctors. In the same UK survey, overwhelming majorities said AI should be used to assist doctors and save time, not take over the consultation rcr.ac.uk. Only 13% thought radiologists should avoid using any AI tools at all rcr.ac.uk – implying the remaining 87% were okay with doctors using AI in some capacity. People’s top concerns were that an AI might make a wrong diagnosis, or that healthcare could become impersonal rcr.ac.uk. They expressed strong trust in doctors to oversee AI use and to make the final judgments rcr.ac.uk. In fact, the public appears far more comfortable when the AI is presented as being under a clinician’s supervision. Transparency helps too: people preferred AI developments to be led or vetted by institutions like the NHS or their personal doctor, rather than abstract tech companies rcr.ac.uk. Data privacy fears also emerge – patients worry about who has access to their health data for AI training and want assurances that it’s handled securely rcr.ac.uk.

In the U.S., surveys by organizations like Pew have similarly found cautious optimism. A majority of Americans say they’re aware AI is increasingly used in care, and many believe it can improve health outcomes. But a significant portion are uncomfortable with AI making a final decision on a treatment. They prefer AI be confined to a second opinion or a screening tool. There’s also a generational divide: younger people tend to trust technology more and are more willing to use AI-driven health apps or chatbots, whereas older patients often want the reassurance of a human professional. Over time, as AI proves its reliability and as success stories circulate, trust may grow. It may also be specialty-dependent. Patients might be fine with an AI reading their lab test or flagging an abnormal scan (which feels akin to an automated lab machine), but they might be less comfortable taking a diagnosis from a chatbot without speaking to a doctor.

Cultural differences matter too. In some countries with limited access to doctors, patients might welcome AI tools if it means getting any diagnosis at all. For example, in rural India or parts of Africa, seeing a specialist can involve long travel and wait times. If a validated AI app on a tablet at a local clinic can screen for oral cancer or TB, people are likely to embrace it, especially if a nurse or community health worker is present to interpret results. Indeed, global health projects have found success with AI-powered diagnostics in low-resource settings. During the COVID-19 pandemic, some countries used AI to read lung scans when radiologists were overwhelmed. In sub-Saharan Africa, an AI that analyzes chest X-rays for tuberculosis (such as Qure.ai’s qXR) has been deployed in screening programs to catch TB earlier radiologybusiness.com radiologybusiness.com. These communities have largely accepted the technology because the alternative is often no screening at all. Qure.ai’s chest X-ray tool, notably, has been deployed in over 100 countries and is described as “as accurate as a radiologist… providing crucial insights in rural or developing areas where specialists may not be available.” radiologybusiness.com radiologybusiness.com That indicates a high degree of trust earned through demonstrated utility.

Public communication will be key to building broader trust. Healthcare providers are encouraged to explain to patients when AI is being used in their care, in plain language. For instance: “This X-ray was also analyzed by a computer program that checks for fractures. It didn’t find anything, and I agree with that – it looks normal.” Such transparency can normalize AI as just another tool, like getting a lab result. If an AI suggests a diagnosis, doctors might say: “One of the diagnostic support tools I use suggested Disease X as a possibility. I think it’s worth checking for that.” Patients tend to respond well when they see AI as an adjunct to their physician’s expertise, not a replacement for it.

Interestingly, when asked hypothetically who they’d trust more for a diagnosis – AI or a human doctor – most people still side with the human doctor. But when presented with scenarios of AI outperforming doctors in accuracy, many say they would like both to be involved. People want the thoroughness of AI and the wisdom of a human. A 2023 international survey found a majority would take an AI-based second opinion if their physician recommended it, but far fewer would accept an AI-only diagnosis without a doctor’s input. This trust landscape could shift over the next decade if AI systems build a track record of safety and if younger, tech-savvy generations become the main consumers of healthcare. For now, public trust remains cautious: supportive of AI in the background, but insistent on human oversight and the human touch in care.

Global Diagnosis: AI’s Impact from Boston to Bangalore

AI’s rise in diagnostics is a global phenomenon, but its impact differs vastly between developed and developing regions. In wealthier healthcare systems (U.S., Europe, Japan, etc.), AI is often viewed as a way to enhance efficiency and accuracy in an already advanced medical service. It can help manage the growing volumes of data and scans, reduce wait times for results, and support overworked staff. For instance, the UK’s NHS is investing in AI to tackle its backlog of imaging studies and specialist referrals. The NHS AI Lab has funded dozens of projects – from algorithms that triage chest X-rays in hours instead of days, to tools that help pathologists quantify cancer cells faster. In one high-profile UK initiative, as mentioned, NICE endorsed several AIs to catch broken bones and avoid missed injuries theguardian.com theguardian.com. The NHS is also piloting an AI-powered system to predict which ER patients are likely to deteriorate, so it can prioritize care accordingly. These efforts are driven in part by staff shortages: Britain faces a 30% shortfall in radiologists rcr.ac.uk, and similar gaps exist in other specialties across many countries. AI is seen as a way to extend the reach of the existing workforce – a force multiplier rather than a magic bullet.

In contrast, in many developing countries, AI has the potential to be more revolutionary: it could bring diagnostic capabilities to places that never had them. Take India, home to over a billion people but with far fewer doctors per capita than the West. AI tools are being rolled out to screen for diseases in rural areas where specialists seldom tread. One notable example is an Indian startup’s AI that interprets chest X-rays to find tuberculosis – a major killer that often goes undiagnosed in villages. This AI, deployed on mobile X-ray vans and clinics, can identify TB-related lung damage within minutes of an X-ray being taken. It allows immediate referral of suspected cases for confirmatory tests, rather than weeks of delay. The World Health Organization recognized such AI-aided TB screening in its guidelines as a viable alternative when radiologists are not available. Similarly, in parts of Africa, a simple smartphone-based AI is being used to screen for cervical cancer via analysis of cervical images, training local nurses to perform early detection that used to require a gynecologist’s eye.

Countries like Bangladesh, Indonesia, and Brazil are also exploring AI for rural healthcare extension. An interesting case is Mediwhale, a South Korean company that built an AI to predict cardiovascular risk from retinal scans. It’s been deployed in hospitals in Dubai, Italy, and Malaysia to detect heart and kidney disease risk just from an eye exam, replacing more invasive tests crescendo.ai crescendo.ai. This kind of leapfrogging – using AI to skip directly to advanced diagnostics without needing expensive labs – could greatly benefit resource-limited healthcare systems. Of course, it requires local validation; an algorithm trained on Korean patients might need retraining on Malaysian patients to ensure accuracy. International collaborations are growing: for instance, Google’s AI for diabetic eye disease was piloted in India and Thailand in partnership with local clinics to fine-tune it to local conditions (like the higher prevalence of certain retinal changes).

One challenge in low-resource settings is infrastructure. AI often needs reliable electricity, internet, and hardware (decent cameras, computers). Some rural clinics lack these. So innovators are developing offline-capable AIs that can run on mobile devices or rugged laptops, and are integrating AI into battery-powered point-of-care devices. For example, a portable ultrasound machine connected to an AI app can help a non-specialist detect pregnancy complications or liver tumors in a remote community. Such projects are underway in parts of sub-Saharan Africa and South Asia.

Another factor is cost. While AI software can be expensive to develop, once created it can be distributed at low marginal cost (aside from hardware). This raises hope that AI diagnostics could become very cheap per patient. Bill Gates recently predicted that within 10 years, “great medical advice [will] become free and commonplace” thanks to AI beckershospitalreview.com beckershospitalreview.com. He envisions AI-driven health tutors accessible on any smartphone, guiding patients in poor regions and essentially democratizing expertise. We’re not there yet, but pilot programs like an AI health assistant in Rwanda or a maternal health chatbot in Nigeria are testing the waters. Importantly, global health experts caution that technology alone can’t solve systemic issues – AI must be paired with strengthening basic healthcare delivery. But it can certainly help fill gaps: an AI might diagnose malnutrition via a photo, but you still need a supply of food and medicine to treat the child.

In middle-income countries like China and India, there’s actually a fervent embrace of medical AI at government and industry levels. China has dozens of AI startups focusing on diagnostics, spurred by government funding as part of its AI strategic plan. Chinese hospitals have tested AI systems for reading CT scans for COVID-19 pneumonia, for example, and for early cancer detection. The scale of data there (huge populations, centralized health systems) could allow Chinese AIs to advance quickly. India, for its part, included AI in its national digital health strategy, and we see private hospitals using AI radiology systems imported or developed indigenously. One Indian hospital chain partnered with Qure.ai to use its head CT scan AI for quicker stroke and head injury triage – a big help where radiologists are few at night qure.ai businesswire.com.

However, one must be mindful of global inequities possibly being mirrored or even exacerbated by AI. If most AI development happens in wealthy nations focused on their own disease patterns (e.g., heart disease, breast cancer in older adults), there’s a risk that diseases of the poor (like malaria, or childhood diarrhea) get less AI innovation. International bodies like the WHO have called for “AI for health” to prioritize where it can save the most lives, not just where the market is. Encouragingly, some of the TIME100 “Most Influential Companies” this year in AI were those explicitly focusing on global health. For example, Qure.ai’s recognition by TIME was largely for its impact in low and middle-income countries and its work in tuberculosis and lung disease surveillance at scale radiologybusiness.com radiologybusiness.com. Qure’s CEO Prashant Warier said it’s driven by “a simple but urgent belief: that access to timely, high-quality diagnostics should not depend on where you live.” radiologybusiness.com. That sentiment captures the transformative promise of AI in a global context – to help level the playing field in healthcare.

The Next 5–10 Years: A Forecast

Looking ahead, what can we expect in the next decade of AI-driven diagnosis? If the past two years are any indication, acceleration is the keyword. Experts predict that in 5–10 years, AI will be an invisible yet ubiquitous part of healthcare, much like electricity or basic imaging is today. Here are some likely developments on the horizon:

  • Routine AI Checkups: Just as you get blood work and vitals at an annual physical, you might soon get an “AI risk screening.” This could involve algorithms analyzing your past medical history, current labs, perhaps a genome scan and even your wearable device data to predict health issues before they fully manifest. For instance, models that predict your 10-year risk of diseases (heart attack, diabetes, even Alzheimer’s) will become more refined, allowing truly preventive medicine. AstraZeneca recently showcased an AI that could detect signals of over 1,000 diseases years before symptoms by crunching huge biobank datasets weforum.org weforum.org. Such predictive power, if validated, means doctors could intervene much earlier. By 2030, we might see AI-driven “digital twins” of patients – virtual profiles that simulate a person’s health trajectory and response to treatments, guiding personalized prevention.
  • AI as a First-Line Diagnostician: We may reach a point where for many common illnesses, an AI system provides the initial diagnosis or triage, either directly to patients via telehealth or at the point of care. Imagine waking up ill and speaking to your phone’s health AI. It listens to your symptoms, maybe looks at a quick selfie (to check skin pallor or throat via camera), and suggests it’s likely a strep throat – directing you to a nearby clinic to confirm and get antibiotics. This level of direct diagnosis will depend on further improvements and, importantly, public trust and regulatory approval. But it’s plausible that low-risk conditions could be largely handled by AI-driven interfaces with remote physician oversight. Some healthcare providers are already experimenting with automated triage bots to advise patients if they can self-care, see a GP, or go to ER. In 10 years, those could be much more advanced and handle a wide range of complaints with high accuracy.
  • Integration into Medical Devices: The stethoscope was once cutting-edge – and the next-gen stethoscope might be an AI in your smartwatch. We anticipate more consumer gadgets with diagnostic AI: smartwatches that can flag irregular heart rhythms (already happening) or even detect sleep apnea, smartphone cameras that can screen for anemia or kidney disease via an eye scan (research is underway on those). By 2030, perhaps every new medical imaging machine (X-ray, MRI, ultrasound) will come with built-in AI analysis as a standard feature. It will be unthinkable to have a CT scanner that doesn’t automatically run an AI check for dozens of pathologies in each scan. This doesn’t replace the radiologist, but it’s like having a perpetual safety net and second read. In pathology labs, AI might examine every Pap smear or blood smear as a first pass, ensuring nothing gets missed in the human review. The workflow will simply assume AI is part of it.
  • FDA-Approved AI “Clinicians”: We will likely see the first regulatory approvals of autonomous AI diagnostic systems in more areas. The first was in eye care (IDx-DR). Coming up could be an autonomous AI for reading screening mammograms or lung CT scans without a radiologist, or an AI that can diagnose dermatological conditions from images for pharmacists or nurses to use. Regulators might create new categories for AI that can act without physician sign-off for certain bounded tasks, especially where specialists are scarce. This will be gradual, with lots of clinical trial evidence required. But for example, if a company can prove its skin lesion AI has a cancer detection sensitivity equal to dermatologists in a large trial, regulators might allow it to be used by general practitioners to directly refer patients to surgery, bypassing specialist consult. Such approvals would mark a watershed where AI is not just assisting but delivering care in specific niches.
  • Global Spread and Leapfrogging: In developing nations, as infrastructure improves, AI tools will become even more widespread. Some countries might even leapfrog traditional healthcare development stages – for instance, building AI-enabled telehealth networks before they have enough specialists locally. By 2030, it’s conceivable that a significant portion of diagnoses in places like rural India or Africa are made with AI help, supervised by mid-level providers. This could radically improve outcomes for diseases like TB, HIV (AI could help target testing and treatments), and maternal-fetal health (with AI ultrasound). On the flip side, global deployment means training AI on diverse populations, so we may see more collaborations to build international datasets ensuring AI accuracy across different ethnicities and environments. AI might also help monitor and predict outbreaks by analyzing data across regions (blending diagnostics with public health).
  • Closer Doctor-AI Collaboration: Rather than a threat to doctors, AI may become an integral part of medical education and daily practice. The next generation of physicians could routinely “consult” their AI as they would a colleague. Clinical rounds might involve asking the AI for its read on the patient’s test trends or what rare diseases could explain a puzzling case. This two-way collaboration might even extend to AI suggesting clinical trial options or new research findings relevant to the patient in real-time. AI could function like a super-informed medical librarian at the bedside. Over time, as comfort grows, doctors might offload more analytical tasks, focusing training more on human skills – communication, ethical decision-making, procedural skills, and so on, which AI cannot replace.
  • Ethical and Regulatory Maturity: In a decade, we should have much more clarity in the rules of the road. Expect refined FDA frameworks for continuous-learning AI algorithms, perhaps requiring periodic re-certification as models evolve. There might be industry standards for transparency – e.g., requiring AI systems to provide an explanation or confidence score with any diagnosis. Legal statutes might delineate liability in AI-related errors (for instance, maybe establishing that if a doctor followed all recommended guidelines for using an approved AI, the liability for a hidden AI flaw lies with the manufacturer). We’ll also likely see more real-world monitoring: like “FDA-postmarket dashboards” tracking AI performance and safety issues, analogous to how drug side effects are monitored. If an AI anywhere is found to systematically err on a type of case, an alert could be issued or the model updated. Essentially, AI in medicine will move from the Wild West to an established, quality-controlled part of healthcare.

Of course, this rosy forecast assumes we navigate current challenges successfully. It’s possible there will be setbacks: a high-profile AI misdiagnosis scandal could dampen enthusiasm temporarily. Or doctors might resist adoption if tools are forced on them without proper integration (already, some complain about AI that creates alert fatigue or doesn’t fit their workflow). Additionally, issues of data privacy or misuse could provoke public backlash (imagine if an insurer used AI to deny coverage based on subtle health predictions – a dystopian scenario that regulators would need to prevent).

But if the trajectory of progress and learning continues, by 2035 a visit to your clinic might feel quite different. You might speak first to a friendly AI that documents your history (freeing the nurse or doctor from typing). You’ll get the necessary scans or tests, which AI will immediately analyze and have ready for the doctor. When your human doctor comes in, they already have preliminary results and AI-backed suggestions. This doctor spends most of the appointment talking with you, explaining and advising – not buried in a computer screen. It’s a more efficient process, yet also more personal, because technology handled the grunt work. After the visit, an AI system might even generate a plain-language summary for you and check in on you via chatbot about your symptoms or medication adherence.

In essence, the hopeful vision is “medical AI on tap, human care on top.” AI will be everywhere in the diagnostic process, but ideally you won’t experience it as a cold machine overlord – rather, as something that empowered your caregivers to focus on you. Achieving this future will take careful design, validation, and training, as well as ongoing vigilance about ethical use. As Dr. Topol said, “This is a really exciting time in medicine… We’ve never had this opportunity before.” nihrecord.nih.gov nihrecord.nih.gov The opportunity is to harness superhuman pattern-recognition and speed, and channel it into better human health. The coming years will reveal how well we seize it.

Conclusion

AI’s transformation of medical diagnostics is no longer a theoretical promise – it’s unfolding now in hospital wards, labs, and even our phones. An algorithm might soon be the one to catch your cancer early on a scan or predict a health crisis before it happens. But rather than displacing doctors, these superhuman code-driven “colleagues” are most powerful as part of a team. The stories emerging – an AI that finds what 17 doctors missed, a program that cuts missed fractures by 40%, a digital assistant that saves a patient’s life via an EKG alert – all point to a future where AI extends the reach and ability of healthcare providers. We are on the cusp of a world where geography and resource limits need not dictate the quality of diagnosis you receive, thanks to AI that carries expert-level knowledge wherever a computer can go.

Yet, for all the computational brilliance, healthcare remains a profoundly human endeavor. Diagnosis is not just pattern recognition; it’s communication, empathy, and wisdom drawn from experience. The rise of AI in medicine doesn’t change that – it challenges us to elevate those human elements. Public trust will hinge on feeling that AI is augmenting the care they get, not turning them into a datapoint in a machine’s ledger. So far, the signs are encouraging: patients want their doctors to have the best tools (including AI), and doctors want AI that makes their patients’ lives better. The coming years will be about refining this partnership.

“AI just diagnosed you – now what?” Now, your doctor has more information than ever to guide your treatment, and hopefully more time to discuss it with you. Now, healthcare systems must ensure these tools are used responsibly, equitably, and effectively. Now, regulators have to keep up with innovation to protect patients without stifling progress. And for us as a society, now is the time to thoughtfully integrate these superhuman doctors made of code such that they uphold the values of medicine. If we get it right, the next generation might look back and marvel at how anyone got along without an AI second opinion. The ultimate measure will be healthier, longer lives and a healthcare system that is more accessible and humane. Achieving that is a challenge – but as we’ve seen, the fusion of human caring and artificial intelligence is already yielding extraordinary results, and it just might redefine the art of healing in the 21st century.

Sources: news-medical.net heart.org statnews.com fiercebiotech.com theguardian.com rcr.ac.uk time.com time.com hms.harvard.edu ama-assn.org radiologybusiness.com statnews.com