The light in a hospital pathology lab is consistently cool and institutional, the kind of fluorescent brightness that flatters no one. Trays hold glass slides. Pathologists lean over microscopes with the particular stillness of people doing work that demands complete attention. For more than 150 years, that image of the skilled specialist, the glass slide, and the lens has been the defining picture of disease diagnosis. The hematoxylin and eosin stain, introduced in 1865 and still the gold standard today, colors tissue samples pink and purple while revealing cell nuclei. This setup is the starting point for some of the most consequential medical decisions in a patient's life: whether a mass is malignant, what grade it is, and how to treat it. A person, a microscope, a slide.
That arrangement is changing. Not fast enough to feel like disruption, but steadily enough that the field is now seriously weighing questions it hardly needed to ask five years ago.
The turning point came in 1994, when James Bacus created the BLISS system, the first commercial slide scanner capable of producing a high-resolution digital image from a physical glass slide. Early on, the technology was intriguing but cumbersome: enormous image files and little software to work with them. Two independent developments then changed the equation at roughly the same time: whole-slide imaging matured into a dependable clinical tool, and the explosion in deep learning capability made machine-learning systems genuinely competitive with human experts at pattern-recognition tasks.
**AI in Pathology / Digital Pathology: Key Facts & Reference**
| Topic | Details |
|---|---|
| Field | Digital Pathology & AI-Assisted Diagnostics |
| Core Technology | Whole Slide Imaging (WSI) — converts glass slides into gigapixel digital images |
| First Commercial Slide Scanner | BLISS system — invented by James Bacus, 1994 |
| Diagnostic Gold Standard (Still) | Hematoxylin and Eosin (H&E) staining — introduced by Franz Böhm, 1865 |
| Key AI Architectures | Convolutional Neural Networks (CNNs); Deep Learning (DL); Foundation Models |
| Foundation Models in Pathology | UNI, Virchow, PathChat — fine-tunable across multiple diagnostic tasks |
| FDA-Approved Tool | Paige Prostate Detect — assists prostate cancer detection from digital slides |
| Google Health AI | Deep learning system for detecting metastatic breast cancer from biopsy slides with higher sensitivity than pathologists |
| Ibex Medical Analytics | Galen™ platform — detects prostate and breast cancer patterns in scanned slides |
| Key Application Areas | Mitosis counting, lymph node micrometastasis screening, Gleason scoring, mutation prediction from H&E slides |
| Predictive Pathology | AI can predict genetic mutations and molecular profiles directly from standard H&E-stained slides |
| Accuracy vs. Human | AI matches or exceeds pathologist performance in specific classification tasks; Gleason scoring consistency improved |
| Key Challenge — Black Box | Deep learning models often lack interpretability; clinicians cannot always trace diagnostic reasoning |
| Key Challenge — Generalizability | Models can underperform on diverse or unseen populations |
| Future Pathologist Role | “Diagnostic integrator” — synthesizing AI output with clinical context, not a slide reader |
| Historical Parallel | Immunohistochemistry (1941) and genomic medicine were previous “revolutions” — AI is considered the third |
| Key Review Paper | El-Khoury & Zaatari, “The Rise of AI-Assisted Diagnosis: Will Pathologists Be Partners or Bystanders?” — Diagnostics (Sep 2025) |
| Hematology Application | AI analyzes blood cell morphology, classifies hematologic malignancies, predicts prognosis — International Journal of Clinical Practice (2026) |
| Key Reference — PMC | The Rise of AI-Assisted Diagnosis: Will Pathologists Be Partners or Bystanders? — PMC |
| Key Reference — Nature | How artificial intelligence is transforming pathology — Nature |

Convolutional neural networks trained on millions of digital slides never tire. They have no off days. They process a gigapixel image and detect mitotic figures, cells caught in the act of dividing and an indicator of tumor aggressiveness, with a consistency no single pathologist can sustain across thousands of samples. In a field that had been watching closely, Google Health's AI model for detecting metastatic breast cancer sent a clear signal when it showed greater sensitivity than a panel of human pathologists in controlled testing. The FDA authorized Paige.AI's prostate cancer detection tool, Paige Prostate Detect, making it the first AI-powered pathology product approved for clinical use in the US. Ibex Medical Analytics deployed its Galen platform to evaluate breast and prostate cancer. These are no longer research demonstrations. They are operating in clinical environments.
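A gigapixel whole-slide image is far too large for a single forward pass, so pipelines of this kind typically tile the slide and score it patch by patch. The sketch below shows that tiling pattern in miniature; the slide dimensions and the stand-in classifier are invented for illustration, and real systems read tiles with libraries such as OpenSlide and score them with a trained CNN.

```python
# Sketch of the tile-and-classify pattern used for whole-slide inference.
# Dimensions and the classifier are placeholders, not a real pipeline.

def tile_coordinates(width, height, stride=512):
    """Yield top-left (x, y) coordinates covering the slide in a grid."""
    for y in range(0, height, stride):
        for x in range(0, width, stride):
            yield x, y

def count_mitoses(slide_width, slide_height, classify_tile):
    """Run a per-tile classifier over the slide and sum detected mitoses."""
    return sum(classify_tile(x, y)
               for x, y in tile_coordinates(slide_width, slide_height))

# Toy stand-in for a CNN: pretend one tile contains a mitotic figure.
def fake_classifier(x, y):
    return 1 if (x, y) == (1024, 512) else 0

total = count_mitoses(2048, 1024, fake_classifier)
print(total)  # 1
```

The same loop structure carries over to real pipelines; only the tile reader and the classifier change.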
What makes the current era especially interesting is the transition from narrow AI, a model built for a single task such as counting mitoses or screening lymph nodes for micrometastases, to so-called foundation models. Much as the large language models that transformed natural language processing have been adapted across domains, systems such as UNI, Virchow, and PathChat are trained on massive repositories of whole-slide images and can be fine-tuned for a variety of diagnostic applications. A 2025 review in Nature detailed the debate within pathology over whether these foundation models will deliver on their promise or introduce new failure modes that are harder to catch precisely because the models appear so capable. That tension has not been resolved.
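The foundation-model workflow described above, one large pretrained encoder reused across tasks, usually amounts in practice to freezing the encoder and fitting a small task-specific head on its embeddings. A toy sketch of that division of labor; the "encoder" here is a fixed made-up function standing in for a model like UNI or Virchow, and the data are invented.

```python
# Minimal sketch of fine-tuning on top of a frozen foundation model:
# the encoder is never updated, only a small logistic-regression head.
import math

def frozen_encoder(x):
    """Stand-in for a pretrained slide encoder; its weights stay fixed."""
    return [x, x * x]

def train_linear_head(data, labels, lr=0.1, epochs=200):
    """Fit a logistic-regression head on frozen embeddings via SGD."""
    dim = len(frozen_encoder(data[0]))
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            z = frozen_encoder(x)
            p = 1 / (1 + math.exp(-(sum(wi * zi for wi, zi in zip(w, z)) + b)))
            g = p - y  # gradient of the logistic loss w.r.t. the logit
            w = [wi - lr * g * zi for wi, zi in zip(w, z)]
            b -= lr * g
    return w, b

def predict(x, w, b):
    z = frozen_encoder(x)
    p = 1 / (1 + math.exp(-(sum(wi * zi for wi, zi in zip(w, z)) + b)))
    return int(p > 0.5)

# Toy task: separate inputs below 1 from inputs above 1.
w, b = train_linear_head([0.0, 0.5, 1.5, 2.0], [0, 0, 1, 1])
print([predict(x, w, b) for x in [0.2, 1.8]])  # [0, 1]
```

Swapping in a different head (Gleason grading, metastasis screening) while reusing the same frozen encoder is the sense in which one foundation model serves many diagnostic tasks.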
The most common argument for AI in pathology is not that it replaces pathologists (most researchers and clinicians reject that framing) but that it addresses a structural problem the field faces regardless of how AI advances. In much of the world, pathology is understaffed. Diagnostic workloads are growing as cancer incidence rises and tissue analysis becomes more central to precision medicine. And by offering an objective, repeatable baseline, AI can measurably reduce interobserver variability, the well-documented problem of two skilled pathologists grading the same prostate tumor differently under the Gleason scoring system. In this sense the technology arrives not as a rival but as a response to a capacity problem that further training of human pathologists cannot fully solve.
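Interobserver variability of the kind the Gleason example illustrates is usually quantified with agreement statistics such as Cohen's kappa, which measures how often two graders agree beyond what chance alone would produce. A minimal sketch, using invented grade-group labels rather than real case data:

```python
# Cohen's kappa: agreement between two raters, corrected for chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned labels independently
    # at their own marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two hypothetical pathologists assigning Gleason grade groups (1-5)
# to the same ten biopsies (illustrative labels, not real data).
pathologist_1 = [1, 2, 2, 3, 3, 4, 4, 5, 2, 3]
pathologist_2 = [1, 2, 3, 3, 3, 4, 5, 5, 2, 2]
print(round(cohens_kappa(pathologist_1, pathologist_2), 2))  # 0.61
```

A kappa of about 0.6 is the "moderate to substantial" agreement range often reported for Gleason grading, which is exactly the gap an objective baseline aims to narrow.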
The most legitimate worry is the "black box" problem, and it is still unclear exactly where the technology's limits lie. Deep learning models reach their conclusions through processes that are difficult to audit or explain. A pathologist who grades a slide incorrectly can, at least in principle, be asked to show their reasoning: point to a cell pattern, describe what they saw, explain their interpretation. When a neural network misclassifies, it often cannot offer a comparable account. For a field where diagnostic error directly affects patient care, that opacity is a real constraint, not merely a philosophical one.
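One partial answer to that opacity is post-hoc probing: systematically occlude regions of the input and watch how the model's score changes, which at least localizes what the network is responding to. A toy sketch of the occlusion idea, with a made-up one-dimensional "image" and scoring function standing in for a real network:

```python
# Occlusion-based saliency: mask each region in turn and record how
# much the model's score drops. Large drops mark influential regions.

def occlusion_saliency(image, model, mask_value=0):
    """Score each pixel's importance by occluding it and re-scoring."""
    baseline = model(image)
    saliency = []
    for i in range(len(image)):
        occluded = image[:i] + [mask_value] + image[i + 1:]
        saliency.append(baseline - model(occluded))
    return saliency

# Toy "model": the diagnosis score depends only on pixel 2.
toy_model = lambda img: float(img[2] > 0)

image = [1, 1, 5, 1]
print(occlusion_saliency(image, toy_model))  # [0.0, 0.0, 1.0, 0.0]
```

Techniques like this highlight *where* a model is looking, not *why* it decides as it does, which is why the interpretability concern in the paragraph above remains open.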
As the field develops, there is a sense that pathology is undergoing something like what radiology went through when digital imaging transformed it: a period of friction between new capabilities and established practice, followed by gradual integration in which the new tools became indispensable without displacing the human judgment that gives diagnosis its clinical meaning. The pathologist of the future will probably fill much the same role, recast in some quarters as a "diagnostic integrator," with AI handling the repetitive, routine, and computationally demanding work while human expertise concentrates on the complex, the ambiguous, and the cases where clinical context is decisive. Depending on whom you ask in the field, that future is ten years away or already partially here.
