
The adoption of digital pathology has enabled the curation of large repositories of gigapixel whole-slide images (WSIs), which are invaluable for examining cellular morphology and its changes during embryonic development or disease progression. Many existing methods use pretrained deep neural networks to extract image features from histology images, which are then used for downstream analysis. A key drawback of deep neural networks such as Vision Transformer (ViT), HIPT, and UNI is that these models require a large number of well-annotated images from pathologists for training, which limits their applicability.
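To make the pipeline above concrete, here is a minimal sketch of the standard workflow: a slide is tiled into fixed-size patches, and each patch is mapped to a feature vector for downstream analysis. The encoder here is a deliberately simple placeholder (per-channel mean/std statistics); in practice it would be a pretrained model such as ViT, HIPT, or UNI. Array shapes and function names are illustrative assumptions, not the methods from the publications cited below.

```python
import numpy as np

def tile_wsi(slide: np.ndarray, patch_size: int = 256, stride: int = 256) -> np.ndarray:
    """Split a slide array of shape (H, W, 3) into non-overlapping patches."""
    h, w, _ = slide.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(slide[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)  # (num_patches, patch_size, patch_size, 3)

def extract_features(patches: np.ndarray) -> np.ndarray:
    """Placeholder patch encoder: per-channel mean and std per patch.

    Stands in for a pretrained deep encoder (e.g., ViT/HIPT/UNI),
    which would instead return one learned embedding per patch.
    """
    means = patches.mean(axis=(1, 2))  # (num_patches, 3)
    stds = patches.std(axis=(1, 2))    # (num_patches, 3)
    return np.concatenate([means, stds], axis=1)

# Toy "slide" standing in for a gigapixel WSI region.
slide = np.random.rand(512, 512, 3)
patches = tile_wsi(slide, patch_size=256)   # 2 x 2 grid -> 4 patches
features = extract_features(patches)        # shape (4, 6)
```

The per-patch feature matrix is what downstream steps (clustering, spatial analysis, classification) consume; swapping the placeholder encoder for a pretrained network changes only the `extract_features` step.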
One focus of my lab is to develop label-free, AI-driven methods for medical imaging analysis. These tools can be readily applied to studies with limited training samples, eliminating the need for cumbersome labeling steps. In addition, unlike existing methods that rely on 'black-box' modeling, our approach offers greater interpretability, enhancing the transparency and reliability of the results.
Related publications: TESLA, MorphLink.
In collaboration with: Linghua Wang, Nan Ma