Learning precise segmentation of neurofibrillary tangles from rapid manual point annotations

Kavli Affiliate: Michael Keiser

| Authors: Sina Ghandian, Liane Albarghouthi, Kiana Nava, Shivam R. Rai Sharma, Lise Minaud, Laurel Beckett, Naomi Saito, Charles DeCarli, Robert A. Rissman, Andrew F. Teich, Lee-Way Jin, Brittany N. Dugger and Michael J. Keiser

| Summary:

Accumulation of abnormal tau protein into neurofibrillary tangles (NFTs) is a pathologic hallmark of Alzheimer disease. Accurate and efficient detection and quantification of NFTs in tissue samples aids in deeper phenotyping of Alzheimer disease and may reveal relationships with clinical, demographic, and genetic features. However, expert manual analysis can be time-consuming, subject to observer variability, and limited in handling the large amounts of data generated by modern imaging techniques. We present a scalable, open-access, deep-learning-based approach to quantify the NFT burden in digital whole slide images (WSIs) of post-mortem human brain tissue. We trained a UNet model on 45 annotated 2400 μm by 1200 μm regions of interest (ROIs) selected from 15 unique WSIs of temporal cortex from Alzheimer disease cases from three institutes (University of California (UC)-Davis, UC-San Diego, and Columbia University). We developed a method to generate detailed segmentation ground truth masks at the pixel level directly from simple point annotations. The model achieved a precision of 0.53, recall of 0.60, and F1 score of 0.53 on a held-out test set of 7 WSIs, providing researchers with an efficient and reliable tool for NFT burden quantification. We compared this to an object detection model on the same dataset, which achieved comparable but more coarse-grained performance. Both models correlated with expert semi-quantitative scores at the whole-slide level. Our approach provides an open deep learning pipeline for detailed and scalable analysis of NFT spatial distribution and morphology across large cohorts, which is not feasible through manual assessment.
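To make the two key steps in the abstract concrete, the sketch below shows (1) one simple way pixel-level ground truth masks could be rasterized from point annotations, by painting a fixed-radius disk around each annotated point, and (2) the pixel-wise precision/recall/F1 evaluation the abstract reports. This is an illustrative assumption, not the paper's actual mask-generation method, and the function names and the fixed-radius heuristic are hypothetical.

```python
import numpy as np

def points_to_mask(points, shape, radius):
    """Rasterize point annotations into a binary segmentation mask
    by marking a disk of `radius` pixels around each (row, col) point.
    A simplified stand-in for the paper's mask-generation method."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=bool)
    for (r, c) in points:
        mask |= (yy - r) ** 2 + (xx - c) ** 2 <= radius ** 2
    return mask

def precision_recall_f1(pred, truth):
    """Pixel-wise precision, recall, and F1 between two binary masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

In practice the disk radius would be chosen to approximate typical NFT extent at the slide's magnification, and a trained segmentation model's thresholded output would stand in for `pred`.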