Proceedings Article | 4 April 2022
Abhi Lad, Adithya Narayan, Hari Shankar, Shefali Jain, Pooja Punjani Vyas, Divya Singh, Nivedita Hegde, Jagruthi Atada, Jens Thang, Saw Shier Nee, Arunkumar Govindarajan, Roopa PS, Muralidhar Pai, Akhila Vasudeva, Prathima Radhakrishnan, Sripad Krishna Devalla
KEYWORDS: Image segmentation, Fetus, Brain, Performance modeling, Data modeling, Acoustics, Neuroimaging, Image quality, Ultrasonography, Visualization
Quality assessment of prenatal ultrasonography is essential for the screening of fetal central nervous system (CNS) anomalies. The interpretation of fetal brain structures is highly subjective, expertise-driven, and requires years of training, limiting access to quality prenatal care for all pregnant mothers. With recent advancements in Artificial Intelligence (AI), computer-assisted diagnosis has shown promising results, providing expert-level diagnoses in a matter of seconds, and therefore has the potential to improve access to quality and standardized care for all. Specifically, with the advent of deep learning (DL), methods have been proposed to assist precise anatomy identification through semantic segmentation, which is essential for the reliable assessment of growth and neurodevelopment and for the detection of structural abnormalities. However, existing works identify only certain structures (e.g., cavum septum pellucidum [CSP], lateral ventricles [LV], cerebellum) from either of the axial views (transventricular [TV], transcerebellar [TC]), limiting the scope for a thorough anatomical assessment as per the practice guidelines necessary for the screening of CNS anomalies. Further, existing works do not analyze the generalizability of these DL algorithms across images from multiple ultrasound devices and centers, thus limiting their real-world clinical impact. In this study, we propose a DL-based segmentation framework for the automated segmentation of 10 key fetal brain structures across the two axial planes in 2D fetal brain ultrasonography (USG) images. We developed a custom U-Net variant that uses an InceptionV4 block as the feature extractor and leverages custom domain-specific data augmentation. Quantitatively, the mean Dice coefficients (across the 10 structures) on test sets 1/2/3/4 were 0.827, 0.802, 0.731, and 0.783, respectively. Irrespective of the USG device or center, the DL segmentations were qualitatively comparable to the corresponding manual segmentations. The proposed DL system offered promising and generalizable performance across multiple centers and devices, and a UMAP analysis provided evidence of device-induced variation in image quality, a known challenge to generalizability. Its clinical translation could assist a wide range of users across settings in delivering standardized, high-quality prenatal examinations.
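As a rough illustration of the reported evaluation metric, the sketch below shows how per-structure Dice coefficients could be computed for a multi-class fetal brain segmentation output. This is not the authors' implementation; the label convention (0 = background, 1..10 = the ten structures), the image size, and the background-exclusion choice are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): per-structure Dice
# coefficient for integer label maps of a 10-structure segmentation task.
import torch


def dice_per_class(pred: torch.Tensor, target: torch.Tensor,
                   num_classes: int = 11, eps: float = 1e-7) -> torch.Tensor:
    """Return a (num_classes,) tensor of Dice scores for integer label maps."""
    scores = []
    for c in range(num_classes):
        p = (pred == c).float()
        t = (target == c).float()
        intersection = (p * t).sum()
        denom = p.sum() + t.sum()
        scores.append((2.0 * intersection + eps) / (denom + eps))
    return torch.stack(scores)


if __name__ == "__main__":
    # Toy example: random 256x256 label maps standing in for a network's
    # argmax output and the corresponding manual annotation.
    pred = torch.randint(0, 11, (256, 256))
    target = torch.randint(0, 11, (256, 256))
    dice = dice_per_class(pred, target)
    print("per-class Dice:", [round(v, 3) for v in dice.tolist()])
    # Mean over the 10 anatomical structures (classes 1..10), excluding background.
    print("mean Dice (structures only):", dice[1:].mean().item())
```

Averaging such per-structure scores over each held-out test set would produce summary numbers analogous in form to the mean Dice values reported above, though the exact evaluation protocol is defined in the full paper.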