Purpose: The Breast Pathology Quantitative Biomarkers (BreastPathQ) Challenge was a Grand Challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ Challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment.

Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard.

Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms. The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining results comparable to the average interrater PK of 0.927 from the two pathologists who provided the reference TC scores.

Conclusions: The SPIE-AAPM-NCI BreastPathQ Challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ Challenge can be accessed on the Grand Challenge website.
1. Introduction

Neoadjuvant treatment (NAT) of breast cancer is the administration of therapeutic agents before surgery; it is a treatment option often used for patients with locally advanced breast disease1 and, more recently, is an acceptable option for operable breast cancer of certain molecular subtypes. The administration of NAT can reduce tumor size, allowing patients to become candidates for limited surgical resection or breast-conserving surgery rather than mastectomy.1 In addition to affecting parameters such as histologic architecture, nuclear features, and proliferation,2 response to NAT may reduce tumor cellularity (TC), defined as the percentage area of the overall tumor bed comprising tumor cells from invasive or in situ carcinoma.3 While tumor response to NAT may or may not manifest as a reduction in tumor size, overall TC can be markedly reduced,4 making TC an important factor in the assessment of NAT response. TC is also an important component evaluated as part of the residual cancer burden index5 that predicts disease recurrence and survival across all breast cancer subtypes.

In current practice, TC is manually estimated by pathologists on hematoxylin and eosin (H&E)-stained slides, a task that is time consuming and prone to human variability. Figure 1 shows examples of various levels of TC within different regions of interest (ROIs) on an H&E-stained slide. The majority of practicing pathologists have not been trained to estimate TC, as this measurement was only proposed by Symmans et al.6 in 2007, and it is currently not part of practice guidelines for reporting on breast cancer resection specimens. That being said, the use of TC scoring is expected to grow because the quantitative measurement of residual cancer burden has proven effective in NAT trials. There is great potential to leverage automated image analysis algorithms for this task to improve efficiency and reduce reader variability.
Digital analysis of pathology slides has a long history dating to the mid-1960s,7 with early work by Mendelsohn et al.8 analyzing cell morphology from digital scanning cytophotometer images.9 More recently, advances in whole slide imaging (WSI) technologies and the recent U.S. Food and Drug Administration (FDA) clearances of the first two WSI systems for primary diagnosis have accelerated efforts to incorporate digital pathology (DP) into clinical practice. An important potential benefit of WSI is the possibility of incorporating artificial intelligence/machine learning (AI/ML) methods into the clinical workflow.10 Such methods utilize multidimensional connected networks that can progressively develop associations between complex histologic image data and image annotations or patient outcomes, without the need for engineering the handcrafted features employed with more traditional machine learning approaches. The potential of AI/ML to improve pathology workflow has been discussed in recent literature.10–13 However, it is challenging to selectively choose the best methods for a given clinical problem because of the vast number of techniques and out-of-the-box models available to algorithm developers, differences between testing datasets, methods for defining a reference standard, and the metrics used for algorithm evaluation. Global image analysis challenges, such as Cancer Metastases in Lymph Nodes (CAMELYON)14 and Breast Cancer Histology (BACH),15 have been instrumental in enabling direct comparisons of a range of techniques in computerized pathology slide analysis. Public challenges in general, in which curated datasets are released to the public in an organized manner, are useful tools for understanding the state of AI/ML for a task because they allow algorithms to be compared using the same data, reference standard, and scoring methods. These challenges can also be useful for improving our understanding of how different choices of reference standard or performance metric impact AI/ML algorithm performance and interalgorithm rankings.

This paper describes a challenge directed toward understanding automated TC assessment. The International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA) organized the Breast Pathology Quantitative Biomarkers (BreastPathQ) Grand Challenge to facilitate the development of quantitative biomarkers for the determination of cancer cellularity in breast cancer patients treated with NAT from WSI scans of H&E-stained pathology slides. The Grand Challenge was open to research groups from around the world. The purpose of this paper is to describe the BreastPathQ Challenge design and evaluation methods and to report overall performance results from the Grand Challenge.

2. Materials and Methods

2.1. Data Acquisition

The dataset for this challenge was collected at the Sunnybrook Health Sciences Centre, Toronto, Canada, following approval from the institutional Ethics Board.16 The histopathologic characteristics of the 121 slides from the 64 patients participating in the original study are provided by Peikari et al.16 The challenge dataset was a subset of slides from this original study and consisted of 96 WSI scans acquired from tissue glass slides stained with H&E, extracted from 55 patients with residual invasive breast cancer on resection specimens following NAT.
Slides were scanned using an Aperio AT Turbo scanner (Leica Biosystems Inc., Buffalo Grove, Illinois). Training, validation, and test sets were defined as subsets of the 96 WSI scans: 63 scans (33 patients), 6 scans (4 patients), and 27 scans (18 patients) for the training, validation, and test datasets, respectively. Subsets were defined such that WSI scans from the same patients resided in the same set. As WSI scans are difficult to annotate due to the sheer volume of data contained within each, we asked a breast pathology fellow (path1) to hand-select patches from each WSI scan, with the intention of capturing representative examples of TC ratings spanning the range between 0% and 100%. This was done using the Sedeen Viewer17 (Pathcore, Toronto, Canada). The pathologist drew a small rectangle at the center of the desired patch, and a plugin was then used to automatically generate a rectangular ROI around this point. These regions were then passed to an open-source API, OpenSlide,18 to automatically extract image patches from the WSI scans, which were then saved as uncompressed TIFF image files. Resulting image files were renamed to reference the WSI scan from which each patch originated. All identifiers were anonymized to maintain patient confidentiality.

For each patch, a TC rating, ranging from 0% to 100%, was provided by the pathologist, based on the recommended protocol outlined by Symmans et al.6 Patches that did not contain any tumor cells were assigned a TC rating of 0%. The training and validation sets were annotated only by path1, whereas the test set was annotated by both path1 and a breast pathologist (path2). Both path1 and path2 had over 10 years of experience.16 Annotations were performed independently, and therefore, each pathologist was unaware of the rating assigned by the other. The distribution of the pathologists' manual TC ratings used as the reference standard in this challenge for the training, validation, and test sets is given in Fig. 2. The number of patches for which reference standard scores were provided was 2394, 185, and 1119 for the training, validation, and test sets, respectively. Full WSI datasets, in addition to patches, were made available upon request on a password-protected Amazon cloud-based platform, along with instructions for the usage of high-resolution DP WSI scans in an image analysis pipeline. Participants were able to request access to the platform via email at the time of the challenge.

2.2. Auxiliary Cell Nuclei Dataset

In addition to image patches extracted from WSI scans, participants were also provided with annotations of lymphocyte, malignant epithelial, and normal epithelial cell nuclei in 153 ROIs from the same dataset. Participants were permitted to use these annotations in the challenge in addition to the main dataset described above. These data were provided to help developers who wanted to segment cells before calculating a TC score.16 In the auxiliary dataset, cell nuclei were marked manually via a pen tool in Sedeen Viewer,19 and coordinates were stored in an .xml file for each ROI.

2.3. Challenge Setup

The BreastPathQ Challenge was organized with the intention of presenting findings and winners at the BreastPathQ session at SPIE Medical Imaging 2019 (see Sec. 3.2). Participants were allowed to register, and training data were released on October 15, 2018, for the BreastPathQ Challenge.
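To make the patch-extraction step in Sec. 2.1 concrete, the snippet below is a minimal sketch (not the organizers' Sedeen plugin or pipeline) of how a rectangular ROI defined on a WSI scan can be read with OpenSlide and saved as a TIFF patch; the file name, ROI origin, and patch size are illustrative assumptions only.

```python
import openslide  # open-source WSI reading library referenced in Sec. 2.1

# Hypothetical file name, ROI origin, and patch size chosen purely for illustration.
slide = openslide.OpenSlide("example_wsi.svs")
x, y = 10000, 24000          # top-left corner in level-0 (full-resolution) pixel coordinates
patch_size = (512, 512)      # assumed width and height of the extracted patch, in pixels

# read_region returns an RGBA PIL image at the requested pyramid level (0 = full resolution).
patch = slide.read_region((x, y), 0, patch_size).convert("RGB")
patch.save("example_patch.tif")  # Pillow writes TIFF uncompressed by default
slide.close()
```

Note that read_region expects the top-left corner in level-0 coordinates, so ROI coordinates exported from a slide viewer can usually be passed through directly.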
The validation data were released on November 28, 2018, and the test data on December 1, 2018, about one month before the challenge closed on December 28, 2018. Initially withholding the validation and test datasets gave participants time to design their algorithms before assessing their performance. Participants were tasked with assigning TC scores to individual patches during all three phases of the challenge: training, validation, and test. For training purposes, ground truth labels were provided for the training set upon its initial release. Subsequently, the ground truth labels for the validation set were released at the time the test patches were released on December 1, 2018. Ground truth for the test set was held out during the entirety of the challenge, being accessible only to the challenge organizers for evaluating the performance of official submissions.

The BreastPathQ Challenge was conducted on an instance of the MedICI Challenge platform.20 The MedICI Challenge platform supports user and data management, communications, performance evaluation, and leaderboards, among other functions. The platform was used in this challenge as a front end for challenge information and rules, algorithm performance evaluation, leaderboards, and ongoing communication among participants and organizers through a discussion forum. The challenge was set up to allow participants to submit patch-based TC scores during the training and validation phases of the challenge and receive prediction probability (PK) performance feedback scores via an automated Python script. The script first verified that the submitted score file was valid by checking that the file was formatted correctly and that all patches had a score. An invalid submitted score file was not counted toward the submission limit for the participants. The same evaluation script was used for the training, validation, and test phases. This enabled participants to validate the performance of their algorithms during development as well as familiarize themselves with the submission process prior to the test phase of the challenge. The submission process involved preparing one TC score per patch in a predefined CSV format described on the website. Participants were also required to provide a description of their submitted method in the form of a two-page algorithm summary as part of the test phase of the challenge. Participants who implemented deep neural networks were asked to provide a description of their network architecture, including batch size, optimizer, and out-of-the-box models. Each participant group was allowed up to three valid submissions for test set evaluation. Participants were permitted to use additional data in the algorithm development process for pretraining, augmentation, etc., including their own training data obtained outside the challenge.

2.4. Participants

Prior to the test submission deadline on December 28, 2018, there were a total of 317 registrants. A total of 74, 551, and 100 valid submissions were received during the training, validation, and test phases, respectively. A “valid” submission refers to patch-level predictions successfully submitted by a registered participant. A description of the algorithm(s) was also required as part of a valid test submission. A leaderboard was generated for each phase of the challenge except the test phase; it was updated after each successful submission and was made visible to all participants.
The test leaderboard results were hidden from the participants. Results of the challenge were announced at a special SPIE BreastPathQ session held jointly with the 2019 SPIE Medical Imaging Computer-Aided Diagnosis and Digital Pathology conferences in San Diego, California, February 16 to 21, 2019. During the session, the top two winners presented their algorithms and performance in oral presentations. Other participants were also invited to present their methods in a poster session during the conference. A list of the 39 teams who submitted valid test set entries is provided in Appendix A, with teams being allowed to submit up to three algorithms in the test phase. Members of the organizing committee, as well as students and staff from their respective organizations, were not permitted to participate in the challenge due to potential conflicts of interest.

2.5. Evaluation Metric

The primary evaluation metric used for determining algorithm rankings and the winners of the challenge was PK. Intraclass correlation analysis was also performed as a secondary analysis, unknown to the challenge participants, to compare with the PK rankings and results. As two pathologists provided reference standard TC scores for the test set, the PK results based on each individual pathologist were averaged to obtain a final average PK for each algorithm. The 95% confidence limits [upper and lower bounds (UB and LB, respectively)] of each summary performance score were calculated by bootstrapping (resampling with replacement) 1000 times on a per-patient basis and obtaining the 95% confidence interval using the percentile method.

2.5.1. Prediction probability score

PK21 is a concordance metric that measures the agreement in the ranking of paired cases by two readers or algorithms. It was used as the main evaluation metric for the challenge specifically because it was not clear whether the two pathologists' TC predictions would be well calibrated due to interpathologist variability. Concordance evaluates the ranking of cases but not their absolute values, such that calibration between readers or between a reader and an algorithm is not required. Patch ranking was deemed the most important comparison to assess since calibrating an algorithm could potentially be achieved as an additional step for well-performing algorithms. PK is defined as

$$\mathrm{PK} = \frac{C + T/2}{C + D + T},$$

where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, and $T$ is the number of ties in the submitted algorithm results. PK can be interpreted as the probability that the method ranks two randomly chosen cases in the same order as the reference standard. It is also a generalization of the trapezoidal area under the receiver operating characteristic curve (AUC) calculation. The PK was calculated by modifying SciPy's22 implementation of Kendall's tau-b. SciPy's implementation calculates the components ($C$, $D$, and $T$) needed for the PK estimation, such that our modification simply involved using the estimated components to calculate PK. The Python function for calculating PK was made available to all participants of the challenge.
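As a standalone illustration of the PK definition above and of the per-patient percentile bootstrap used for the confidence limits, the following is a minimal sketch, not the official evaluation script (which modified SciPy's Kendall's tau-b routine); it counts $C$, $D$, and $T$ by enumerating patch pairs directly and, following the standard PK definition,21 excludes pairs whose reference scores are tied.

```python
import numpy as np

def prediction_probability(reference, prediction):
    """PK = (C + T/2) / (C + D + T), where C, D, and T are the numbers of
    concordant pairs, discordant pairs, and pairs tied only in the prediction.
    Pairs tied in the reference are excluded from all three counts."""
    reference = np.asarray(reference, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    C = D = T = 0
    n = len(reference)
    for i in range(n):
        for j in range(i + 1, n):
            dr = reference[i] - reference[j]
            dp = prediction[i] - prediction[j]
            if dr == 0:          # tie in the reference: pair not counted
                continue
            if dp == 0:          # tie in the algorithm output only
                T += 1
            elif dr * dp > 0:    # pair ranked in the same order as the reference
                C += 1
            else:                # pair ranked in the opposite order
                D += 1
    return (C + 0.5 * T) / (C + D + T)

def bootstrap_ci(patient_ids, reference, prediction, n_boot=1000, seed=0):
    """Percentile-method 95% CI for PK, resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    patient_ids = np.asarray(patient_ids)
    reference = np.asarray(reference, dtype=float)
    prediction = np.asarray(prediction, dtype=float)
    patients = np.unique(patient_ids)
    stats = []
    for _ in range(n_boot):
        sampled = rng.choice(patients, size=len(patients), replace=True)
        idx = np.concatenate([np.flatnonzero(patient_ids == p) for p in sampled])
        stats.append(prediction_probability(reference[idx], prediction[idx]))
    return np.percentile(stats, [2.5, 97.5])

# Example with hypothetical patch scores from two patients.
ref = [0.0, 0.2, 0.5, 0.9]
pred = [0.1, 0.1, 0.6, 0.8]
print(prediction_probability(ref, pred))
print(bootstrap_ci(["p1", "p1", "p2", "p2"], ref, pred, n_boot=100))
```

For large patch sets, the same pair counts can be obtained far more efficiently from a tau-b-style implementation, which is essentially what the challenge's modified SciPy routine did.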
2.5.2. Intraclass correlation value

Concordance measures the similarity between the rankings of patches by two readers/algorithms, but it does not require calibration between the algorithm and the reference TC scores. After reviewing the various deep learning algorithm implementations, it was clear that mean squared error (MSE), a measure of agreement between an algorithm's TC outputs and the reference standard values, was commonly used to optimize algorithm performance. Calibration differences between the algorithm and the reference standard do impact MSE. Since MSE was such a common optimization metric, we added a secondary correlation analysis as part of the challenge analysis plan, namely the intraclass correlation coefficient (ICC), to better understand the impact of the performance metric on algorithm rankings. The ICC was calculated using a two-way effects model with absolute agreement [ICC(2,1) by the Shrout and Fleiss convention23], using the "irr" package24 in R.

2.6. Patch-Based Mean Squared Error Analysis

Another post hoc analysis performed after completion of the challenge was the calculation of the patch-based average MSE between the pathologists and all submitted algorithms to identify which patches had the largest and the smallest errors in predicting TC. The MSE between each pathologist and the algorithms for an individual patch was calculated as the average, across all algorithms, of the squared difference between the pathologist TC score and an individual algorithm's TC prediction. The final MSE value was then the average across the two pathologists. A higher MSE indicated that the algorithms performed relatively poorly in predicting the cellularity for a patch, whereas a lower MSE indicated better performance.

3. Results

3.1. Submitted Algorithms

The BreastPathQ Challenge participants represented a total of 39 unique teams from 12 countries. Almost all of the teams (38/39) used deep convolutional neural networks (CNNs) to build their automated pipelines, with most also using well-established architectural designs (Sec. 3.1.2). The participants also universally employed data augmentation techniques (Sec. 3.1.1) to enhance algorithm training and performance. The remainder of this section summarizes various aspects of the submitted algorithms in more detail, and a brief summary of all submitted methods is provided in Appendix A.

3.1.1. Preprocessing/data augmentation

All participants used some form of data augmentation to increase the size of the original dataset, with most of the participants employing random rotations, flips, and color jittering. Some participants also opted to use the HSV (hue-saturation-value) color space in addition to, or in combination with, the RGB (red-green-blue) color space.

3.1.2. Neural network architectures

The top 10 performing teams in the BreastPathQ Challenge used deep neural networks to generate TC scores, and they all used pretrained CNN architectures, including Inception,25 ResNet,26 and DenseNet.27 Other commonly used CNN architectures included Xception,28 VGG,29 and SENet.30 Other teams developed custom networks. Ensembles of deep learning-based networks were also a common approach for achieving improved algorithm performance. The two top performing teams incorporated squeeze-and-excitation (SE) blocks30 in their pretrained Inception and ResNet models. SE blocks integrate into existing network architectures by learning global properties along with traditional convolutional layers. The SE block itself captures global properties in a network by aggregating feature maps along their spatial dimensions followed by a "self-gating mechanism."30 Typically, CNN outputs were linearly mapped to scores between 0 and 1, and distance-based loss functions were adopted to perform backpropagation.
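As a generic illustration of this recipe (a pretrained backbone, a single output mapped into [0, 1], and a distance-based loss), the sketch below pairs a ResNet-50 backbone with a sigmoid output and an MSE loss. It is a minimal example under those assumptions, not a reproduction of any team's submission; as noted above, many teams instead used a linear mapping of the output, and the learning rate, batch, and input size here are arbitrary.

```python
import torch
import torch.nn as nn
from torchvision import models

class CellularityRegressor(nn.Module):
    """Hypothetical TC regressor: ImageNet-pretrained backbone + scalar output in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet50(pretrained=True)                  # pretrained feature extractor
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)     # single TC output
        self.squash = nn.Sigmoid()                                        # keep the output in [0, 1]

    def forward(self, x):
        return self.squash(self.backbone(x)).squeeze(1)

model = CellularityRegressor()
criterion = nn.MSELoss()                                                  # distance-based loss on TC scores
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB patches.
images = torch.randn(4, 3, 224, 224)
targets = torch.tensor([0.0, 0.3, 0.7, 1.0])                              # reference TC scores scaled to [0, 1]
loss = criterion(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At test time, the scalar output serves directly as the patch-level TC estimate; the ensembling strategies described in Sec. 3.1.4 typically average such outputs across folds or architectures.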
The most commonly used loss function was MSE; however, other common losses, such as least absolute deviation (L1), were also used. The majority of CNNs (except custom-made CNN architectures) used ImageNet31 pretrained weights. Public datasets were also used for pretraining, including the BACH challenge dataset, which includes H&E-stained breast histology microscopy and WSI scans representative of four types of breast cancer.15 One participant also used the large 2018 Data Science Bowl challenge dataset of cell nuclei from various types of microscopic imaging modalities.32 Aside from CNNs, two participants used unlabeled data in the hope of avoiding overfitting in the task. Team ThisShouldBeOptional pretrained a generative adversarial network (GAN)33 with data from the 2014 International Conference on Pattern Recognition (ICPR) contest34 and then trained on the BreastPathQ dataset, using the discriminator to predict TC scores. Team max0r similarly used the discriminator of an InceptionNet35 adversarial autoencoder to regularize the feature space prior to training for the prediction of TC scores.

3.1.3. Cell segmentation

The auxiliary dataset described in Sec. 2.2 was adopted by various participants to incorporate cell segmentation and classification as tumor versus normal in their pipelines. Because the cell nuclei locations were given as coordinates, some participants chose to sample patches centered at the provided coordinates, while others simulated segmentation maps by drawing circles around these points (e.g., Team rakhlin). A range of different architectures was used to perform TC score prediction from cell segmentation maps, including U-Net,36 fully convolutional networks (FCNs),37 and custom network designs.

3.1.4. Postprocessing

We found that all participants who used CNNs also employed some sort of ensemble method. Most opted to use k-fold cross-validation to split the training set and learn individual models per fold. Final TC scores were obtained mostly through either an averaging/maximum operation or by learning a separate regression layer that aggregated the penultimate layers of each CNN. Some participants also trained individual CNNs with different architectures in parallel and combined results using one of the above methods. Due to the nature of the task, and because scores were discretized through manual assessment, two participants performed a combination of classification and regression. Team SCI performed classification by artificially creating multiple classification categories, whereas Team SRIBD opted to learn a label distribution automatically via label distribution learning.38 Training was then performed on a combination of two (or more, via ensemble) sets of ground truth labels.

3.2. Prediction Probability Analysis

The best performing method on the independent test set achieved an average PK of 0.941 [0.917, 0.958], which was comparable to, but also slightly higher than, the average interrater PK of 0.927 [0.914, 0.940] for path1 and path2, who provided the reference standard TC scores for the dataset. The difference between the PK of the best-performing algorithm and that of the individual pathologists did not reach statistical significance. Figure 3(a) shows the average PK scores sorted by algorithm from highest to lowest rank, with the actual PK scores given in Table 1. Figure 4(a) shows the individual PK scores using either path1 or path2 as the reference standard for the top 30 performing algorithms in terms of average PK score.
PK was generally higher with path1 as the reference than with path2 for this set of high-performing algorithms.

Table 1. Best PK scores and the corresponding ICC values achieved by each BreastPathQ participant team, averaged between the two pathologists. Some ranks (e.g., rank 5) are not listed because a different algorithm from the same team achieved a higher rank.
Figure 5(a) focuses on the average PK for the top 30 performers and shows a relatively small range in performance, 0.917 to 0.941, across the 30 algorithms. The figure also indicates the first algorithm with PK performance that is statistically significantly different from that of the first- and the second-ranked algorithms. The first-ranked algorithm (PK = 0.941 [0.917, 0.958]) was statistically superior to the fifth-ranked algorithm (95% CI [0.910, 0.955]) and all subsequent lower-ranked algorithms. The second-ranked algorithm (95% CI [0.920, 0.957]) was statistically superior to the sixth-ranked algorithm (95% CI [0.906, 0.953]), such that a difference of about 0.006 in PK was statistically significant for the top performing algorithms.

3.3. Intraclass Correlation Analysis

ICC values were not an endpoint of the BreastPathQ Challenge in that these results were not used to select the challenge winners; however, we decided to compute and report ICC values after completion of the competition to determine the impact on algorithm ranking of using either a rank-based or a calibrated endpoint. The best-performing method achieved an average ICC of 0.938 [0.913, 0.956], which was higher than the average interrater ICC of 0.892 [0.866, 0.914] between path1 and path2. Figure 3(b) shows the average ICC scores sorted by algorithm from highest to lowest rank, with the best ICC score by participant given in Table 1. Figure 4(b) shows ICC scores using path1 or path2 as the reference standard for the top 30 performing algorithms in terms of average ICC score. In this case, the ICC was generally higher with path2 as the reference than with path1 as the reference. This trend is the reverse of what was observed for the highest performing PK algorithms, in which comparisons with path1 typically resulted in higher PK. Figure 5(b) focuses on the average ICC for the top 30 performers. The range in average ICC was 0.904 to 0.938 across the 30 algorithms. The figure also shows that the first-ranked ICC algorithm (ICC = 0.938 [0.913, 0.956]) was statistically superior to the 26th-ranked algorithm (95% CI [0.878, 0.928]) and all subsequent lower-ranked algorithms. The second-ranked algorithm (95% CI [0.906, 0.957]) was statistically superior to the 29th-ranked algorithm (95% CI [0.866, 0.933]), such that a difference of about 0.031 in the ICC was statistically significant for the top performing algorithms. This ICC difference needed for statistical significance was substantially larger than the approximately 0.006 needed for PK significance. However, looking at the scatter plot of PK scores versus ICC scores in Fig. 6, we see that the rankings under the two metrics were fairly consistent in that high performers in PK tended to be high performers in ICC as well.

3.4. Patch-Based Analysis

Figures 7 and 8 show the patches for which the patch-based MSE was the highest and the lowest, respectively, along with the average algorithm TC score (AvgScore). The algorithms performed poorly for the examples shown in Fig. 7, overestimating TC for the region of closely packed benign acini seen in the sclerosing adenosis of Fig. 7(a) and in the patch depicting a high number of tumor-associated inflammatory cells in Fig. 7(b). The algorithms underestimated TC for the lobular carcinoma in Fig. 7(c), which is characterized by sheets of noncohesive cells, with nuclei only slightly larger than inflammatory cells, that do not form tubules or solid clusters. TC was also consistently underestimated in the apocrine carcinoma depicted in Fig.
7(d), which had markedly abundant cytoplasm such that the surface area of the tumor cells is significantly larger than the surface area of the nuclei. On the other hand, the algorithms performed quite well for the patches depicting benign, completely normal breast lobules in acellular stroma, shown in Figs. 8(a) and 8(b), and for the malignant patches in Figs. 8(c) and 8(d), which show cohesive residual tumor cells with a high nuclear-cytoplasmic ratio encompassing the majority of the surface area. In these cases, the tumor-stroma interface was well delineated, and the stroma contained a minimal number of inflammatory cells.

4. Discussion

The submitted algorithms generally performed quite well in assessing cancer cellularity for H&E breast cancer tumor patches, with the majority, 62/100 submitted algorithms, having PK scores greater than 0.90 on a scale of 0.0 to 1.0. The top performing algorithms had PK comparable to that of path1 and path2, the pathologists who had an average interrater PK of 0.927 [0.914, 0.940] on the test dataset. This indicates that a range of different deep learning approaches (e.g., ResNet50, squeeze-and-excitation ResNet50, DenseNet, Xception, Inception, and ensembles of architectures) may be able to perform similarly to pathologists in ranking pairs of slide patches in terms of cellularity. A similar trend was observed with the ICC metric, in which 50/100 algorithms had mean ICC performance above 0.892, the average interrater ICC performance on the test dataset. This ICC performance again suggests that a range of deep learning techniques can produce cellularity scores similar to those of the pathologists participating in this study, such that automated cancer cellularity scoring may be a reasonable AI/ML application to consider. The value of a successful AI/ML implementation could be in streamlining the assessment of residual cancer burden in breast and other cancers and in reducing the variability in cellularity scoring compared with that of pathologists.

While the challenge results are encouraging, this is an early stage study that simply indicates that some of the better performing algorithms may have merit for further optimization and testing. Algorithm performance would need to be confirmed on a much larger and more diverse dataset to verify both the algorithm performance and consistency with pathologist interpretation across different patch types. Such a dataset should consist of images acquired with different scanners and at different sites so that it would be representative of the image quality observed in clinical practice. This challenge included only images scanned using a single WSI scanner and from a single site. In addition, our reference standard was limited to two pathologists, and these pathologists exhibited variability in their TC scores, indicating that a larger study should include a larger, more representative group of pathologist readers to better account for reader variability.

While overall performance was good for the top performing algorithms, it was observed that the AI/ML algorithms as an entire group tended to perform well or poorly for some patches. Figures 7 and 8 give some insight into the errors made by the algorithms. Figure 8 shows examples of "typical" appearing patches where the algorithms tended to do well, in terms of low average MSE with the pathologists. The zero cellularity patches in Figs.
8(a) and 8(b) show the classical appearance of normal breast tissue, where epithelial cells form ducts and are surrounded by regions of stroma, while the high cellularity patches in Figs. 8(c) and 8(d) contain dense regions of randomly arranged malignant epithelial cells. The patches in Fig. 7 caused the most difficulty for the submitted algorithms in general. Figure 7(a) shows cellularity in a region derived from a patient with adenosis. While this is benign, the dense concentration of epithelial cells seems to have been mistaken for cancer by many of the algorithms, leading to high TC scores compared with the pathologists' scores. Similarly, the high concentration of tumor infiltrating lymphocytes in Fig. 7(b) led to an overestimation of cellularity by the algorithms. In Fig. 7(c), the tumor cells in the lobular carcinoma are distorted and noncohesive, whereas in Fig. 7(d) the effect of the NAT led to a high cytoplasm-to-nuclei ratio; both characteristics caused the algorithms to underestimate cellularity. These figures suggest that the challenge algorithms, as a group, performed relatively well on easier patches (Fig. 8) and struggled on more difficult patches (Fig. 7), in which pathologists may benefit most from an AI/ML tool. The errors also demonstrate the degree of variability in tumor cell properties across breast cancer cases treated with NAT and show that large and representative datasets are needed to train and evaluate models for DP implementation. Algorithm evaluation with large datasets can also serve to document the types of cases in which AI/ML performs well and those types that are problematic.

Ensemble methods, which combine the output of multiple trained neural networks into a single output, have become a common approach among challenge participants for improving AI/ML algorithm performance. The same was true for the BreastPathQ Challenge, in which most of the teams used an ensemble of deep learning algorithms instead of limiting themselves to a single deep learning architecture and training. In general, the ensemble methods had higher PK performance than the nonensemble methods, and the top five algorithms in terms of PK all used an ensemble of deep learning architectures. The advantage of ensembles or combinations of algorithms leading to improved performance was also observed in the DM DREAM Challenge, in which the ensemble method significantly improved the AUC over the best single method from 0.858 to 0.895 for the binary task of cancer/no cancer presence in screening mammography.39 Our results indicate that ensembles of deep-learning architectures can improve estimation performance in independent testing compared with single classifier implementations, at the cost of additional time for training and validating the multiple neural networks.

Our initial choice for the concordance metric was Kendall's tau-b ($\tau_b$). $\tau_b$ is a common metric for concordance40 and is given as

$$\tau_b = \frac{C - D}{\sqrt{(C + D + T_a)(C + D + T_r)}},$$

where $C$ is the number of concordant pairs, $D$ is the number of discordant pairs, $T_a$ is the number of ties in the submitted algorithm results, and $T_r$ is the number of ties in the reference standard. However, one of the participants in the challenge (David Chambers, Southwest Research Institute, Team: dchambers) identified a problem with $\tau_b$ early after the initial release of the training data.
The participant found, and we confirmed through simulations, that by simply binning continuous AI/ML algorithm outputs (e.g., binning scores into 10 equally spaced bins between 0 and 1 instead of using a continuous estimate between 0 and 1), one could artificially increase the number of ties that an algorithm produces. Binning also impacted the number of concordant and discordant pairs. Based on our simulation studies, we found that binning decreased the number of concordant pairs somewhat but led to a much larger decrease in the number of discordant pairs because regions having similar TC scores are, in general, more difficult to differentiate than regions having large differences in TC. Binning had a relatively small impact on the denominator, such that the overall effect was to increase $\tau_b$ compared with using continuous TC estimates or even smaller bin sizes. To prevent the possibility of the challenge results being manipulated through the binning of algorithm outputs, we revised our initial concordance endpoint to use the PK metric, which does not suffer from this shortcoming. Increasing algorithm ties by binning still impacts $C$ and $D$, but the large reduction in $D$ reduced the PK denominator to a larger degree than the numerator, such that binning algorithm estimates tends to reduce PK instead of improving it.

As described in Sec. 2.1, path1 provided all of the reference label scores for the training and validation data. Figure 4(a) shows that test PK performance for an algorithm was consistently higher with path1 as the reference standard than with path2 for almost all of the top 30 performers, although the error bars largely overlap. One possible explanation for this consistent difference is that the participants may have been able to tune their algorithms to path1's TC scores during the training and validation phases since path1 provided the reference labels for these datasets. Although PK was not explicitly used as part of the loss function for algorithm training by any participants, it is likely that they selectively submitted algorithms during the test phase that produced higher PK performance in the training and validation phases. It is not surprising to see better PK performance for path1 compared with path2 since path1 was the reference labeler for all three datasets. Interestingly, the trend was opposite for the ICC. Figure 4(b) shows algorithm ICC performance for both reference labelers on the test dataset. The ICC values with path2 as the reference are larger than those with path1 as the reference for most of the top ICC-performing algorithms. Participants did not optimize their algorithms for the ICC, nor did they receive feedback on ICC performance during the course of the challenge. In addition, when using path2 as the reference, the difference in the ICC between the algorithms and path1 was statistically significant, but not vice versa. We hypothesize that this is likely a coincidence in our study due to having two different truthing pathologists and no algorithm optimization toward the ICC endpoint. For PK, the difference between the algorithms and the individual pathologists failed to reach statistical significance. This result suggests that the ICC, in which calibration of the scores is accounted for, behaves differently than PK, a rank-based performance metric. Despite this, many of the top performing PK algorithms were also among the top ICC performers. This can be seen by studying Fig. 4, where the top three algorithms in terms of PK are also the top three in terms of ICC.
Likewise, 8 of the top 10 PK performers are among the top 10 performing ICC algorithms. We conjecture that if the challenge had returned ICC performance to the participants in the training/validation stage instead of PK, Fig. 4(b) would likely have shown better ICC performance for path1 over path2 because the participants would have adjusted their submissions to those with higher ICCs on the training and validation datasets. Therefore, we believe it is important to consider what performance feedback is provided to participants in an AI/ML challenge since this can impact which models are submitted. The results also indicate a limitation of the challenge: only a single pathologist provided the reference TC scores for the training and validation datasets. This suggests that it is reasonable to collect reference information from multiple readers for training and validation datasets in addition to the test data, especially for estimation tasks in which reader variability is expected to be high. This could reduce overfitting of results to a single truther and potentially produce more generalizable algorithm performance. The advantage of utilizing multiple truthers for all data in a challenge still needs to be weighed against the time and costs associated with collecting this additional information.

5. Conclusion

The SPIE-AAPM-NCI BreastPathQ Challenge showed that the better performing AI/ML algorithms submitted as part of the challenge were able to approach the performance of the truthing pathologists for cellularity assessment and that they may have utility in clinical practice by improving efficiency and reducing reader variability if they can be validated on larger, clinically relevant datasets. The BreastPathQ Challenge was successful because experts in multiple fields worked together on the Organizing Committee. This enabled participants to quickly understand the basics of the task, download the data, develop their algorithms, and receive efficient feedback during the training and validation phases. The BreastPathQ Challenge information is accessible on the Grand Challenge website.41 The data used in the challenge, including the WSI scans and additional clinical information related to each patient, can be found on The Cancer Imaging Archive (TCIA).42

6. Appendix A: PK Performance by Team

Table of the best average PK results and corresponding ICC scores for each participating team, along with the teams' members, affiliations, and a brief description of their submitted algorithms.

7. Appendix B: BreastPathQ Challenge Group Members

List of the BreastPathQ Challenge Group members considered as co-authors on this manuscript.

Table 2. List of registered teams who submitted a valid test submission to BreastPathQ, with a brief summary of each team's submitted algorithms along with their best performing average PK and corresponding average ICC scores. Note that teams were allowed to submit up to three algorithms in the BreastPathQ test phase.
n.p., not provided; n.a., not applicable.

Table 3. List of BreastPathQ Challenge Group members considered to be coauthors of this paper. The table is in alphabetical order, separated into challenge organizers, pathologists, and participants.
Disclosures

Reported disclosures for individual members of the BreastPathQ Challenge Group are listed in Appendix B.

Acknowledgments

We would like to thank Diane Cline, Lillian Dickinson, and SPIE; Dr. Samuel Armato and the AAPM; and the NCI for their help in organizing and promoting the challenge. The data were collected at the Sunnybrook Health Sciences Centre, Toronto, Ontario, as part of a research project funded by the Canadian Breast Cancer Foundation (Grant No. 319289) and the Canadian Cancer Society (Grant No. 703006). The mention of commercial products, their sources, or their use in connection with material reported herein is not to be construed as either an actual or implied endorsement of such products by the Department of Health and Human Services.

References
1. A. M. Thompson and S. L. Moulder-Thompson, “Neoadjuvant treatment of breast cancer,” Ann. Oncol. 23(Suppl. 10), x231–x236 (2012). https://doi.org/10.1093/annonc/mds324
2. C. K. Park, W.-H. Jung, and J. S. Koo, “Pathologic evaluation of breast cancer after neoadjuvant therapy,” J. Pathol. Transl. Med. 50(3), 173–180 (2016). https://doi.org/10.4132/jptm.2016.02.02
3. S. Kumar et al., “Study of tumour cellularity in locally advanced breast carcinoma on neo-adjuvant chemotherapy,” J. Clin. Diagn. Res. 8(4), FC09–FC13 (2014). https://doi.org/10.7860/JCDR/2014/7594.4283
4. R. Rajan et al., “Change in tumor cellularity of breast carcinoma after neoadjuvant chemotherapy as a variable in the pathologic assessment of response,” Cancer 100(7), 1365–1373 (2004). https://doi.org/10.1002/cncr.20134
5. C. Yau et al., “Residual cancer burden after neoadjuvant therapy and long-term survival outcomes in breast cancer: a multi-center pooled analysis,” in Proc. 2019 San Antonio Breast Cancer Symp., 12–13 (2019).
6. W. F. Symmans et al., “Measurement of residual breast cancer burden to predict survival after neoadjuvant chemotherapy,” J. Clin. Oncol. 25(28), 4414–4422 (2007). https://doi.org/10.1200/JCO.2007.10.6823
7. A. Madabhushi and G. Lee, “Image analysis and machine learning in digital pathology: challenges and opportunities,” Med. Image Anal. 33, 170–175 (2016). https://doi.org/10.1016/j.media.2016.06.037
8. M. L. Mendelsohn et al., “Morphological analysis of cells and chromosomes by digital computer,” Methods Inf. Med. 4, 163–167 (1965). https://doi.org/10.1055/s-0038-1636244
9. R. C. Bostrom and W. G. Holcomb, “CYDAC—a digital scanning cytophotometer,” Proc. IEEE 51(3), 533 (1963). https://doi.org/10.1109/PROC.1963.2144
10. M. K. K. Niazi, A. V. Parwani, and M. N. Gurcan, “Digital pathology and artificial intelligence,” Lancet Oncol. 20(5), e253–e261 (2019). https://doi.org/10.1016/S1470-2045(19)30154-8
11. H. R. Tizhoosh and L. Pantanowitz, “Artificial intelligence and digital pathology: challenges and opportunities,” J. Pathol. Inform. 9, 38 (2018). https://doi.org/10.4103/jpi.jpi_53_18
12. A. Serag et al., “Translational AI and deep learning in diagnostic pathology,” Front. Med. (Lausanne) 6, 185 (2019). https://doi.org/10.3389/fmed.2019.00185
13. R. Colling et al., “Artificial intelligence in digital pathology: a roadmap to routine use in clinical practice,” J. Pathol. 249(2), 143–150 (2019). https://doi.org/10.1002/path.5310
14. B. Ehteshami Bejnordi et al., “Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer,” J. Am. Med. Assoc. 318(22), 2199–2210 (2017). https://doi.org/10.1001/jama.2017.14585
15. G. Aresta et al., “BACH: Grand Challenge on breast cancer histology images,” Med. Image Anal. 56, 122–139 (2019). https://doi.org/10.1016/j.media.2019.05.010
16. M. Peikari et al., “Automatic cellularity assessment from post-treated breast surgical specimens,” Cytometry Part A 91(11), 1078–1087 (2017). https://doi.org/10.1002/cyto.a.23244
17. A. L. Martel et al., “An image analysis resource for cancer research: PIIP—Pathology Image Informatics Platform for visualization, analysis, and management,” Cancer Res. 77(21), e83–e86 (2017). https://doi.org/10.1158/0008-5472.CAN-17-0323
18. A. Goode et al., “OpenSlide: a vendor-neutral software foundation for digital pathology,” J. Pathol. Inform. 4, 27 (2013). https://doi.org/10.4103/2153-3539.119005
19. “Sedeen virtual slide viewer platform,” https://pathcore.com/sedeen (accessed April 2021).
20. “MedICI: a platform for medical image computing challenges,” https://github.com/MedICI-NCI/MedICI (accessed 2021).
21. W. D. Smith, R. C. Dutton, and N. T. Smith, “A measure of association for assessing prediction accuracy that is a generalization of non-parametric ROC area,” Stat. Med. 15(11), 1199–1215 (1996). https://doi.org/10.1002/(SICI)1097-0258(19960615)15:11<1199::AID-SIM218>3.0.CO;2-Y
22. P. Virtanen et al., “SciPy 1.0: fundamental algorithms for scientific computing in Python,” Nat. Methods 17(3), 261–272 (2020). https://doi.org/10.1038/s41592-019-0686-2
23. P. E. Shrout and J. L. Fleiss, “Intraclass correlations: uses in assessing rater reliability,” Psychol. Bull. 86(2), 420–428 (1979). https://doi.org/10.1037/0033-2909.86.2.420
24. M. Gamer et al., “Package ‘irr’: various coefficients of interrater reliability and agreement,” (2012).
25. C. Szegedy et al., “Inception-v4, Inception-ResNet and the impact of residual connections on learning,” in Thirty-First AAAI Conf. Artif. Intell. (2017).
26. K. He et al., “Deep residual learning for image recognition,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90
27. G. Huang et al., “Densely connected convolutional networks,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 4700–4708 (2017). https://doi.org/10.1109/CVPR.2017.243
28. F. Chollet, “Xception: deep learning with depthwise separable convolutions,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 1251–1258 (2017). https://doi.org/10.1109/CVPR.2017.195
29. K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” (2014). https://arxiv.org/abs/1409.1556
30. J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745
31. O. Russakovsky et al., “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vision 115(3), 211–252 (2015). https://doi.org/10.1007/s11263-015-0816-y
32. J. C. Caicedo et al., “Nucleus segmentation across imaging experiments: the 2018 Data Science Bowl,” Nat. Methods 16(12), 1247–1253 (2019). https://doi.org/10.1038/s41592-019-0612-7
33. I. Goodfellow et al., “Generative adversarial nets,” in Adv. Neural Inf. Process. Syst., 2672–2680 (2014).
34. M. Haindl and S. Mikeš, “Unsupervised image segmentation contest,” in 22nd Int. Conf. Pattern Recognit., 1484–1489 (2014).
35. C. Szegedy et al., “Going deeper with convolutions,” in IEEE Conf. Comput. Vision and Pattern Recognit. (2015). https://doi.org/10.1109/CVPR.2015.7298594
36. O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” Lect. Notes Comput. Sci. 9351, 234–241 (2015). https://doi.org/10.1007/978-3-319-24574-4_28
37. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proc. IEEE Conf. Comput. Vision and Pattern Recognit., 3431–3440 (2015).
38. X. Geng, “Label distribution learning,” IEEE Trans. Knowl. Data Eng. 28(7), 1734–1748 (2016). https://doi.org/10.1109/TKDE.2016.2545658
39. T. Schaffter et al., “Evaluation of combined artificial intelligence and radiologist assessment to interpret screening mammograms,” JAMA Network Open 3(3), e200265 (2020). https://doi.org/10.1001/jamanetworkopen.2020.0265
40. M. G. Kendall, Rank Correlation Methods, Griffin, Oxford (1948).
41. “SPIE-AAPM-NCI BreastPathQ: Cancer Cellularity Challenge 2019,” http://breastpathq.grand-challenge.org/ (accessed April 2021).
42. A. L. Martel et al., “Assessment of residual breast cancer cellularity after neoadjuvant chemotherapy using digital pathology [data set],” The Cancer Imaging Archive (2019).
Biography

Nicholas Petrick is a deputy director for the Division of Imaging, Diagnostics and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration. He received his PhD from the University of Michigan in electrical engineering systems and is a fellow of AIMBE and SPIE. His current research focuses on quantitative imaging, medical AI/ML, and the development of robust assessment methods for a range of medical imaging hardware and AI/ML tools.

Shazia Akbar was a postdoctoral fellow at Sunnybrook Research Institute, fully affiliated with Medical Biophysics, University of Toronto. She joined Altis Labs, Inc., in 2019 as the lead machine learning engineer and has since explored applications of deep learning for lung cancer risk assessment. Her research focuses on medical image analysis, applications of AI in medicine, and semi-supervised learning.

Kenny H. Cha is an assistant director in the Division of Imaging, Diagnostics, and Software Reliability within the U.S. Food and Drug Administration, Center for Devices and Radiological Health. He received his BSE and MSE degrees and his PhD from the University of Michigan in biomedical engineering. His research interests include artificial intelligence, machine learning, and deep learning for medical data, computer-aided diagnosis, and radiomics, with a focus on performance assessment.

Sharon Nofech-Mozes received her medical degree in Israel. She trained in anatomic pathology and completed fellowships in breast and gynecologic pathology. She is an associate professor in the Department of Laboratory Medicine and Pathobiology at the University of Toronto and has been a staff pathologist at Sunnybrook Health Sciences Centre since 2007. Her academic interest is in the area of prognostic and predictive markers in breast cancer, particularly in ductal carcinoma in situ.

Berkman Sahiner received his PhD in electrical engineering and computer science from the University of Michigan, Ann Arbor, and is a fellow of AIMBE and SPIE. At the Division of Imaging, Diagnostics, and Software Reliability at FDA/CDRH/OSEL, he performs research related to the evaluation of medical imaging and computer-assisted diagnosis devices, including devices that incorporate machine learning and artificial intelligence. His interests include machine learning, computer-aided diagnosis, image perception, clinical study design, and performance assessment methodologies.

Marios A. Gavrielides was a staff scientist at the FDA's Center for Devices and Radiological Health/Office of Engineering and Laboratory Science. He joined AstraZeneca in August 2020 as a diagnostic computer vision leader. His research focuses on the development and assessment of artificial intelligence/machine learning (AI/ML) methods toward improved cancer detection, diagnosis, and prediction of patient outcomes. Recent applications include the classification of ovarian carcinoma histological subtypes and AI/ML-based prediction of targeted treatment response.

Jayashree Kalpathy-Cramer is a director of the QTIM lab at the Athinoula A. Martinos Center for Biomedical Imaging at MGH and an associate professor of radiology at MGH/Harvard Medical School. She received her PhD in electrical engineering from Rensselaer Polytechnic Institute, Troy, New York. Her lab works at the intersection of machine learning and healthcare. Her research interests span the spectrum from algorithm development to clinical applications in radiology, oncology, and ophthalmology.
She is also interested in issues of bias and brittleness in AI and assessments of algorithms for safe and ethical use.

Karen Drukker is a research associate professor at the University of Chicago, where she has been involved in medical imaging research for 20+ years. She received her PhD in physics from the University of Amsterdam. Her research interests include machine learning applications in the detection, diagnosis, and prognosis of breast cancer and, more recently, of COVID-19 patients, focusing on rigorous training/testing protocols, generalizability, and performance evaluation of machine learning algorithms.

Anne L. Martel is a senior scientist at Sunnybrook Research Institute and a professor in medical biophysics at the University of Toronto. She is also a fellow of the MICCAI Society, a senior member of SPIE, and a Vector Faculty Affiliate. Her research program is focused on medical image and digital pathology analysis, particularly on machine learning for segmentation, diagnosis, and prediction/prognosis. In 2006, she cofounded Pathcore, a digital pathology software company.

The BreastPathQ Challenge Group comprises the organizing committee, the pathologists providing the reference standard scores for the challenge dataset, and all challenge participants with a valid on-time test submission. Each member is considered a coauthor on this paper, with the full list of BreastPathQ Challenge Group members found in Appendix B.