Porcine skin possesses many of the morphological and functional features of human skin and is considered an appropriate surrogate for human skin in a wide range of preclinical animal studies. Nevertheless, structural differences between the two may affect optical measurements of tissue health. It is therefore important to understand these differences in optical properties when translating swine data and experiments to human clinical studies. Here, we compare and contrast in vivo measurements of the optical properties of normal and burned human and porcine skin obtained using two commercial SFDI systems.
Effective burn wound management is informed by accurate severity assessment. Superficial partial-thickness burns do not require surgical intervention, while deep partial-thickness and full-thickness burn wounds necessitate skin grafting to minimize infection, contraction, and hypertrophic scarring. Visual-tactile wound assessment is subjective and error-prone, especially for inexperienced practitioners. A field- and hospital-deployable device, capable of quantifying both the extent and severity of burns, could enable rapid, objective burn severity measurement with commensurate improvement in patient outcomes. Our group has previously shown that spatial frequency domain imaging (SFDI), a non-invasive, wide-field optical imaging technique, can accurately assess burn wound severity in a porcine model of controlled, graded burn severity[1, 2]. The device employed (OxImager RS) used eight modulated wavelengths and five spatial frequencies, and its severity classification relies heavily on the reduced scattering coefficient, a measure of tissue microstructure[3]. In the work presented here, we demonstrate the burn severity prediction performance of a dramatically streamlined version of SFDI that employs a single modulated wavelength in addition to five unmodulated wavelengths. This device, known as Clarifi (Modulim, Irvine CA), is currently being refined for ruggedization and usability in environments more demanding than hospital clinics. In addition, we have developed a machine learning model capable of categorizing burn severity in a porcine model of graded burns using a reduced dataset of unprocessed calibrated reflectance images generated by the device. Outputs of the model are designed to be easily interpretable and clinically actionable, exhibiting a pixelwise cross-validation accuracy of up to 99%.
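The pixelwise classification approach described above can be illustrated with a minimal sketch: each pixel contributes a short feature vector of calibrated reflectance values (here, six channels, mirroring the five unmodulated plus one modulated wavelength of the streamlined device), and a standard classifier maps those features to a severity category. The data below are entirely synthetic and the classifier choice (a random forest) is an assumption for illustration; the abstract does not specify the model architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for calibrated reflectance data: each "pixel" is a
# 6-element feature vector (five unmodulated wavelengths + one modulated
# wavelength). Four simulated burn-severity categories, each with its own
# mean reflectance spectrum plus measurement noise.
n_pixels, n_features, n_classes = 3000, 6, 4
centers = rng.uniform(0.2, 0.8, size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=n_pixels)
X = centers[labels] + rng.normal(0.0, 0.02, size=(n_pixels, n_features))

# Pixelwise classification: train on a subset of pixels, score the rest.
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"pixelwise accuracy on synthetic data: {acc:.3f}")
```

With well-separated synthetic spectra the accuracy is near perfect; real tissue data would of course be noisier and require cross-validation across animals, as the study describes.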
Significance: Over the past decade, machine learning (ML) algorithms have rapidly become much more widespread for numerous biomedical applications, including the diagnosis and categorization of disease and injury. Aim: Here, we seek to characterize the recent growth of ML techniques that use imaging data to classify burn wound severity and report on the accuracies of different approaches. Approach: To this end, we present a comprehensive literature review of preclinical and clinical studies using ML techniques to classify the severity of burn wounds. Results: The majority of these reports used digital color photographs as input data to the classification algorithms, but recently there has been an increasing prevalence of ML approaches using input data from more advanced optical imaging modalities (e.g., multispectral and hyperspectral imaging, optical coherence tomography), in addition to multimodal techniques. The classification accuracy of the different methods is reported; it typically ranges from ∼70% to 90% relative to the current gold standard of clinical judgment. Conclusions: The field would benefit from systematic analysis of the effects of different input data modalities, training/testing sets, and ML classifiers on the reported accuracy. Despite this current limitation, ML-based algorithms show significant promise for assisting in objectively classifying burn wound severity.
KEYWORDS: Machine learning, Color imaging, Hyperspectral imaging, Data modeling, Multispectral imaging, Digital color imaging, Diffuse optical imaging, Biopsy
Accurately classifying burn severity is crucial to inform proper treatment. Here, we quantitatively compare the efficacy of machine learning (ML) burn classification algorithms using multispectral imaging versus conventional digital color imaging data as inputs. We imaged 80 porcine burns that underwent biopsy and histology for ground truth categorization into “skin graft needed” versus “no graft needed” groups. The accuracy of our ML algorithm with a transfer learning architecture was 97.5% for the multispectral model, 57.5% for the digital-color model, and 57.5% for the multispectral+digital-color model. This result strongly supports the use of multispectral imaging over digital-color imaging for burn classification.
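The multispectral-versus-digital-color comparison above can be sketched in miniature: if the information separating "graft needed" from "no graft needed" lives in spectral channels outside the visible RGB bands, a classifier restricted to RGB features cannot recover it. Everything below is a synthetic, assumed illustration (a logistic-regression stand-in, not the study's transfer-learning architecture, and fabricated channel indices), showing only the general principle.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic "pixels": 8 spectral channels. By construction, the
# graft/no-graft signal is placed only in channels 3-7 (a stand-in for
# near-infrared bands); the first three channels mimic an RGB camera
# and carry no class information.
n = 2000
y = rng.integers(0, 2, size=n)                 # 0 = no graft, 1 = graft
X = rng.normal(0.5, 0.1, size=(n, 8))
X[:, 3:] += 0.3 * y[:, None]                   # signal only beyond RGB

def fit_acc(features):
    """Train/test a linear classifier on the given feature subset."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
    return LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

acc_ms = fit_acc(X)          # all 8 "multispectral" channels
acc_rgb = fit_acc(X[:, :3])  # RGB-only view
print(f"multispectral: {acc_ms:.2f}  rgb-only: {acc_rgb:.2f}")
```

By construction the multispectral model separates the classes almost perfectly while the RGB-only model performs near chance, echoing (in a much simplified form) the 97.5% vs. 57.5% gap reported in the abstract.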