1. Introduction

Cancer remains one of the leading causes of morbidity and mortality in the world, with 19.3 million new cases of cancer diagnosed worldwide in 2020. In addition, the International Agency for Research on Cancer estimates that by 2040 the number of newly diagnosed cases will increase to 27.0 million.1 In Spain, 1.49% of newly diagnosed cancers in 2022 (4,169 of 280,101) were tumors of the brain or nervous system.1 Distinguishing pathological tissue from healthy tissue is challenging, especially with aggressive tumors such as grade IV glioblastoma (GB) that have high infiltration capabilities.2 In addition, GB has poor long-term survival rates,3 making surgery an unavoidable procedure to increase patient survival. However, the brain shifting toward the skull opening and the associated cerebrospinal fluid leakage can hinder tumor identification due to the alterations in the structure of the surrounding tissue, rendering preoperative imaging inadequate for intraoperative conditions.4 Therefore, intraoperative tools for brain tumor surgery are essential to help neurosurgeons delineate and locate the tumor. Neuronavigators are precise instruments that facilitate the real-time monitoring of surgical interventions through the use of magnetic resonance imaging (MRI) or computed tomography scans conducted prior to the procedure. Nonetheless, they present limitations in pinpointing the exact tumor location once the brain is exposed.5 Solutions that have been developed to address the issues associated with neuronavigators include the use of intraoperative MRI. This has the advantage of being able to locate the tumor after the craniotomy, thereby solving the problem of brain shift. Nevertheless, the use of MRI increases the time required for surgery and demands specific equipment during the surgical procedure.6,7 Another tool that is unaffected by the brain shift issue and operates in real time at a low cost is intraoperative ultrasound. However, the data must be interpreted by experienced users since the images are of low resolution.8,9 Fluorescent tumor markers from add-on agents such as 5-aminolevulinic acid are able to deliver highly accurate intraoperative tumor margin detection with a rapid refresh rate. Yet, these are invasive methods since they require the injection of the agent into the patient10 and present a limited ability to define the tumor margin during surgery for low-grade gliomas.11 Therefore, techniques that are faster and less invasive than the tools described previously are crucial to the success of surgical interventions. A widely used non-invasive and non-ionizing technique that requires no contact with patients is hyperspectral (HS) imaging (HSI).12 The advancement of HS sensors in the past years and the variety of available options can make it difficult to decide which HS camera to use. In particular, HS cameras are capable of capturing both spatial and spectral data using various techniques,13 scanning-based (SB) and wide-field (WF) being the typical imaging methods for HSI. First, SB approaches can acquire the spectrum for each pixel using whiskbroom (point-scanning) instruments, a line of pixels using pushbroom (line-scanning) instruments, or a wedge filter that disperses light spectrally along one dimension (wedge-scanning).
Second, WF approaches capture the whole scene in a single exposure with 2D detector arrays, either by stepping through the wavelength spectrum to complete the data cube (wavelength scan) or by acquiring the spatial and spectral information at the same time (snapshot). However, recent techniques such as snapscan cameras,14 which combine SB and WF approaches, provide compact solutions with faster acquisition times than linescan cameras while offering higher spatial and spectral resolution than snapshot sensors. It is worth noting that the most used spectral ranges in medical applications fall in the visible (VIS) (400 to 780 nm) and near-infrared (NIR) (780 to 2500 nm) spectra.12 Among the possibilities available for HSI equipment, pushbroom linescan, snapscan, and snapshot HS cameras have been used as part of intraoperative tools in several studies to differentiate brain tumors from healthy tissue in in vivo human brains.15–18 For example, Fabelo et al. developed an intraoperative acquisition system based on two pushbroom linescan cameras, a visible and near-infrared (VNIR) camera and a NIR camera, covering the spectral ranges of 400 to 1000 nm and 900 to 1700 nm, respectively.15 The intraoperative acquisition and data processing took on the order of minutes, which does not enable real-time solutions, understood as providing a live sequence of images. Furthermore, another work by Vandebriel et al. assembled a snapscan HS camera onto a surgical microscope to improve the removal of low-grade gliomas.16 The camera captured 104 spectral bands between 470 and 787 nm since it had to match the intrinsic illumination of the microscope. Even though the snapscan camera can provide high spatial resolution in a wide spectral range and reduce the acquisition time to less than 3 s for static targets,14 it requires an internal movement of a linescan sensor to acquire an HS cube, which is not suitable for real-time solutions. Moreover, recent works have developed an intraoperative system based on HSI with a pushbroom linescan and a snapshot HS camera18,19 for brain tumor detection. On one hand, the snapshot camera captures 25 spectral bands in the 660 to 950 nm spectral range with a spatial dimension of 409 × 217 pixels for each band. On the other hand, the pushbroom linescan camera acquires 369 bands between 400 and 1000 nm with a spatial dimension of 1600 pixels on each line. Despite the low number of bands acquired by the snapshot camera and its low spatial resolution, it can enable real-time solutions such as live video classification with machine learning (ML) algorithms.19 Therefore, this research study aims to compare two HS cameras that differ in how they acquire data, a pushbroom linescan and a snapshot camera, to determine their potential use for brain tissue and chromophore identification. To compare how similar their signatures are, the analysis is conducted in the NIR spectrum, specifically in the 659.95 to 951.42 nm range. The study examined ten different in vivo human brain images from the Slim Brain database20 to provide useful results for biomedical purposes. All patient brains used from the database were captured with both HS cameras to ensure similar lighting conditions for the comparisons.

2. Materials and Methods

2.1. Hyperspectral Camera Specifications

Two different HS cameras based on different acquisition techniques have been used to capture in vivo brain images. The technical specifications of both cameras, sensors, and lenses are presented in Table 1.
Table 1. Sensor and camera optics specifications of the different HS cameras used. The last five parameters are fixed during the acquisition of images in the operating room.
On one hand, the snapshot HS camera (MQ022HG-IM-SM5X5-NIR, Ximea GmbH, Münster, Germany) has a complementary metal oxide semiconductor (CMOS) sensor holding a 5 × 5 mosaic pattern with a pixel size of 5.5 μm. The analog-to-digital converter (ADC) of the sensor provides images with a resolution of 8 bits. In addition, a long-pass filter (FELH0650, Thorlabs, Inc., Newton, New Jersey, United States) with a 650-nm cut-on wavelength is placed in front of the lens to remove non-negligible secondary harmonics, which were determined by the manufacturer in the spectral response curves during sensor production. Although the sensor resolution is 2048 × 1088 pixels, the active area of the sensor with the built-in spectral filters is 2045 × 1085 pixels. This reduced area is called the active filter zone, which omits the last three rows and last three columns of the total sensor resolution. Each capture with this snapshot camera produces a single 2D image containing both the spatial and the spectral information due to the 5 × 5 mosaic pattern. Each mosaic tile contains approximately the same spatial pixel at 25 different wavelength bands within the NIR spectrum, specifically in the 660 to 950 nm spectral range, and these bands are spaced with a mean separation of approximately 12 nm. Hence, the mosaic pattern reduces the 2D output spatial resolution by a factor of 5 to obtain a 3D HS cube. In particular, the 2045 × 1085 pixel image generated by the sensor is arranged into a 409 × 217 × 25 HS cube to perform the spectral analysis, as sketched in the code example below. The main advantage of the snapshot camera is its capability for real-time solutions, understood as processing a sequence of HS images to provide a live video of the scene. On the other hand, the second camera is based on pushbroom linescan technology (Micro-Hyperspec® E-Series, HeadWall Photonics Inc., Bolton, Massachusetts, United States) and holds a scientific CMOS sensor with a 16-bit ADC. The sensor acquires a single spatial line of 1600 pixels with 394 wavelengths of information. Thus, the camera needs to be moved with an actuator to scan an image with as many lines as desired. All in vivo brain captures used in this study were scanned with 500 lines, producing images with a spatial resolution of 1600 × 500 pixels and 394 spectral bands. Besides, the exposure time and frame period were set to 150 and 160 ms, respectively. Although the sensor is sensitive in the 365 to 1004 nm spectral range, the wavelengths acquired outside the 400 to 1000 nm spectral range need to be removed, as specified by the manufacturer. Eliminating such bands results in 369 effective wavelength bands separated by approximately 1.6 nm from each other. It is worth noting that captures from both cameras were cropped spatially to help neurosurgeons during the labeling process. Therefore, the spatial resolutions of the HS linescan and snapshot captures are smaller than 1600 × 500 or 409 × 217 pixels, respectively. Although the pushbroom linescan camera provides more spatial resolution and spectral information, it is not suitable for real-time solutions due to the scanning procedure: scanning a brain image requires 1 min and 40 s, whereas an image captured with the HS snapshot camera takes 100 ms.
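As an illustration, the following minimal NumPy sketch shows one plausible way to rearrange a raw 2045 × 1085 mosaic frame into a 409 × 217 × 25 cube. It assumes that each non-overlapping 5 × 5 tile holds the 25 spectral samples of approximately one spatial location; the band ordering inside a tile is an assumption here and, in practice, follows the sensor layout documented by the manufacturer.

```python
import numpy as np

def mosaic_to_cube(frame: np.ndarray, pattern: int = 5) -> np.ndarray:
    """Rearrange a 2D snapshot mosaic frame into a (rows, cols, bands) cube.

    Assumes every non-overlapping pattern x pattern tile contains the
    spectral samples of approximately one spatial location.
    """
    rows, cols = frame.shape
    r, c = rows // pattern, cols // pattern   # 2045 // 5 = 409, 1085 // 5 = 217
    tiles = frame[: r * pattern, : c * pattern].reshape(r, pattern, c, pattern)
    # Group the two intra-tile axes at the end and flatten them into bands.
    return tiles.transpose(0, 2, 1, 3).reshape(r, c, pattern * pattern)

frame = np.zeros((2045, 1085))   # placeholder for a raw mosaic frame
cube = mosaic_to_cube(frame)
print(cube.shape)                # (409, 217, 25)
```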
2.2. Acquisition System

The acquisition system used to gather the in vivo brain images is presented in Fig. 1. Starting from the left, the HS 5 × 5 mosaic snapshot camera is located inside a 3D-printed white case. Inside the case, there is a servomotor employed to help focus the camera. An external light source with a 150 W 21 V EKE halogen lamp (MI150, Dolan-Jenner, Boxborough, Massachusetts, United States) and a dual gooseneck fiber optic (EEG28, Dolan-Jenner) were used to illuminate the brains. While the housing of the light source does not appear in the image, the dual fiber optics are shown glowing in Fig. 1. The system has been utilized as a real-time augmented reality (AR) application through a laser imaging detection and ranging (LiDAR) sensor equipped with an RGB camera. This technology enables the collection of geometric data of the brain surface by extracting a point cloud from the scene using depth and RGB information. The manufacturer indicates that the depth accuracy of the system depends on reflectivity and light conditions and is effective within a range of 0.5 to 3.86 m. Specifically, the random error, measured as the standard deviation, is ≤17 mm, whereas the typical systematic error is <11 mm plus 0.1% of the measured distance, provided there is no multi-path interference. By combining depth information from the point cloud, ML classification results on HS data, and the RGB information from the LiDAR camera in an AR interface, the effectiveness of surgical field exploration and tumor delineation can be enhanced.19 In this study, the LiDAR is specifically used to measure the distance between the acquisition system and the brain surface. This measurement helps to focus the system accurately by determining the distance to a single point on the brain surface. Given that the linescan camera has a fixed focusing distance of 60 cm, the LiDAR measurement is necessary to ensure focused captures with the linescan camera. Therefore, all images were taken at a distance of 60 cm to ensure a fair comparison between the two cameras, taking into account the limited focus distance of the linescan camera. Furthermore, the motorized linear stage (X-LRQ300HL-DE51, Zaber Technologies Inc., Vancouver, Canada) below the imaging sensors is mainly used to move the HS linescan camera. This movement allows the scanning of the brain image to compose an HS cube with the desired spatial resolution. The sensors and fiber optics are all aligned in the same plane, which is perpendicular to that of the linear stage holding them. Although not included in Fig. 1, two additional motorized linear stages are used to tilt and provide height to the stage.

2.3. Data Processing

All HS images, regardless of the camera used, are pre-processed using almost the same procedures, which are presented in Table 2.

Table 2. Pre-processing steps performed on the images captured with the different HS cameras used.
The first step is to obtain the reflectance information of the material interrogated by calibrating the raw information obtained with the sensor. Equation (1) eliminates the effect of the HS sensor and the lighting conditions from the captured raw images to obtain the reflectance information of the sample:

ρ = (I_s − I_d) / (I_w − I_d),  (1)

where I_s is the captured raw data of the sample, I_d is the raw dark reference captured with the lens cap in front of the camera lens, and I_w is the raw white reference intensity reflected over a Lambertian diffuse target with 95% reflectance (SG 3151-U, SphereOptics GmbH, Herrsching am Ammersee, Germany). All white references are captured under the same conditions as the images, meaning that each calibrated image needs an I_w captured with the same working distance and tilt angle. While I_d is used to remove the ambient temperature and electrical noise introduced into the measurement, I_w reduces the influence of the light sources on the sample. It should be noted that, prior to imaging the brains, the exposure times of both cameras were evaluated to ensure that no measurements of I_w were saturated.

The second step is related to the formation of the HS cubes. The snapshot sensor captures 2D images that need to be rearranged into a 3D cube, which has a spatial dimension five times smaller than the 2D images due to the mosaic pattern of the sensor described in Sec. 2.1. However, the pushbroom camera has a simpler process to form an HS cube. Such a process is called line stitching, which requires a precise linear actuator to move the HS camera so that adjacent spatial lines are joined while avoiding overlap.

The spectral correction is the third step and only applies to the snapshot camera. This correction is necessary to correct the response curves of the HS snapshot sensor, which presents crosstalk between adjacent pixels that varies with the angle of the light incident on the sensor.22 To correct this effect, the manufacturer provides a spectral correction matrix that modifies the response of the sensor to obtain ideal Gaussian curves. The spectral correction process is described in Eq. (2):

ρ_c = SCM · ρ,  (2)

where ρ_c is the spectrally corrected reflectance data, ρ is the reflectance of the sample obtained with Eq. (1), and SCM is the spectral correction matrix.

The fourth step is the removal of bands, which only applies to the pushbroom camera. As described in Sec. 2.1, the sensor captures more information than can be effectively used. Briefly, this process eliminates the spectral bands outside the 400 to 1000 nm spectral range.

The fifth step consists of applying the noise-filtering algorithm of HySime, which was presented by Bioucas-Dias et al.21 We specifically use the noise estimation procedure of HySime and assume that the noise in the HS cube is additive. In such a procedure, HySime deduces the noise present in an HS cube by making the assumption that the reflectance at a specific band can be effectively characterized through linear regression using the remaining bands. After estimating the noise in an HS cube, we employ this estimation to subtract the noise from the original HS cube, thus carrying out the noise-filtering process for each image independently.

The last step is a normalization process that homogenizes the spectral signatures to help compare captures. For this study, the normalized reflectance is obtained using a min-max normalization for every spectral pixel independently, as described in Eq. (3), which forces data into the 0 to 1 range:

ρ_n = (ρ − min(ρ)) / (max(ρ) − min(ρ)),  (3)

where ρ is the spectrally corrected reflectance obtained with Eq. (2) for the HS snapshot camera or the reflectance obtained with Eq. (1) for the HS linescan camera, and the minimum and maximum are taken over the bands of each spectral pixel. For further clarification, in Eq. (3), the noise filtering is applied to either ρ or ρ_c before applying the normalization.
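As a minimal sketch of this pre-processing chain, assuming (rows, cols, bands) NumPy arrays and a 25 × 25 correction matrix applied by left-multiplication (the exact SCM convention is not stated in the text), the steps of Eqs. (1)–(3) and the HySime-style noise estimation could be written as follows.

```python
import numpy as np

def calibrate(raw, white, dark):
    """Eq. (1): reflectance from raw sample, white-, and dark-reference images."""
    return (raw - dark) / (white - dark)

def spectral_correction(cube, scm):
    """Eq. (2): apply the manufacturer's spectral correction matrix to every
    spectral pixel of a (rows, cols, bands) snapshot cube."""
    return np.einsum('ij,rcj->rci', scm, cube)

def hysime_noise(pixels):
    """Noise estimation in the spirit of HySime: regress each band on the
    remaining ones and keep the residual as the additive noise estimate.
    pixels: (n_pixels, n_bands) matrix of reflectance values."""
    noise = np.empty_like(pixels)
    for k in range(pixels.shape[1]):
        others = np.delete(pixels, k, axis=1)
        beta, *_ = np.linalg.lstsq(others, pixels[:, k], rcond=None)
        noise[:, k] = pixels[:, k] - others @ beta
    return noise

def min_max_normalize(cube):
    """Eq. (3): min-max normalize every spectral signature independently."""
    lo = cube.min(axis=-1, keepdims=True)
    hi = cube.max(axis=-1, keepdims=True)
    return (cube - lo) / (hi - lo)
```

Denoising then amounts to subtracting hysime_noise(pixels) from the flattened cube before min_max_normalize is applied, matching the order of the steps listed in Table 2.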
2.4. In Vivo Human Brain Images

For this study, we used 10 in vivo brains from adult patients who provided informed consent prior to surgery. These 10 patients were selected because two or more sterilized rubber rings were placed by neurosurgeons prior to acquiring the HS images. Different colors were used for tissue identification, with green and black used for healthy and pathological tissues, respectively. These rubber rings can be seen in the pseudo-RGB images of Fig. 2 for patient ID 190. In addition, the same images for the rest of the patients are presented in Fig. S1 in the Supplementary Material. As indicated previously, images of patients with different pathologies have been used. More specifically, patients with IDs 177, 183, 184, 190, 193, and 203 have GB with isocitrate dehydrogenase (IDH) non-mutated. Patient ID 185 has grade II astrocytoma with IDH mutation, patient ID 192 has a metastatic testicular tumor, and patient ID 194 has a metastatic lung carcinoma tumor. Finally, patient ID 201 exhibits pseudoprogression in a GB with IDH non-mutated, indicating apparent GB progression likely attributable to treatment effects rather than actual tumor growth. The guidelines of the Declaration of Helsinki have been followed, and the acquisition of HS images has been approved by the Research Ethics Committee of Hospital Universitario 12 de Octubre, Madrid, Spain (protocol code 19/158, May 28, 2019). All patients had the area to be operated on shaved prior to the scalp incision. Then, a high-speed drill was employed to make burr holes in the skull, which were used to insert a cranial drill to perform the craniotomy. This procedure extracts a bone flap to expose the dura of the patient; the durotomy is then performed by cutting the dura with a knife to uncover the brain surface. Then, both HS cameras proceeded to acquire the in vivo brain surface. Using the rubber rings ensures that the data of both cameras come from the same spatial location on the brain surface. Furthermore, this procedure is similar to the one employed by Mühle et al.,23 who used a plastic cursor to delimit a bordered region of interest (ROI) over biological organs to compare different HS cameras. In addition, in that study, the biomedical experts selected similar areas inside the plastic cursor to perform their analysis. In our case, the areas we selected inside the rubber rings have a size of 5 × 5 pixels since bigger ROIs included pixels outside the rings in the snapshot images. Although in the previous study Mühle et al. took 10 HS snapshot images to average them, in our study this was not feasible since the in vivo human brain is in motion due to the heartbeat. Moreover, the linescan camera is inevitably affected by the motion of the brain during the scanning procedure, hindering the possibility of averaging multiple captures. For these reasons, one HS snapshot and one HS linescan image were taken for each patient. A single linescan image takes 1 min and 40 s, whereas a snapshot measurement takes 100 ms. Notice that specular reflections have not been removed, as neither the cameras nor the light source had polarizers.
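As context for the ROI-based analysis described in Sec. 2.5, the following NumPy sketch draws 25 pixels at random inside a ring ROI and computes the mean and standard deviation spectral signatures. The boolean-mask representation of the ROI and the fixed random seed are illustrative assumptions, not details given in the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # fixed seed only for reproducibility here

def roi_mean_signature(cube, mask, n_pixels=25):
    """Randomly draw n_pixels inside a boolean ROI mask of a
    (rows, cols, bands) cube and return the mean and standard deviation
    spectral signatures over those pixels (mask must hold >= n_pixels)."""
    rows, cols = np.nonzero(mask)
    chosen = rng.choice(rows.size, size=n_pixels, replace=False)
    pixels = cube[rows[chosen], cols[chosen], :]   # (n_pixels, bands)
    return pixels.mean(axis=0), pixels.std(axis=0)

# Hypothetical usage with a snapshot-sized cube and a 5 x 5 ring interior:
cube = np.random.rand(409, 217, 25)
mask = np.zeros((409, 217), dtype=bool)
mask[100:105, 50:55] = True
mean_sig, std_sig = roi_mean_signature(cube, mask)
```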
Although the patients exhibit diverse pathologies, the spectral comparisons are conducted on an intra-patient basis. This entails comparing the spectral signatures of both cameras for each patient individually.

2.5. Spectral Similarity Metrics

A way to compare both HS cameras is to employ spectral similarity metrics, which can assess how different reflectances are related to each other. Agarla et al. examined 14 frequently used measures and grouped them into five categories based on the type of error they evaluate,24 including the mathematical definition and implementation of all metrics. Furthermore, as stated by Agarla et al., selecting one measure from each of the groups they describe can be sufficient to assess spectral similarity.24 For that reason, we decided to use the root mean square error (RMSE), the goodness-of-fit coefficient (GFC),25 and the spectral angle mapper (SAM)26 metrics. RMSE can range from 0 to 1 since the maximum value of our data is 1, with an RMSE of 0 indicating perfect similarity. GFC values lie in the 0 to 1 range, where a value of 1 indicates complete similarity. SAM values also range from 0 to 1, with values closer to 0 indicating high similarity and values closer to 1 indicating low similarity. These metrics were chosen since they are applicable to the spectral domain, can be used as loss functions (except SAM, since angular metrics are not typically used to measure losses), and do not need extra requirements to be computed. Furthermore, the spectral signatures of the HS linescan camera are considered the reference spectra, whereas those obtained with the HS snapshot camera are considered the arbitrary signal. This assumption is grounded on the fact that the HS linescan camera gathers greater spatial and spectral data compared with the snapshot camera. We compute and compare the mean spectral signatures as illustrated in Fig. 3. For each patient, we independently obtain the mean spectral signatures of the pixels in the ROIs located inside the rubber rings, as shown on the left side of Fig. 3. Since the HS snapshot camera captures less spatial resolution, fewer pixels are included in the green and red ROIs inside the rubber rings compared with those captured by the HS linescan. Specifically, the ROIs created for all snapshot captures have a 5 × 5 spatial dimension. Therefore, to provide a fair comparison, we created bigger ROIs for the HS linescan but randomly selected 25 pixels. Then, with those 25 pixels for each tissue and camera, the mean spectral signature and standard deviation are computed, as presented in the plots in the middle of Fig. 3, which include black dashed rectangles to indicate the part of the spectrum shared by both cameras and used to compare them with the spectral similarity metrics. Finally, the RMSE, GFC, and SAM metrics are computed for each tissue using the matched wavelengths of the cameras presented in Table S1 in the Supplementary Material.
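A minimal sketch of the three metrics for a pair of 1D spectral signatures is shown below. Normalizing the SAM angle by π/2 so that it falls in the reported 0 to 1 range is our assumption, as the exact normalization is not stated in the text.

```python
import numpy as np

def rmse(ref: np.ndarray, test: np.ndarray) -> float:
    """Root mean square error between two spectral signatures."""
    return float(np.sqrt(np.mean((ref - test) ** 2)))

def gfc(ref: np.ndarray, test: np.ndarray) -> float:
    """Goodness-of-fit coefficient: 1 means identical spectral shape."""
    return float(np.abs(ref @ test) /
                 (np.linalg.norm(ref) * np.linalg.norm(test)))

def sam(ref: np.ndarray, test: np.ndarray) -> float:
    """Spectral angle mapper, mapped to [0, 1] by dividing by pi/2."""
    cos = (ref @ test) / (np.linalg.norm(ref) * np.linalg.norm(test))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)) / (np.pi / 2))
```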
2.6. Analyzing Absorbance Measurements to Identify Chromophore Absorption Peaks

To analyze the absorbance (A) spectral signatures and look for patterns that might indicate the presence of certain chromophores, we use the reflectance (ρ) measured by the cameras. A is commonly derived,27–29 for each wavelength, from ρ using Eq. (4). This expression follows from the Beer-Lambert law,30 which expresses the absorbance as A = log₁₀(I₀/I), where I₀ represents the incident light and I is the light that has passed through the sample. In our specific context, we are dealing with reflectance information as defined in Eq. (1). The maximum reflected light corresponds to I_w (analogous to what I₀ would be in the Beer-Lambert law), and I_s represents the light that has passed through the brain (similar to I in the Beer-Lambert law). To account for the noise of the sensor, we include the dark measure I_d from Eq. (1). This allows us to mitigate sensor noise, and as a result, we arrive at Eq. (4):

A = −log₁₀( (I_s − I_d) / (I_w − I_d) ) = −log₁₀(ρ).  (4)

The chromophores we attempt to identify are those that contribute the most to light absorption in the NIR region (from 800 to 2500 nm) in adult brains. As stated by Correia et al., these are hemoglobin, water, lipid, and the following cytochromes: cytochrome aa3 (Cyt aa3), cytochrome b (Cyt b), and cytochrome c (Cyt c).31 The absorption spectra of these cytochromes in their redox states between 400 and 1000 nm were obtained from the Biomedical Optics Research Laboratory (BORL) GitHub repository.32 It is worth noting that we converted the molar extinction coefficient of hemoglobin (Hb) to the absorption coefficient as specified by Prahl et al.,33 considering that whole blood contains 150 g of Hb per liter. Although this assumption may be doubtful for the measurements taken, it allows us to visually compare the spectra of all chromophores with the same units. Also, it is worth noting that the absorbance, A, and the absorption coefficient, μ_a, represent different phenomena. On one hand, A is a property of a material that measures the fraction of light that can pass through it in terms of intensity. On the other hand, μ_a is a property of the material that describes its effectiveness in absorbing light. We know from the Beer-Lambert law30 that A and μ_a are related through Eq. (5):

A = ε c l = (μ_a l) / ln(10),  (5)

where ε is the molar absorptivity, also called the extinction coefficient, with units of L mol⁻¹ cm⁻¹; c is the concentration of the solution in the sample in mol L⁻¹; l is the length of the sample that light passes through in cm; and μ_a = ln(10) ε c is the absorption coefficient in cm⁻¹. Unfortunately, l is unknown for the brain images used in this study. Therefore, we cannot convert the A measured with the HS cameras to μ_a for comparing the chromophore spectra with the absorbance measurements of the cameras. However, we can use the SAM metric described in Sec. 2.5 to try to identify the peaks of the chromophores in the measurements. Of all the metrics used in this study, SAM is the only one that focuses on the shape of the spectra rather than the numerical values.24 Although we cannot indicate the percentage concentration of each chromophore in the camera measurements, we try to identify their peaks by selecting wavelengths that include the peak wavelength and those around it. The most relevant absorption peaks of each chromophore are presented in Fig. S2 in the Supplementary Material.
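In code, the conversion of Eq. (4) from calibrated reflectance to absorbance is direct; the small epsilon guard against zero reflectance below is our addition for numerical safety, not part of the paper.

```python
import numpy as np

def absorbance(reflectance: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Eq. (4): A = -log10(rho), where rho is already calibrated via Eq. (1)
    so that the dark and white references are accounted for."""
    return -np.log10(np.clip(reflectance, eps, None))
```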
3. Results

3.1. Diffuse Reflectance Standard Measurements

To validate the spectral performance characterization of the HS cameras and enhance the reliability of the subsequent analysis of the patient data, reference measurements were conducted on the Zenith Polymer Reflectance Standard (SphereOptics GmbH, Herrsching am Ammersee, Germany), which exhibits nearly ideal Lambertian behavior with 99% diffuse reflectance. The spectral response of the polymer reference is provided by the manufacturer and is illustrated with a gray line in Fig. 4. Moreover, the Pearson correlation between the system measurements and the reference polymer signature has been used to evaluate the performance of the HS cameras. Furthermore, a miniature spectrometer (Ocean Insight, Orlando, Florida) was employed to analyze the spectral response of the polymer reference. This device is capable of measuring in the VIS and NIR spectra from 350 to 925 nm, with an ADC resolution of 16 bits. For the purposes of this study, measurements taken with the spectrometer from 400 to 925 nm are presented with a red dashed line in Fig. 4, as this includes the spectral range under analysis in the following sections. The correlation obtained when using 1913 bands from the spectrometer is 96.91%. In the same figure, the results of the measurements taken with both HS cameras are illustrated with green dashed-dotted and orange dotted lines for the linescan and snapshot cameras, respectively. The correlation obtained with the linescan camera using 369 bands is 95.58%, whereas for the snapshot camera it is 68.19% using 25 bands. Although the correlation with the snapshot camera is lower than that obtained with the spectrometer or the other HS camera, the orange line clearly demonstrates that the spectral response is relatively similar between 660 and 866 nm, with a Pearson correlation value of 95.55% using the 17 bands in that range.

3.2. Brain Tissue Measurements

The reflectance measurements obtained with both HS cameras from the 10 in vivo human brains are shown in Fig. 5. The illustrated spectral signatures have been obtained, for every camera measurement, using 25 pixels located inside the rubber rings, as shown in the images in Fig. 2. We decided to analyze the effect of normalizing the calibrated and denoised data to check how it would influence the spectral similarity metrics. Hence, the two columns to the left in Fig. 5 show data that have been calibrated and denoised, whereas the two columns to the right show the same data after normalization using Eq. (3). The spectral range analyzed to address the comparison is between 659.95 and 951.42 nm, using the 25 bands detailed in Table S1 in the Supplementary Material for each camera. When looking at the unnormalized data, similar spectral signatures are measured with both HS cameras for eight out of the 10 in vivo human brains. Note how the spectral signatures of both cameras for patients 192 and 193 are less similar compared with other patients for either the healthy or pathological tissues. Furthermore, the healthy tissue measurements from the different patients have values between 0.2 and 0.6 in most cases, excluding patient 192, whose values are between 0.4 and almost 1.0. Moreover, the reflectance values for pathological tissues range from 0.1 to 0.4 for most patients, excluding patient 185, whose values are between 0.5 and 0.8. The variations in reflectance or spectral signatures could result from the biological differences between the patients and their distinct brain tumor pathologies. Overall, the mean spectral signatures of both cameras differ more from each other once the normalization is applied. For instance, the healthy tissue measurements of patient 183 show a 0.3 reflectance difference between the spectral signatures of both cameras, whereas the unnormalized data for the same patient show a variation of less than 0.05. Such behavior can be seen for most patients and tissues, except the healthy spectral signatures of patient 192, which already exhibit a great difference in the unnormalized measurements.
In addition, we can see an increase in the standard deviation, shown in shaded colors around the mean curves, due to the normalization. It is worth noting that the location of each pixel on the curved brain surface implies illumination variations, leading to slightly different measured reflectances. This, in turn, increases the difference between pixels within the same ROI when normalizing the reflectance to a range between 0 and 1. While measurements between cameras display a consistent trend across patients and tissues, snapshot measurements show more noise after normalization compared with the smoother linescan camera measurements. This results in more pronounced peaks and valleys in the spectrum of the snapshot measurements. However, there is a common peak across most measurements at the emission wavelength of the LiDAR laser, which appears because the LiDAR was active only while capturing with the snapshot camera. Such a peak can be seen in the measurements of patient 203 in Fig. 5. Variations in the amplitude of this peak are due to the different angles at which the patients were captured. By comparing measurements with and without normalization, it is worth noting that the aforementioned peak is more pronounced once the normalization is applied. To get an overview of the measurements, Fig. 6 illustrates the averaged results of all patients for both tissues. The circular marks indicate the closest wavelengths between cameras, specified in Table S1 in the Supplementary Material. By zooming into the common spectral range of the cameras between 659.95 and 951.42 nm from the previous plots, subfigures (c) and (d) address in further detail the difference between measurements. Note the influence of the infrared peak emitted by the LiDAR, highlighted by the dashed rectangles. Such a peak is only present in the snapshot measurements since the LiDAR could only be turned off when capturing with the linescan. By normalizing the pixels from subfigures (c) and (d), we obtained the mean spectral signatures with standard deviation for both tissues in subfigures (e) and (f). The analysis comparing the absorbance measurements using both HS cameras with the absorption coefficient spectra of deoxy-hemoglobin (Hb), oxy-hemoglobin (HbO2),33 Cyt aa3, Cyt b, Cyt c,32 and mammalian fat34 is illustrated in Figs. 7 and 8. In addition, a detailed analysis of the results obtained with respect to the absorbance measurements is provided in Sec. S4 in the Supplementary Material, including Fig. S2 with all relevant absorption peaks for the chromophores under analysis.

3.3. Reflectance Spectral Similarities to Compare Both Cameras

The spectral similarities between the reflectances measured with both HS cameras are presented in Fig. 9. This figure shows raincloud plots35 for the healthy and pathological tissues in green and red colors, respectively. Raincloud plots are a useful graphical representation that addresses the challenge of data obfuscation in the presentation of error bars or box plots. These visualizations combine various data elements to display raw data points, probability density through half-violin plots, and key summary statistics such as the median, first and third quartiles, outliers (black diamonds), and relevant confidence intervals via box plots. This combination produces a visually appealing and adaptable representation with minimal repetition. In addition, these plots have a red dot inside each box plot to illustrate the mean value of each distribution. The pair of red dots shown in every plot of Fig. 9 is also connected with a red line to visually indicate which distribution has the higher mean value.
Furthermore, in these charts, each dot located below the box plots represents a single value of the corresponding similarity metric, which refers to the comparison of the mean spectral signatures for a particular patient and tissue type. For example, the leftmost green dot in Fig. 9(a) represents the SAM value of 0.029 obtained when comparing the healthy tissue of both cameras for the patient with ID 193. For comparison purposes, each row in Fig. 9 shows a pair of raincloud plots for every metric described in Sec. 2.5. Plots located to the left of the figure are the similarity results computed when data were calibrated and denoised [Figs. 9(a), 9(c), 9(e), and 9(g)], whereas those to the right are obtained when data were additionally normalized [Figs. 9(b), 9(d), 9(f), and 9(h)]. The metrics obtained for the comparison of each patient, including both tissues under analysis, are presented in Table S2 in the Supplementary Material for further analysis. To evaluate the dispersion of the distributions, we look at the interquartile range (IQR), computed as IQR = Q3 − Q1, where Q1 and Q3 are the first and third quartiles of the distribution, respectively. The IQR is a measure of the spread of data that is robust against extreme values. It provides valuable information about the variability of the central portion of a distribution and is helpful for identifying potential outliers. Besides, a comprehensive examination of the outcomes comparing the reflectance measurements from both cameras is presented in Sec. S5 in the Supplementary Material.

3.4. Identification of Chromophores in Absorbance Measurements

After studying the similarity of the reflectance measurements between the HS cameras in Sec. 3.2, we now attempt to identify the presence of any of the chromophores mentioned in Sec. 2.6 within the measured absorbances. Table S3 in the Supplementary Material presents SAM values resulting from the comparison between the mean absorbance spectral signatures acquired using both HS cameras and the absorption coefficient spectra of the chromophores discussed in Sec. 2.6. The wavelengths of the peaks of interest, the respective analyzed spectral ranges, and the number of bands considered are detailed therein. Notably, the comparison is made with data obtained from the linescan camera in the VNIR region, using all 369 wavelengths measured by the camera, as well as in the NIR region for both the snapshot and linescan cameras. For this latter case, we use the 25 overlapping bands between the two cameras, indicated in Table S1 in the Supplementary Material. To obtain the SAM values, we extracted the peak wavelength of interest and the fifteen wavelengths on each side of it for the chromophore under analysis. This ensures that the extracted bands correspond to the wavelength of interest and its nearby range. We have chosen to search for a maximum of 15 wavelengths on each side of the peak, as this allows us to extract spectral signatures with sufficient information. We then find the closest corresponding wavelengths between the selected chromophore bands and those measurable by each camera. It is important to note that it will not always be possible to use all 31 bands from the chromophore to identify a peak. This is because the spectral resolution of the cameras is lower than that used to measure the chromophores; hence, there will generally be fewer camera bands than those measured for each chromophore. For example, if a chromophore peak has a very narrow bandwidth, such as that of Cyt. b in its reduced state, there will not be many camera bands capable of measuring it. This also applies to searching for the closest snapshot camera wavelengths in the chromophores.
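A plausible sketch of this band-matching procedure is shown below; the function name and the use of np.unique to drop duplicated nearest bands are our assumptions about the implementation.

```python
import numpy as np

def match_peak_bands(chromo_wl: np.ndarray, camera_wl: np.ndarray,
                     peak_wl: float, half_window: int = 15):
    """Select up to 31 chromophore samples centred on a peak and pair each
    retained camera band with its closest chromophore sample."""
    idx = int(np.argmin(np.abs(chromo_wl - peak_wl)))
    lo = max(0, idx - half_window)
    hi = min(chromo_wl.size, idx + half_window + 1)
    window = chromo_wl[lo:hi]
    # Nearest camera band for each chromophore sample; duplicates are
    # dropped, so fewer camera bands than chromophore samples may remain.
    cam_idx = np.unique([int(np.argmin(np.abs(camera_wl - w))) for w in window])
    # Closest chromophore sample for every retained camera band.
    chr_idx = np.array([lo + int(np.argmin(np.abs(window - camera_wl[i])))
                        for i in cam_idx])
    return chr_idx, cam_idx
```

The SAM metric from the sketch after Sec. 2.5 can then be evaluated on the resulting equal-length vectors, e.g., sam(chromo_mu_a[chr_idx], camera_absorbance[cam_idx]).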
In addition, we have included the mean absorbance spectral signatures measured by the cameras together with the spectral signatures of the analyzed chromophore peaks in Figs. S3–S18 in the Supplementary Material, allowing the reader to visually check the measurements.

4. Discussion

This work compares an HS snapshot camera with a linescan camera in the 659.95 to 951.42 nm range. Although similar studies have employed spectrometer measurements as a reference to address the comparison between HS cameras,23 using such an instrument was not feasible in this work because it would have required a sterilization process and placing the spectrometer close to in vivo tissue during multiple surgical procedures. However, the effectiveness of the cameras was evaluated by conducting a comparison with the data obtained from a reference spectrometer over a standardized polymer reflectance target. Pearson coefficients indicate a high correlation between the three systems and the spectral response of the polymer provided by the manufacturer, with a value of around 96% for the linescan camera. Therefore, the linescan camera was considered the reference because it captures roughly 15 times more spectral data and nearly four times more spatial data than the snapshot camera. Moreover, a similar linescan camera has already been used as an intraoperative tool for in vivo brain tumor classification with great results in the 400 to 1000 nm range.15 A visual analysis was made by comparing the mean reflectance spectral signatures of both cameras in two scenarios, with and without data normalization. Furthermore, objective comparisons were made for both cases by computing three similarity metrics: SAM, GFC, and RMSE. Results have been illustrated as distributions to verify how they spread across the ten patients used for this study. We computed the IQR for each distribution to study the similarity of both cameras in the two previously described scenarios. In addition, we attempted to identify several chromophores in the absorbance measured with both cameras. The absorbances were obtained, for each patient, from the gathered reflectance as specified in Sec. 2.6. We have observed that the snapshot camera measurements are noisier compared with those from the linescan camera. Setting aside the laser emission from the LiDAR of the acquisition system, which must be turned on to visualize a live video during surgery, the spectral signatures appear less smooth than those obtained with the linescan camera. This is evident from the mean reflectance spectral signature of the snapshot camera, which shows several peaks and valleys along the spectral range from 659.95 to 950.64 nm that could be due to the noise introduced by the sensor. In fact, the Fabry-Pérot sensor of the snapshot camera with 25 filters has been characterized in detail by Hahn et al.,22 concluding that the correction matrix provided by the manufacturer is insufficient to reconstruct the spectrum without introducing large measurement errors. Furthermore, the irregularities found are present across the whole sensor and, hence, in the entire spectral range of the camera.
Although Hahn et al. propose creating an individual correction matrix after characterizing the camera, a dedicated optical system is required, which was not available for this study. To mitigate this issue, previous work by Mühle et al. using the same snapshot camera model averaged 10 captures for organ transplantation purposes.23 However, we could not adopt the same method for imaging in vivo tissue because multiple captures would yield varying spatial measurements due to the brain movement caused by the heartbeat. Despite the noise introduced in the snapshot camera measurements, the trends of the spectral signatures are similar to those obtained with the linescan camera. In general, both cameras seem to agree that pathological tissue provides less reflectance than healthy tissue when data are unnormalized. Such behavior can be seen in the two first columns of Fig. 5. The errors in the snapshot camera measurements are more pronounced in the spectral signatures of Fig. 5 once the data are normalized. This was expected since a min-max normalization was applied to each spectral signature individually, causing errors in the measurements to increase as the data are scaled to the 0 to 1 range. This behavior is illustrated within the spectral signatures in the two right columns of Fig. 5, where the continuous lines have a noisier trend than the dashed spectral signatures coming from the linescan camera. Consequently, normalized data from both cameras appear less similar to each other than when the reflectance data are unnormalized. This fact is confirmed after analyzing the distributions with the SAM, GFC, and RMSE metrics, which are used to address the comparison of the cameras. Generally, distributions using normalized data have higher IQR values since they are more spread than those with unnormalized data. For example, IQR values from unnormalized data for the healthy tissue are 0.023, 0.001, and 0.061 for the SAM, GFC, and RMSE metrics, respectively, whereas the results from normalized data are 0.119, 0.006, and 0.083 for the same metrics. Expected differences in reflectance intensity, as seen in Fig. 5, could also arise due to variations in lighting angle and working distance. These variations are traditionally corrected using a white reference image [I_w from Eq. (1)] obtained from a flat calibration board.15,18 However, the 3D structure of the brain and the inherent organ texture variations can cause discrepancies in spectral characteristics due to deviations in illumination and working distance across the surface.36 Although unnormalized data make the spectral measurements very similar for both cameras, as presented in Figs. 7(e) and 7(f), the spectra are very flat throughout the spectral range under analysis. Hence, the presence of absorption peaks, such as the one that Hb has at approximately 760 nm, might be hardly noticeable with unnormalized data. Normalized data seem to illustrate better the presence of Hb or blood, with local minima around 760 nm, as presented in Figs. 8(e) and 8(f). This behavior is found when analyzing the SAM values in Table S3 in the Supplementary Material to identify such absorption peaks in the absorbance of the cameras, where normalized data yield a SAM around 0.03 lower than that obtained with unnormalized data, regardless of the tissue. However, the SAM values obtained with the snapshot camera are higher with normalized data than with unnormalized data. This might be due to the noise introduced by the sensor, as explained previously.
Moreover, observations show that the contribution of Hb is slightly higher in pathological tissue than in healthy tissue at approximately 760 nm, which may be related to increased perfusion of tumor tissue, especially in high-grade tumors, or may even be related to a lack of oxygen in brain tissue or tumor hypoxia due to abnormalities in tumor vessel structure.37 Such behavior was found in other studies using HSI for the in vivo human brain29 and might also indicate hypoxia from glioma cells.38 The analysis conducted in Sec. 3.4 aimed to identify chromophore absorption peaks in the absorbance measurements of the cameras. The results indicate overoptimistic SAM values for most peaks, which do not seem to correlate with the spectra in Figs. S3–S18 in the Supplementary Material. However, the absorbance measurements of the linescan camera might indicate the presence of four peaks, as shown by the SAM values in Table S3 in the Supplementary Material and their corresponding spectra in Figs. S4, S13, S14, and S18 in the Supplementary Material. These peaks correspond to an absorption peak of oxidized Cyt. b at approximately 420 nm, two peaks of HbO2 at approximately 542 and 576 nm, and the water absorption peak at approximately 970 nm. Regardless of the tissue under analysis and the data used, the SAM values we obtained are approximately 0.235 for the Cyt. b peak, 0.210 and 0.110 for the two HbO2 peaks, and 0.100 for the water peak. This makes sense because the first three peaks have the highest absorption coefficient values, which means they could potentially be measured. Although these peaks are the most absorbent and might help during tumor detection, further research is needed to evaluate whether the snapshot camera employed can detect pathological tissue in the 659.95 to 950.64 nm range.

5. Conclusions

In this study, we have compared and analyzed two different HS cameras because of their potential to be used for intraoperative brain tissue identification. Specifically, we have used images from ten in vivo human patients with different pathologies. Measurements show that the snapshot camera, despite its lower spectral and spatial resolution, can capture a spectral behavior similar to that obtained with the linescan camera. Although the linescan camera has almost three times more spectral resolution than the snapshot camera, it required 1 min and 40 s to scan a single in vivo human brain image. Hence, it is not suitable for real-time solutions compared with the snapshot camera, which requires 100 ms to acquire the data. Furthermore, the snapshot camera has already been used in real-time solutions for in vivo human brain tumor classification,19 acquiring and processing the HS data at 14 frames per second. Moreover, objective comparisons were made in the shared spectral range of both cameras between 659.95 and 951.42 nm using three similarity metrics: SAM, GFC, and RMSE. Results with unnormalized data show high similarity between the reflectances captured with the cameras in the aforementioned spectral range for either healthy or pathological tissues. However, due to the noise introduced by the snapshot mosaic sensor,22 the similarity between cameras is reduced once data are normalized. For example, for pathological samples, the SAM metric yields an IQR value of 2.38% with unnormalized data, indicating reduced dispersion and high similarity between cameras, compared with an IQR value of 9.68% once the data are normalized. This behavior is also consistent for both GFC and RMSE, irrespective of the type of tissue under inspection.
Differences in similarity between cameras may be attributed to errors that arise from the independent normalization applied to each spectral signature, which scales it to minimum and maximum values of 0 and 1. Even though less noisy measurements from the snapshot camera could be obtained by averaging multiple images,22,23 such a procedure is not feasible during in vivo brain surgeries due to the heartbeat, which pumps blood to the brain and causes it to move. Furthermore, we studied the ability of both cameras to identify several tissue chromophores in their measurements. In particular, we attempted to identify Hb, HbO2, fat, water, and several cytochromes by converting the measured reflectance of the cameras to absorbance, as specified in Sec. 2.6. This task was done by trying to identify relevant peaks of those chromophores. For that, we applied the SAM metric, as it is the only metric used that considers the shape of the spectra. Furthermore, the identification of chromophores was also conducted through a subjective inspection of the absorbance spectra measured with the cameras in comparison with the absorption coefficients of the chromophores. Out of the 21 peaks analyzed, only five could potentially be identified by the snapshot camera, as most of them are present in the visible spectrum, specifically from 400 to 625 nm. However, the snapshot camera encountered difficulties in identifying any of those five peaks, which are from oxidized Cyt. c at approximately 695 nm, Hb at approximately 760 nm, and three fat peaks, the most prominent of which lies near 930 nm. Such difficulties could be due to their low absorption coefficient values compared with those in the visible spectrum and the low spectral resolution of the camera. Nonetheless, we observed that the linescan camera detected four absorption peaks, corresponding to three different chromophores, in its absorbance measurements. These peaks correspond to the oxidized Cyt. b peak at approximately 420 nm, to two peaks of HbO2 at approximately 542 and 576 nm, and to a water peak at approximately 970 nm. Regardless of the tissue and data used, the obtained SAM values between the absorbance of the camera and the absorption coefficient of the chromophores were approximately 0.235, 0.210, 0.110, and 0.100, respectively. These values suggest high similarities between the spectra and the possible presence of the mentioned chromophores in the absorbance measurements. All things considered, the snapshot camera can provide reasonable measurements to describe brain tissue behavior when compared with the typical linescan cameras used for brain tumor detection.15,29,39,40 Likewise, the snapshot camera offers great opportunities to provide real-time solutions, as employed in other studies.19 However, combining multiple snapshot cameras to increase the spectral range could lead to a better reconstruction of the spectral behavior of biological tissues, as shown in other works.23 Therefore, this study shows the potential use of snapshot cameras for in vivo brain tissue identification. Moreover, similar spectral measurements from both cameras were obtained, suggesting that combining data from both cameras to train classification models could enhance in vivo brain tumor classification.

Disclosures

The authors have no conflicts of interest, and the funders played no part in the study design, data collection, analysis, manuscript writing, or the decision to publish the results.

Code and Data Availability

All the in vivo hyperspectral human brain data used in this study are from the Slim Brain database, which is available at https://slimbrain.citsem.upm.es/.
Note that access must be granted, under reasonable request, before downloading the data.

Acknowledgments

The authors would like to thank the neurosurgeons and staff of the Hospital Universitario 12 de Octubre. This work was supported by the TALENT-HIPSTER (High Performance Systems and Technologies for E-health and Fish Farming) research project (PID2020-116417RB-C41), funded by the Spanish Ministry of Science and Innovation, and by the European project STRATUM (3D Decision Support Tool for Brain Tumor Surgery) under Grant No. 101137416.

References

1. "Las cifras del cáncer en España 2022," https://seom.org/images/LAS_CIFRAS_DEL_CANCER_EN_ESPANA_2022.pdf (accessed 19 October 2023).
2. N. Sanai et al., "An extent of resection threshold for newly diagnosed glioblastomas," J. Neurosurg. 115(1), 3–8 (2011). https://doi.org/10.3171/2011.2.JNS10998
3. J. P. Thakkar et al., "Epidemiologic and molecular prognostic review of glioblastoma," Cancer Epidemiol. Biomarkers Prev. 23(10), 1985–1996 (2014). https://doi.org/10.1158/1055-9965.EPI-14-0275
4. M. I. Miga et al., "Clinical evaluation of a model-updated image-guidance approach to brain shift compensation: experience in 16 cases," Int. J. Comput. Assist. Radiol. Surg. 11(8), 1467–1474 (2016). https://doi.org/10.1007/s11548-015-1295-x
5. D. A. Orringer, A. Golby, and F. Jolesz, "Neuronavigation in the surgical management of brain tumors: current and future trends," Expert Rev. Med. Devices 9(5), 491–500 (2012). https://doi.org/10.1586/erd.12.42
6. B. Albayrak, A. Samdani, and P. Black, "Intra-operative magnetic resonance imaging in neurosurgery," Acta Neurochir. 146(6), 543–557 (2004). https://doi.org/10.1007/s00701-004-0229-0
7. R. U. Gandhe and C. P. Bhave, "Intraoperative magnetic resonance imaging for neurosurgery: an anaesthesiologist's challenge," Indian J. Anaesth. 62(6), 411 (2018). https://doi.org/10.4103/ija.IJA_29_18
8. R. Sastry et al., "Applications of ultrasound in the resection of brain tumors," J. Neuroimaging 27(1), 5–15 (2017). https://doi.org/10.1111/jon.12382
9. T. Selbekk et al., "Ultrasound imaging in neurosurgery: approaches to minimize surgically induced image artefacts for improved resection control," Acta Neurochir. 155(6), 973–980 (2013). https://doi.org/10.1007/s00701-013-1647-7
10. N. Ferraro et al., "The role of 5-aminolevulinic acid in brain tumor surgery: a systematic review," Neurosurg. Rev. 39(4), 545–555 (2016). https://doi.org/10.1007/s10143-015-0695-2
11. B. Kiesel et al., "5-ALA in suspected low-grade gliomas: current role, limitations, and new approaches," Front. Oncol. 11, 699301 (2021). https://doi.org/10.3389/fonc.2021.699301
12. G. Lu and B. Fei, "Medical hyperspectral imaging: a review," J. Biomed. Opt. 19(1), 010901 (2014). https://doi.org/10.1117/1.JBO.19.1.010901
13. Y. W. Wang et al., "Multiplexed optical imaging of tumor-directed nanoparticles: a review of imaging systems and approaches," Nanotheranostics 1, 369–388 (2017). https://doi.org/10.7150/ntno.21136
14. J. Pichette, W. Charle, and A. Lambrechts, "Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan," Proc. SPIE 10110, 1011014 (2017). https://doi.org/10.1117/12.2253614
15. H. Fabelo et al., "An intraoperative visualization system using hyperspectral imaging to aid in brain tumor delineation," Sensors 18(2), 430 (2018). https://doi.org/10.3390/s18020430
16. R. Vandebriel et al., "Integrating hyperspectral imaging in an existing intra-operative environment for detection of intrinsic brain tumors," Proc. SPIE 12368, 123680D (2023). https://doi.org/10.1117/12.2647690
17. S. Puustinen et al., "Towards clinical hyperspectral imaging (HSI) standards: initial design for a microneurosurgical HSI database," in IEEE 35th Int. Symp. Comput.-Based Med. Syst. (CBMS), 394–399 (2022). https://doi.org/10.1109/CBMS55023.2022.00077
18. G. Urbanos et al., "Supervised machine learning methods and hyperspectral imaging techniques jointly applied for brain cancer classification," Sensors 21(11), 3827 (2021). https://doi.org/10.3390/s21113827
19. J. Sancho et al., "SLIMBRAIN: augmented reality real-time acquisition and processing system for hyperspectral classification mapping with depth information for in-vivo surgical procedures," J. Syst. Archit. 140, 102893 (2023). https://doi.org/10.1016/j.sysarc.2023.102893
20. A. Martín-Pérez et al., "SLIM brain database: a multimodal image database of in-vivo human brains for tumour detection," (2023). https://www.researchsquare.com/article/rs-3629358/v1
21. J. M. Bioucas-Dias and J. M. P. Nascimento, "Hyperspectral subspace identification," IEEE Trans. Geosci. Remote Sens. 46(8), 2435–2445 (2008). https://doi.org/10.1109/TGRS.2008.918089
22. R. Hahn et al., "Detailed characterization of a mosaic based hyperspectral snapshot imager," Opt. Eng. 59(12), 125102 (2020). https://doi.org/10.1117/1.OE.59.12.125102
23. R. Mühle et al., "Comparison of different spectral cameras for image-guided organ transplantation," J. Biomed. Opt. 26(7), 076007 (2021). https://doi.org/10.1117/1.JBO.26.7.076007
24. M. Agarla et al., "An analysis of spectral similarity measures," in Color and Imaging Conf., 300–305 (2021).
25. J. Romero, A. García-Beltrán, and J. Hernández-Andrés, "Linear bases for representation of natural and artificial illuminants," J. Opt. Soc. Am. A 14, 1007–1014 (1997). https://doi.org/10.1364/JOSAA.14.001007
26. C.-I. Chang, "Spectral information divergence for hyperspectral image analysis," in IEEE 1999 Int. Geosci. and Remote Sens. Symp., IGARSS'99, 509–511 (1999). https://doi.org/10.1109/IGARSS.1999.773549
27. S. Ortega et al., "Detecting brain tumor in pathological slides using hyperspectral imaging," Biomed. Opt. Express 9, 818–831 (2018). https://doi.org/10.1364/BOE.9.000818
28. R. Leon et al., "Hyperspectral imaging for in-vivo/ex-vivo tissue analysis of human brain cancer," Proc. SPIE 12034, 1203429 (2022). https://doi.org/10.1117/12.2611420
29. R. Leon et al., "Hyperspectral imaging benchmark based on machine learning for intraoperative brain tumour detection," NPJ Precis. Oncol. 7, 119 (2023). https://doi.org/10.1038/s41698-023-00475-9
30. D. F. Swinehart, "The Beer-Lambert law," J. Chem. Educ. 39(7), 333 (1962). https://doi.org/10.1021/ed039p333
31. T. Correia, A. Gibson, and J. Hebden, "Identification of the optimal wavelengths in optical topography using photon density measurement functions," Proc. SPIE 7187, 718718 (2009). https://doi.org/10.1117/12.809295
32. I. Tachtsidis and P. Pinti, "UCL NIR spectra," https://github.com/multimodalspectroscopy/UCL-NIR-Spectra/
33. S. Prahl, "Optical absorption of hemoglobin," http://omlc.ogi.edu/spectra/hemoglobin/ (1999).
34. R. van Veen et al., "Determination of VIS-NIR absorption coefficients of mammalian fat, with time- and spatially resolved diffuse reflectance and transmission spectroscopy," in Biomed. Top. Meeting, SF4 (2004).
35. M. Allen et al., "Raincloud plots: a multi-platform tool for robust data visualization [version 2; peer review: 2 approved]," Wellcome Open Res. 4, 63 (2021). https://doi.org/10.12688/wellcomeopenres.15191.2
36. A. M. de Ternero et al., "Real-time hyperspectral and depth fusion calibration method for improved reflectance measures on arbitrary complex surfaces," in 26th Euromicro Conf. Digital System Design (DSD), 507–514 (2023).
37. J. Pacheco-Torres et al., "Imaging tumor hypoxia by magnetic resonance methods," NMR Biomed. 24(1), 1–16 (2011). https://doi.org/10.1002/nbm.1558
38. E. Johansson et al., "CD44 interacts with HIF-2α to modulate the hypoxic phenotype of perinecrotic and perivascular glioma cells," Cell Rep. 20(7), 1641–1653 (2017). https://doi.org/10.1016/j.celrep.2017.07.049
39. H. Fabelo et al., "Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations," PLoS One 13(3), e0193721 (2018). https://doi.org/10.1371/journal.pone.0193721
40. H. Fabelo et al., "Deep learning-based framework for in vivo identification of glioblastoma tumor using hyperspectral images of human brain," Sensors 19(4), 920 (2019). https://doi.org/10.3390/s19040920
Biography

Alberto Martín-Pérez is a teaching assistant at the Department of Audiovisual Engineering and Communications in the School of Telecommunications Systems and Engineering of the Universidad Politécnica de Madrid (UPM). Presently, he is pursuing a PhD at UPM in the CITSEM research center on the utilization of machine learning algorithms for the classification of in-vivo human brain tumors through hyperspectral imaging. Furthermore, he aims to enhance classification methodologies through the application of spatial frequency domain imaging.

Alejandro Martínez de Ternero received his master's degree in Internet of Things (IoT) and his bachelor's degree in telematics from the Universidad Politécnica de Madrid (UPM) in 2022 and 2021, respectively. He is currently a PhD student at the Research Center on Software Technologies and Multimedia Systems for Sustainability (CITSEM), UPM. His research is focused on modeling light-tissue interactions through Monte Carlo and radiative transfer modeling for optical property estimation using the information captured by hyperspectral cameras, and on spectral unmixing techniques to expedite the classification of tissue types.

Alfonso Lagares is a professor of neurosurgery at Universidad Complutense de Madrid and the head of neurosurgery at Hospital 12 de Octubre. He earned his PhD in neuroscience in 2004 from Universidad Autónoma de Madrid, receiving the Doctorate Extraordinary Prize. He coordinates a research group at Institute imas12. His research focuses on prognostic models in brain injury, new neurosurgical tools, radiological techniques for white matter assessment, biomarkers for head injury diagnosis and prognosis, and new tools for tumor resection.

Eduardo Juárez received his PhD from the École Polytechnique Fédérale de Lausanne (EPFL) in 2003. From 1994 to 1997, he was a researcher at UPM and a visiting researcher at ENST, Brest, and the University of Pennsylvania. He then worked at EPFL's Integrated Systems Laboratory and as a senior systems engineer at Transwitch Corp. In 2004, he joined the Universidad Politécnica de Madrid (UPM) as a postdoc, becoming an associate professor in 2007. His research focuses on hyperspectral imaging for health, real-time depth estimation, and high-performance computing.

César Sanz received his PhD in telecommunication engineering from the Universidad Politécnica de Madrid (UPM) in 1998. Since 1985, he has been a faculty member at UPM, where he is currently a full professor. He directed ETSIS de Telecomunicación (2008 to 2017) and has led the Electronic and Microelectronic Design Group (GDEM) since 1996, involved in R&D projects with private companies and public institutions. Since 2021, he has been the director of the CITSEM research center at UPM. His research interests include electronic design for video coding and hyperspectral imaging.