An electroencephalogram (EEG) is a set of time series, each of which can be represented as a 2D image (spectrogram), so that an EEG recording can be mapped to a C-channel image, where C equals the number of electrodes in the EEG montage. In this paper, a novel approach for automated feature extraction from the spectrogram representation is proposed. The method uses autoencoder models built from 3D convolution layers and 2D deformable convolution layers. The features extracted by the autoencoders can be used to distinguish patients with Major Depressive Disorder (MDD) from healthy controls based on resting-state EEG. The proposed approach outperforms baseline ML models trained on manually extracted spectral features.
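The EEG-to-image mapping described above can be sketched as follows: each electrode's signal is converted to a spectrogram, and stacking the per-electrode spectrograms yields a C-channel image. This is a minimal illustration with synthetic data; the sampling rate, montage size, and window parameters are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 250                              # sampling rate in Hz (assumed)
n_channels = 19                       # e.g. a 19-electrode montage (assumed)
n_samples = fs * 10                   # 10 seconds of synthetic "EEG"
eeg = rng.standard_normal((n_channels, n_samples))

# scipy computes the spectrogram along the last axis, so a (C, T) array
# yields one spectrogram per electrode in a single call.
f, t, sxx = spectrogram(eeg, fs=fs, nperseg=128, noverlap=64)

# sxx is a C-channel "image": (electrodes, frequency bins, time bins).
print(sxx.shape)
```

A 3D convolutional autoencoder can then treat this stack as a single volume, while a 2D deformable-convolution variant would operate on the individual spectrogram planes.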
In this paper we discuss the concept of the Cross-Barcode(P,Q) introduced and studied in the recent work [1]. In particular, we describe how this concept emerges from the combinatorics of matrices of pairwise distances between two data representations. We also illustrate the application of the Cross-Barcode(P,Q) to the evaluation of disentanglement in data representations. Experiments are carried out on the dSprites dataset from computer vision.
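The starting ingredient for the Cross-Barcode(P,Q) is the pair of matrices of pairwise distances computed within each of the two representations of the same data points. The sketch below shows only this distance-matrix step on synthetic data (the point count and embedding dimensions are hypothetical); the subsequent persistent-homology filtration that produces the barcode itself is not reproduced here and is described in [1].

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 100
# Two representations P and Q of the same n data points
# (embedding dimensions are arbitrary for this illustration).
P = rng.standard_normal((n_points, 5))
Q = rng.standard_normal((n_points, 8))

def pairwise_distances(X):
    """Euclidean distance matrix between all rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))   # clip tiny negatives from round-off

D_P = pairwise_distances(P)
D_Q = pairwise_distances(Q)

# Both matrices index the same points, so they can be compared entry-wise;
# the Cross-Barcode is built from a filtration that combines them (see [1]).
```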
Machine learning and computer vision methods show strong performance in medical image analysis. Yet only a few applications are currently in clinical use, and one reason for this is the poor transferability of models across data from different sources or acquisition domains. Developing new methods and algorithms for transfer learning and domain adaptation in multi-modal medical imaging data is crucial for building accurate models and deploying them in clinics. In the present work, we survey methods used to tackle the domain-shift problem in machine learning and computer vision. The algorithms discussed include advanced data processing, model architecture enhancements and specialized training, as well as prediction in a domain-invariant latent space. Particular attention is given to autoencoding neural networks and their domain-invariant variants. We review the latest methods applied to magnetic resonance imaging (MRI) data analysis, draw conclusions about their performance, and propose directions for further research.
We study the effect of providing low-quality sensor depth as an additional input to deep multi-view stereo methods. We modify two state-of-the-art deep multi-view stereo methods to accept the input depth and show that this additional input can improve the quality of deep multi-view stereo.
ABIDE is the largest open-source autism spectrum disorder database containing both fMRI data and full phenotype descriptions. These data have been studied extensively via functional-connectivity analysis as well as with deep learning on raw data, with top model accuracies close to 75% for individual scanning sites. Yet the transferability of models between different scanning sites within ABIDE remains a problem. In the current paper, we perform, for the first time, domain adaptation for the brain pathology classification problem on raw neuroimaging data. We use 3D convolutional autoencoders to build a domain-invariant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.