Instrumental aberrations strongly limit high-contrast imaging of exoplanets, especially when they produce quasi-static speckles in the science images. Building on recent advances in deep learning, we developed in previous work an approach that applies convolutional neural networks (CNNs) to estimate pupil-plane phase aberrations from point spread functions (PSFs). In this work we take a step further by incorporating the physical simulation of the optical propagation occurring inside the instrument into the deep learning architecture. This is achieved with an autoencoder architecture that uses a differentiable optical simulator as the decoder. Because this unsupervised learning approach reconstructs the PSFs, knowing the true phase is not needed to train the models, which makes it particularly promising for on-sky applications. We show that the performance of our method is almost identical to that of a standard CNN approach, and that the models are sufficiently stable in terms of training and robustness. In particular, we illustrate how the simulator-based autoencoder architecture allows the models to be quickly fine-tuned on a single test image, achieving much better performance when the PSFs contain more noise and aberrations. These early results are very promising, and we have identified future steps to apply the method to real data.
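The simulator-based autoencoder described above can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: a small CNN encoder maps a PSF to modal phase coefficients, and a fixed, differentiable Fraunhofer propagation (pupil phase, complex exponential, FFT, intensity) serves as the decoder. The network sizes, the number of modes, and the random placeholder modal basis are all assumptions; a real model would use precomputed Zernike polynomials and a realistic instrument model.

```python
import torch
import torch.nn as nn


class PSFAutoencoder(nn.Module):
    """Sketch of an unsupervised PSF autoencoder whose decoder is a
    differentiable optical simulator (all sizes are illustrative)."""

    def __init__(self, n_modes=20, n_pix=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * (n_pix // 4) ** 2, n_modes),
        )
        # Circular pupil mask and a random placeholder modal basis;
        # a real model would precompute actual Zernike modes here.
        y, x = torch.meshgrid(torch.linspace(-1, 1, n_pix),
                              torch.linspace(-1, 1, n_pix), indexing="ij")
        self.register_buffer("pupil", ((x**2 + y**2) <= 1.0).float())
        self.register_buffer("modes", torch.randn(n_modes, n_pix, n_pix))

    def decode(self, coeffs):
        # Differentiable simulator: coefficients -> phase -> complex
        # pupil field -> focal-plane intensity (Fraunhofer propagation).
        phase = torch.einsum("bk,kij->bij", coeffs, self.modes)
        field = self.pupil * torch.exp(1j * phase)
        psf = torch.abs(torch.fft.fftshift(torch.fft.fft2(field))) ** 2
        return psf / psf.sum(dim=(-2, -1), keepdim=True)

    def forward(self, psf):
        coeffs = self.encoder(psf)        # (batch, n_modes)
        return self.decode(coeffs), coeffs
```

Training would minimize a reconstruction loss between the simulated and observed PSFs, e.g. `((recon - psf) ** 2).mean()`, so no ground-truth phase is ever required; fine-tuning on a single test image amounts to a few gradient steps on that same loss.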
High-contrast imaging instruments are today primarily limited by non-common-path aberrations arising between the wavefront sensor of the adaptive optics system and the science camera. Early attempts at using artificial neural networks for focal-plane wavefront sensing showed some success, but today's greater computational power and deeper architectures promise increased performance, flexibility, and robustness that have yet to be exploited. We implement two convolutional neural networks to estimate wavefront errors from simulated point spread functions. Notably, we train mixture density models and show that they can capture the sign ambiguity of the phase by predicting each Zernike coefficient as a probability distribution. We also apply our method with the vector vortex coronagraph (VVC), comparing its phase retrieval performance with that of classical imaging. Finally, preliminary results indicate that the VVC combined with polarized light can lift the sign ambiguity.
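A mixture density output of the kind mentioned above can be sketched as follows. This is a hedged, hypothetical PyTorch example, not the authors' network: a linear head predicts, for each Zernike coefficient, the weights, means, and standard deviations of a two-component Gaussian mixture, so a sign-ambiguous coefficient can be represented by two modes near +a and -a. The feature size, number of Zernike terms, and two-component choice are assumptions for illustration.

```python
import torch
import torch.nn as nn


class MixtureDensityHead(nn.Module):
    """Hypothetical mixture density head: per Zernike coefficient, predict
    (logit_pi, mu, log_sigma) for each of n_comp Gaussian components."""

    def __init__(self, n_features, n_zernike, n_comp=2):
        super().__init__()
        self.n_zernike, self.n_comp = n_zernike, n_comp
        self.fc = nn.Linear(n_features, n_zernike * n_comp * 3)

    def forward(self, features):
        p = self.fc(features).view(-1, self.n_zernike, self.n_comp, 3)
        logit_pi, mu, log_sigma = p.unbind(-1)
        # Log-softmax over components gives valid mixture log-weights;
        # exponentiating log_sigma keeps the scales strictly positive.
        return logit_pi.log_softmax(-1), mu, log_sigma.exp()


def mdn_nll(log_pi, mu, sigma, target):
    # Negative log-likelihood of the target coefficients under the mixture,
    # averaged over the batch and the Zernike terms.
    dist = torch.distributions.Normal(mu, sigma)
    log_prob = dist.log_prob(target.unsqueeze(-1))  # (batch, K, n_comp)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()
```

Trained with this negative log-likelihood, a bimodal predictive distribution over a coefficient directly exposes the phase-sign ambiguity, whereas a unimodal regression would average the two solutions.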