As a functional imaging technique, photoacoustic imaging can produce high-contrast, high-resolution images. However, photoacoustic microscopy suffers from a limited depth of field due to its strong focusing, so only part of the image is in focus while other information may be lost or misleading. To achieve large-volume, high-resolution photoacoustic imaging, we propose a depth-of-field-expansion-motivated multi-focus photoacoustic microscopy image fusion algorithm that fuses images acquired at different focal positions into a single all-in-focus image. To this end, the volumes acquired at different focal positions are sliced into 2D images, 2D image fusion based on the cross bilateral filter is performed on corresponding slices, and the fused slices are then reconstructed into a 3D volume. Experimental results show that our method produces a 3D fusion result that preserves the useful photoacoustic signal information needed for further analysis and visualization.
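The record does not include code, so the following is only a minimal sketch of the slice-wise fusion step described above, assuming grayscale floating-point slices, a naive cross (joint) bilateral filter, and a detail-layer focus measure; the function names and parameters (radius, sigma_s, sigma_r) are illustrative assumptions, not the authors' implementation.

import numpy as np

def cross_bilateral_filter(src, guide, radius=5, sigma_s=1.8, sigma_r=25.0):
    # Naive cross (joint) bilateral filter: spatial weights come from pixel
    # distance, range weights from intensity differences in the *guide* image,
    # and the weighted average is taken over the *src* image.
    # Loop-based for clarity, not speed.
    src = src.astype(np.float64)
    guide = guide.astype(np.float64)
    h, w = src.shape
    src_p = np.pad(src, radius, mode='reflect')
    gd_p = np.pad(guide, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.zeros_like(src)
    for i in range(h):
        for j in range(w):
            s_win = src_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            g_win = gd_p[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-((g_win - guide[i, j])**2) / (2.0 * sigma_r**2))
            wgt = spatial * rng
            out[i, j] = np.sum(wgt * s_win) / np.sum(wgt)
    return out

def fuse_slices_cbf(a, b, radius=5, sigma_s=1.8, sigma_r=25.0):
    # Fuse two co-registered 2D slices: the detail layer (input minus its
    # cross-bilateral-filtered version, each guided by the other input) is
    # used as a per-pixel focus measure, and the sharper pixel dominates.
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    detail_a = np.abs(a - cross_bilateral_filter(a, b, radius, sigma_s, sigma_r))
    detail_b = np.abs(b - cross_bilateral_filter(b, a, radius, sigma_s, sigma_r))
    w_a = detail_a / (detail_a + detail_b + 1e-12)
    return w_a * a + (1.0 - w_a) * b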
Photoacoustic microscopy suffers from a limited depth of field due to the strong focusing of the laser beam, which means that only part of the imaging result is in focus. This shortcoming limits further applications of this powerful imaging technique and is the problem we aim to address. In this work, we solve it through information fusion: by fusing multi-focus 3D photoacoustic images, large-volume, high-resolution photoacoustic microscopy can be obtained. However, fusing photoacoustic signals differs from the general 2D multi-focus image fusion problem; the core challenge is the fusion of different 3D photoacoustic volumes. We simplify the task to a 2D problem by slicing the 3D data into 2D images and reconstructing a volume from the fused 2D slices. The proposed 3D fusion method comprises data preprocessing, slicing of the 3D data, fusion of corresponding slices, and 3D reconstruction (a rough sketch of this pipeline is given below). Experimental results verify that the fused data exhibits a larger depth of field and contains more useful information, supporting our idea of extending the depth of field through information fusion. To further demonstrate the superiority of the deep-learning (DL) approach, several non-DL 2D algorithms are selected for a comparison study based on objective assessments, illustrating the generalization ability of the convolutional neural network based image fusion algorithm.
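As a rough, non-authoritative illustration of the end-to-end pipeline (preprocessing, slicing, slice fusion, restacking), the sketch below reuses the hypothetical fuse_slices_cbf helper from the previous block; the min-max normalization step, file names, and slicing axis are assumptions for illustration only.

import numpy as np
# fuse_slices_cbf is the 2D cross-bilateral-filter fusion sketch shown above.

def fuse_volumes(vol_a, vol_b, axis=0):
    # Hypothetical end-to-end sketch: normalize each multi-focus PAM volume,
    # slice along `axis`, fuse corresponding 2D slices, then stack the fused
    # slices back into an extended depth-of-field volume.
    def normalize(v):
        v = v.astype(np.float64)
        return (v - v.min()) / (v.max() - v.min() + 1e-12)

    vol_a, vol_b = normalize(vol_a), normalize(vol_b)
    fused = [
        fuse_slices_cbf(np.take(vol_a, k, axis=axis),
                        np.take(vol_b, k, axis=axis))
        for k in range(vol_a.shape[axis])
    ]
    return np.stack(fused, axis=axis)

# Example usage with two volumes focused at different depths
# (file names are placeholders, not from the paper):
# vol_near = np.load("pam_focus_near.npy")
# vol_far  = np.load("pam_focus_far.npy")
# vol_aif  = fuse_volumes(vol_near, vol_far, axis=0)  # all-in-focus volume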