The current method for rework inspection of previously defective surface areas on aircraft landing gear components involves manual inspection. Measuring tools such as micrometers and gauges are used to obtain the positions and dimensions of reworks. This information is required to determine whether the component can re-enter service. Overall, the manual process is time-consuming, highly dependent on the skill and experience of the inspector, and prone to errors. This paper presents a novel approach to inspecting reworks on aircraft landing gear components using a robotic inspection system based on white light interferometry (WLI). The proposed method aims to improve the accuracy, repeatability, and efficiency of rework inspection. The robotic system handles the WLI sensor and positions it over the component, allowing detailed 3D measurements of the surface and the reworked area. Given the typical measuring spot size of a WLI sensor, the overall positioning accuracy of industrial robots is crucial. Measures to address this limitation, as well as the general constraints and the potential of the system for this use case, are discussed. An exemplary rework inspection validates the applicability and demonstrates the potential of this approach. Future research and optimizations are discussed that could lead to wider adoption of this technology and further advancements in the maintenance of aircraft landing gears.
Training models on synthetic image data rendered from 3D models and applying them to real-world applications can reduce costs and improve performance when deep learning is used for image processing in automated visual inspection tasks. However, sufficient generalization from synthetic to real-world data is challenging, because synthetic samples only approximate the inherent structure of real-world images and lack image properties present in real-world data, a phenomenon known as the domain gap. In this work, we propose to combine synthetic generation approaches with CycleGAN, a style transfer method based on Generative Adversarial Networks (GANs). CycleGAN learns the inherent structure of real-world samples and adapts the synthetic data accordingly. We investigate how synthetic data can be adapted for a use case of visual inspection of automotive cast iron parts and show that supervised deep object detectors trained on the adapted data can successfully generalize to real-world data and outperform object detectors trained on synthetic data alone. This demonstrates that generative domain adaptation helps to leverage synthetic data in deep learning-assisted inspection systems for automated visual inspection tasks.
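The adaptation described above rests on CycleGAN's cycle-consistency objective: a generator G maps synthetic samples toward the real-image domain, a second generator F maps back, and the round trip F(G(x)) is penalized for deviating from the original x. The following is a minimal conceptual sketch of that loss, not the paper's implementation: the "generators" here are toy invertible linear maps on small feature vectors (all names and shapes are illustrative assumptions), chosen only so the cycle term can be computed and inspected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two CycleGAN generators:
# G: synthetic domain -> real-image domain
# F: real-image domain -> synthetic domain
# A real CycleGAN would use convolutional networks; a linear map and its
# inverse suffice to illustrate the cycle-consistency idea.
A = rng.normal(size=(4, 4))
A_inv = np.linalg.inv(A)

def G(x):
    """Map a synthetic feature vector toward the real-image domain."""
    return A @ x

def F(y):
    """Map a real-domain feature vector back to the synthetic domain."""
    return A_inv @ y

def cycle_consistency_loss(x):
    """L1 reconstruction error after a full synthetic -> real -> synthetic cycle,
    i.e. ||F(G(x)) - x||_1 averaged over components."""
    return np.abs(F(G(x)) - x).mean()

x = rng.normal(size=4)
loss = cycle_consistency_loss(x)
print(loss)  # near zero here, since F exactly inverts G in this toy setup
```

In training, this cycle term is added (in both directions) to the adversarial losses of the two discriminators, which is what constrains the style transfer to preserve the content of the synthetic images, so the original object-detection labels remain valid for the adapted samples.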