Algorithms that interpret images to locate surface defects, such as cracks, play a key role in automated inspection systems. Accordingly, the success of convolutional neural networks (CNNs) in image object detection has persuaded researchers to apply deep CNNs to visual surface crack detection. Among deep learning architectures, encoder-decoder fully convolutional networks (FCNs) are powerful tools for automatically segmenting inspection images and detecting crack maps. In this study, the U-Net architecture, a particular FCN, is trained on available concrete crack datasets. The trained network is then employed to detect crack maps in a sequence of images taken from a concrete beam-column specimen under a cyclic load test. To enhance the performance of crack segmentation, instead of treating each image in the sequence independently, the detection results from later stages of the experiment are used to determine the crack map at the current stage. Leveraging the fact that cracks propagate sequentially, a data fusion technique is proposed that updates crack maps by considering the outcomes of subsequent steps. To realize this method, reference points in the images are used to estimate the deformation of the structural members. The deformation information is then used to project previously detected crack maps onto the current image, making it possible to aggregate current and future detections and achieve higher accuracy. The framework laid out in this study provides tools to filter out false positives and recover missed detections.
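The projection-and-fusion idea in this abstract could be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the inter-stage deformation can be approximated by an affine transform estimated from matched reference points, and the function names and the simple voting rule for fusing binary crack maps are illustrative choices.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src reference points onto dst.

    src_pts, dst_pts: (N, 2) arrays of matched reference-point coordinates,
    N >= 3. Returns a 2x3 matrix [[a, b, tx], [c, d, ty]].
    """
    n = len(src_pts)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, (x, y) in enumerate(src_pts):
        A[2 * i] = [x, y, 1, 0, 0, 0]
        A[2 * i + 1] = [0, 0, 0, x, y, 1]
        b[2 * i], b[2 * i + 1] = dst_pts[i]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p.reshape(2, 3)

def warp_mask(mask, T):
    """Project a binary crack map through affine T (future frame -> current
    frame) by inverse mapping with nearest-neighbour sampling."""
    h, w = mask.shape
    Minv = np.linalg.inv(np.vstack([T, [0.0, 0.0, 1.0]]))
    ys, xs = np.mgrid[0:h, 0:w]
    src = Minv @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    sx, sy = np.round(src[0]).astype(int), np.round(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w, dtype=bool)
    out[ok] = mask[sy[ok], sx[ok]]
    return out.reshape(h, w)

def fuse_crack_maps(current, projected_future, min_votes=1):
    """Fuse the current crack map with future maps projected onto it.

    Since cracks only propagate, a current pixel is kept if at least
    min_votes projected future maps confirm it (filtering false positives),
    and a pixel detected in a majority of future maps is added even if it
    was missed now (recovering missed detections).
    """
    votes = np.sum(projected_future, axis=0)
    confirmed = current & (votes >= min_votes)
    recovered = votes > len(projected_future) / 2
    return confirmed | recovered
```

With an identity deformation, a spurious pixel present only in the current detection is removed because no projected future map confirms it, while consistently detected crack pixels survive.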
Time-of-flight cameras are used for diverse applications ranging from human-machine interfaces and gaming to robotics and earth topography. This paper aims at evaluating the capability of the Mesa Imaging SR4000 and the Microsoft Kinect 2.0 time-of-flight cameras for accurately imaging the top surface of a concrete beam subjected to fatigue loading in laboratory conditions. Whereas previous work has demonstrated the success of such sensors for measuring the response at point locations, the aim here is to measure the entire beam surface in support of the overall objective of evaluating the effectiveness of concrete beam reinforcement with steel fibre reinforced polymer sheets. After applying corrections for lens distortions to the data and differencing images over time to remove systematic errors due to internal scattering, the periodic deflections experienced by the beam have been estimated for its entire top surface and at attached witness plates. The results have been assessed by comparison with measurements from highly accurate laser displacement transducers. This study concludes that both the Microsoft Kinect 2.0 and the Mesa Imaging SR4000 are capable of sensing a moving surface with sub-millimeter accuracy once the image distortions have been modeled and removed.
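The differencing step described here can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's processing chain: lens-distortion correction is assumed already applied, the static systematic error (e.g. internal scattering) is removed by subtracting the temporal mean of the depth stack, and the per-pixel deflection amplitude at the known loading frequency is recovered with a single-bin DFT; the function name and interface are hypothetical.

```python
import numpy as np

def deflection_from_depth_stack(frames, load_freq, fps):
    """Estimate per-pixel periodic deflection amplitude from time-of-flight
    depth frames.

    frames:    (T, H, W) depth images (e.g. in mm), distortion-corrected.
    load_freq: loading frequency in Hz.
    fps:       frame rate in Hz.
    Returns an (H, W) amplitude map in the same units as `frames`.
    """
    t = np.arange(frames.shape[0]) / fps
    # Differencing against the temporal mean cancels time-invariant
    # systematic errors such as internal-scattering bias.
    residual = frames - frames.mean(axis=0)
    # Correlate each pixel's time series with a complex exponential at the
    # loading frequency (a one-bin DFT); scale to a sinusoid amplitude.
    phasor = np.exp(-2j * np.pi * load_freq * t)
    amp = 2.0 * np.abs(np.tensordot(phasor, residual, axes=(0, 0))) / len(t)
    return amp
```

For a recording spanning an integer number of loading cycles, the one-bin DFT returns the sinusoid's amplitude exactly, independent of the per-pixel static bias.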