We present an extension of our automatic anomaly detection approach for the quality inspection of industrially manufactured parts. The sample under test is imaged from different perspectives simultaneously while in free fall, which reduces inspection time and minimizes part handling. Despite using a diffusely reflecting hollow sphere to achieve the best possible illumination conditions for all camera perspectives, small artifacts from reflections on highly reflective test specimens and from drop shadows appear in the images, leading to type I errors. To address this issue, the state-of-the-art anomaly detection method PatchCore is first modified to handle multiple perspectives. Second, a weighting step is added to the image evaluation pipeline: the pose of the test sample is estimated and subsequently used to calculate a weight matrix per image. The weights correspond to the local viewing angle of the camera on the sample’s surface, because the artifacts occur mainly at steep viewing angles. In addition, two datasets containing single- and multi-perspective sample data are created to evaluate the proposed approach. The results show that the developed pipeline outperforms PatchCore and the original free-fall inspection setup algorithm, reaching 95.9% AUROC for Object one and 85.7% AUROC for Object two on the validation split of the multi-perspective datasets. Moreover, combining the proposed approach with the free-fall inspection algorithm improves the results for Object two, achieving 98% AUROC. The conducted experiments allow us to conclude that this approach has the potential to further increase robustness toward various anomalies and artifacts.
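The viewing-angle weighting described in the abstract could be sketched as follows. This is a minimal illustration, not the paper's published formulation: the cosine fall-off, the cutoff angle, and the function name are assumptions; the paper only states that weights correspond to the local viewing angle and that steep angles should be suppressed.

```python
import numpy as np

def viewing_angle_weights(normals, view_dirs, cutoff_deg=75.0):
    """Per-pixel weights from the local viewing angle (illustrative sketch).

    normals:   (H, W, 3) estimated surface normals, unit length
    view_dirs: (H, W, 3) unit vectors from the surface point toward the camera

    Weights fall off with cos(theta), the cosine of the angle between the
    normal and the view direction; pixels seen at angles steeper than
    `cutoff_deg` (an assumed threshold) are weighted zero, suppressing
    reflection and drop-shadow artifacts at grazing angles.
    """
    cos_theta = np.clip(np.sum(normals * view_dirs, axis=-1), 0.0, 1.0)
    cutoff = np.cos(np.deg2rad(cutoff_deg))
    return np.where(cos_theta >= cutoff, cos_theta, 0.0)

# Example: a frontal view gets full weight, a grazing view gets none.
n = np.zeros((1, 2, 3)); n[..., 2] = 1.0   # normals along +z
v = np.zeros((1, 2, 3))
v[0, 0, 2] = 1.0                            # camera straight above pixel 0
v[0, 1, 0] = 1.0                            # 90-degree grazing view at pixel 1
w = viewing_angle_weights(n, v)
```

Such a weight matrix would then multiply the per-pixel anomaly scores before thresholding, so that artifact-prone regions contribute less to the decision.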
Plenoptic cameras capture both the spatial and the angular information of a scene. Their use in areas such as research, microscopy, industry, and consumer markets has steadily increased over the past two decades. When designing a plenoptic camera, a trade-off must always be made between spatial and angular resolution. Many factors play a role, such as the size and number of microlenses in the microlens array or the position of the microlens array and sensor relative to the main lens. Here we examine the two most common designs: the plenoptic camera 1.0, also called the unfocused plenoptic camera, and the plenoptic camera 2.0, the focused plenoptic camera. We derive the mathematical equations that describe the relationship between spatial and angular resolution. Supported by experimental results, we show how the equations relate to images of a real object at different object distances taken with the plenoptic camera 1.0 and 2.0. These analyses make it easier for researchers and engineers to choose the right camera design for a particular application; the user only has to determine beforehand which depth of field or which spatial resolution is needed.
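For the unfocused (1.0) design, the spatial/angular trade-off mentioned above has a well-known simple form: each microlens contributes one spatial sample, and the pixels behind it become angular samples, so the sensor's pixel budget is split between the two. The sketch below illustrates only this simplified accounting; the function name and the integer-division model are assumptions, and the focused (2.0) design, whose split additionally depends on the microlens imaging magnification, is deliberately left out.

```python
def resolution_split_plenoptic_1_0(sensor_px, microlens_pitch_px):
    """Simplified spatial/angular sample count for a plenoptic 1.0 camera
    along one sensor dimension (illustrative model, not the paper's derivation).

    sensor_px:          number of sensor pixels along the dimension
    microlens_pitch_px: pixels covered by one microlens along that dimension

    Each microlens yields one spatial sample; the pixels behind it
    become angular samples, so spatial * angular <= sensor_px.
    """
    spatial = sensor_px // microlens_pitch_px   # one sample per microlens
    angular = microlens_pitch_px                # views resolved per microlens
    return spatial, angular

# Example: a 4000-pixel row with 10-pixel microlenses
# trades down to 400 spatial samples with 10 angular samples each.
spatial, angular = resolution_split_plenoptic_1_0(4000, 10)
```

The example makes the design decision concrete: choosing larger microlenses buys angular resolution (finer refocusing and depth sampling) at a directly proportional cost in spatial resolution.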