High dynamic range (HDR) imaging based on an attenuation microarray mask has broad application prospects due to its good real-time performance and small size. However, at the current level of fabrication technology it is difficult to manufacture a micro-attenuation array mask with an adjustable attenuation rate, so in most cases the imaging dynamic range cannot adapt to changes in scene brightness. To this end, this paper proposes a novel imaging system whose dynamic range adapts to the brightness of the scene. Its core components are a micro-polarization array mask mounted on the CMOS surface and a rotatable linear polarizer in front of the lens. By controlling the rotation angle of the front polarizer, the exposure of each CMOS pixel can be precisely controlled, and the dynamic range of the imaging system can therefore be adjusted to match the scene brightness. Through a side-by-side comparison with Sony's multi-quadrant polarization chip, we determined the optimal parameters of the multi-quadrant micro-polarization array mask for extending the dynamic range. The experimental results show that imaging performance remains good even when the dynamic range of the photographed scene is large: the dynamic range of the device adapts to that of the scene, so the processed images consistently retain sufficient detail. By rotating the front polarizer to a specific angle, high-dynamic-range imaging of the scene can be significantly improved.
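As a rough illustration of the exposure-control principle, the sketch below applies Malus's law to a hypothetical 0°/45°/90°/135° micro-polarizer super-pixel (the abstract does not specify the optimized quadrant layout, so this pattern is an assumption): rotating the front polarizer to angle θ sets each pixel's transmittance to cos²(θ − θp).

```python
import numpy as np

# Hypothetical 2x2 super-pixel orientations (degrees); the paper's actual
# optimized quadrant parameters are not given in the abstract.
PIXEL_ANGLES = np.deg2rad(np.array([[0.0, 45.0],
                                    [135.0, 90.0]]))

def pixel_transmittance(front_polarizer_deg, shape):
    """Per-pixel transmittance from Malus's law: T = cos^2(theta - theta_p)."""
    theta = np.deg2rad(front_polarizer_deg)
    # Tile the 2x2 orientation pattern over the full sensor.
    angles = np.tile(PIXEL_ANGLES, (shape[0] // 2, shape[1] // 2))
    return np.cos(theta - angles) ** 2

# Rotating the front polarizer shifts all four effective exposures at once,
# which is how the system adapts its dynamic range to the scene brightness.
T = pixel_transmittance(20.0, (480, 640))
print(T.min(), T.max())  # ratio between darkest and brightest pixel exposures
```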
The effect of clouds on radiation transmission through the atmosphere influences the temperature and energy balance of the Earth. Three-dimensional (3-D) cloud reconstruction has therefore received extensive attention for solar energy estimation, radiative transfer, and climate forecasting. Current 3-D cloud reconstruction methods rely mainly on satellites or aircraft, which have detection limitations and high costs. In contrast, ground-based platforms are becoming more desirable due to their simplicity and persistence, although they suffer from blooming effects caused by the sun. In this paper, we propose and investigate a novel approach to reconstructing 3-D cloud structure from all-sky polarization imaging data obtained by a ground-based polarization imaging platform. In the core algorithm, we use the "normalized polarization degree difference index" (NPDDI), which captures the difference in the degree of polarization (DoP) between cloudy-sky and clear-sky radiation, to retrieve cloud optical thickness (COT). The 3-D structure of the clouds is then obtained by scanning the COTs derived from the different DoPs. Our experimental results show that, benefiting from the inherent advantages of polarization, the method achieves good precision and efficiency and has potential applications in complex meteorological conditions (fog, haze, etc.). The reconstruction results show distinct surfaces corresponding to different cloud patterns. In addition, the dynamic range of the system was improved by merging overexposed and underexposed frames into a single high-dynamic-range result.
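Since the abstract names the NPDDI but does not give its exact formula, the following minimal sketch assumes a standard normalized-difference form and a monotonic NPDDI-to-COT lookup table; both are illustrative stand-ins for the paper's core algorithm.

```python
import numpy as np

def npddi(dop_measured, dop_clear):
    """Normalized polarization degree difference between the measured sky and
    a clear-sky reference. The exact NPDDI definition is not given in the
    abstract; this normalized-difference form is an illustrative assumption."""
    return (dop_clear - dop_measured) / np.maximum(dop_clear + dop_measured, 1e-9)

def cot_map(dop_measured, dop_clear, lut):
    """Map NPDDI to cloud optical thickness through a precomputed lookup table
    `lut` (an assumed monotonic NPDDI -> COT relation); the 3-D structure is
    then obtained by scanning these COTs across the sky dome."""
    idx = (npddi(dop_measured, dop_clear) * (len(lut) - 1)).astype(int)
    return lut[np.clip(idx, 0, len(lut) - 1)]
```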
Existing target tracking methods are susceptible to interference from complex backgrounds and have poor robustness. The maturation of real-time polarization imaging technology has extended the measurable properties of targets from light intensity and spectrum to polarization state, which can enhance the detection of concealed, camouflaged, and special-material targets. Existing target tracking algorithms were used to verify the feasibility of polarization-imaging-based detection and tracking of UAVs (unmanned aerial vehicles) against three typical backgrounds (sky, buildings, and jungle). Experiments showed that against sky and building backgrounds, polarization images enabled robust and fast UAV tracking: the tracking speed was about 2-3 times that of ordinary grayscale images, and the success rate roughly doubled in cases where ordinary grayscale images performed poorly. Against a complex jungle background, DoLP (degree of linear polarization) images outperformed AoP (angle of polarization) images, but both were less robust than ordinary grayscale images.
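For reference, the DoLP and AoP inputs used by the trackers are computed from the four channels of a division-of-focal-plane polarization camera in the standard way; the sketch below assumes the common 0°/45°/90°/135° micro-polarizer layout.

```python
import numpy as np

def dolp_aop(i0, i45, i90, i135):
    """DoLP and AoP images from the four channels of a division-of-focal-plane
    polarization sensor with 0/45/90/135-degree micro-polarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                         # linear Stokes component S1
    s2 = i45 - i135                       # linear Stokes component S2
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # in [0, 1]
    aop = 0.5 * np.arctan2(s2, s1)                        # in (-pi/2, pi/2]
    return dolp, aop
```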
Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, where the imaging system is typically required to offer high resolution, a broad band, and a single-lens structure. This paper describes such an imaging system based on a light field 2.0 camera structure, which can compute the polarization state and the depth from a reference plane for every object point in a single shot. The structure, comprising a modified main lens, a multi-quadrant polarizer, a honeycomb-like microlens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization "glasses" in front of each "eye". Depth can therefore be calculated by matching the relative offset of corresponding patches in neighboring "eyes", while the polarization state is recovered from their relative intensity differences, and the two quantities have approximately equal resolution. An application to navigation under a clear sky shows that this method has high accuracy and strong robustness.
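A minimal sketch of the depth step, assuming rectified neighboring micro-images and a purely horizontal search (the real system would use its calibrated light-field geometry and sub-pixel matching):

```python
import numpy as np

def patch_disparity(eye_a, eye_b, y, x, h, w, search=8):
    """Offset of a patch from micro-image A within neighboring micro-image B,
    found by normalized cross-correlation over a 1-D search range.
    Assumes the patch plus search window lies inside both images."""
    ref = eye_a[y:y+h, x:x+w].astype(float)
    ref -= ref.mean()
    best_score, best_dx = -np.inf, 0
    for dx in range(-search, search + 1):
        cand = eye_b[y:y+h, x+dx:x+dx+w].astype(float)
        cand -= cand.mean()
        den = np.sqrt((ref**2).sum() * (cand**2).sum())
        score = (ref * cand).sum() / den if den > 0 else -np.inf
        if score > best_score:
            best_score, best_dx = score, dx
    return best_dx

# With micro-lens pitch b and image distance f, depth from the reference
# plane scales roughly as z ~ b*f/disparity (an illustrative pinhole-style
# relation; the actual system relies on its calibrated geometry).
```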
Lucky imaging technology is widely applied in astronomical imaging systems because of its low cost and good performance. However, the probability of capturing an excellent lucky image is low, especially for a large-aperture telescope. This paper therefore proposes an adaptive image partition method that extracts the lucky parts of each image, increasing the probability of obtaining a lucky image. The system comprises a telescope and three cameras running synchronously at the image plane, the front defocus plane, and the back defocus plane; the two defocused cameras have the same defocus distance. Our algorithm selects the parts of the space-object image that are least affected by atmospheric turbulence, based on the difference between the images obtained by the front and back defocus cameras. Image stitching is then used to assemble the complete sharp picture.
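The sketch below illustrates the partition-select-stitch idea. Note that it substitutes a generic Laplacian-variance sharpness score for the paper's front/back defocus-difference criterion, which the abstract does not specify in detail.

```python
import numpy as np

def sharpness(tile):
    """Laplacian-variance focus measure; an illustrative stand-in for the
    paper's defocus-difference selection criterion."""
    lap = (np.roll(tile, 1, 0) + np.roll(tile, -1, 0) +
           np.roll(tile, 1, 1) + np.roll(tile, -1, 1) - 4 * tile)
    return lap.var()

def lucky_mosaic(frames, tile=64):
    """For each tile position, keep the sharpest tile across all frames and
    stitch them into one image (hard seams; real stitching blends overlaps)."""
    h, w = frames[0].shape
    out = np.zeros((h, w), dtype=frames[0].dtype)
    for y in range(0, h - h % tile, tile):
        for x in range(0, w - w % tile, tile):
            tiles = [f[y:y+tile, x:x+tile].astype(float) for f in frames]
            best = max(range(len(frames)), key=lambda i: sharpness(tiles[i]))
            out[y:y+tile, x:x+tile] = frames[best][y:y+tile, x:x+tile]
    return out
```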
In the 3-D reconstruction of a target scene with a moving vehicle-borne single-line scanner, monocular visual positioning is used to locate the exact position of the moving scanner. The attitude sensor, camera, and laser scanner are mounted on the same platform installed on the vehicle, so data can be collected synchronously. We obtain a top view from the camera images via inverse perspective mapping (IPM), using the camera attitude observed by the attitude sensor, and acquire a series of such top views at different moments. After extracting Speeded-Up Robust Features (SURF), matching the feature points, and applying the RANSAC algorithm, we calculate the transformation matrix between two adjacent top views, from which the position and heading of the scanner at each moment are computed. On this basis, using the sensors' synchronization control algorithm, the original point cloud data acquired by the laser scanner can be registered to the traveling track of the moving scanner, and the 3-D reconstruction of the target scene is established. Experimental results show that this method is easy to operate, time-efficient, and low cost, and demonstrate its accuracy in reconstructing 3-D scenes.
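A minimal sketch of the motion estimation between two adjacent top views. ORB is used here as a freely available stand-in for SURF (which requires the opencv-contrib build), and cv2.estimateAffinePartial2D plays the role of the RANSAC transform fit:

```python
import cv2
import numpy as np

def relative_motion(top_view_prev, top_view_curr):
    """Estimate the rotation and translation between two adjacent top views
    from matched features, with RANSAC rejecting outlier correspondences."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(top_view_prev, None)
    kp2, des2 = orb.detectAndCompute(top_view_curr, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Similarity transform (rotation + translation + scale) via RANSAC;
    # on a metric top view the scale should stay close to 1.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    theta = np.arctan2(M[1, 0], M[0, 0])   # heading change (radians)
    tx, ty = M[0, 2], M[1, 2]              # translation in top-view pixels
    return theta, (tx, ty)
```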
KEYWORDS: Cameras, Visualization, Roads, Sensors, Ranging, Imaging systems, Lab on a chip, Navigation systems, Data modeling, Global Positioning System
Monocular visual odometry simplifies the hardware and software compared with stereo visual odometry, but it also has a drawback: when the vehicle is in motion, the camera's attitude inevitably changes, which degrades the method's performance. To solve this problem, we propose a monocular visual odometry method based on inverse perspective mapping (IPM). The attitude of the camera is monitored in real time by an attitude sensor while the vehicle is moving. The road-surface images captured by the camera are then transformed into top views by the IPM algorithm, after which image features are extracted with the Speeded-Up Robust Features (SURF) algorithm. Using the random sample consensus (RANSAC) algorithm, the translation and rotation between two adjacent images are estimated, from which the distance traveled and the heading of the vehicle are computed. To test the ranging accuracy of the method, both static and dynamic experiments were carried out. The static experiment showed that the average ranging accuracy of the method reached 1.6%; the dynamic experiment showed a ranging accuracy of 6% and a heading measurement error of less than 1.3°. The method proposed in this paper is therefore easy to operate, time-efficient, and low cost, and its accuracy in ranging and heading measurement is demonstrated.
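For concreteness, the sketch below builds the IPM warp for a flat road plane from the camera intrinsics, pitch, and height. It assumes zero roll and yaw and illustrative axis conventions; in the paper the full attitude comes from the attitude sensor.

```python
import cv2
import numpy as np

def ipm_top_view(img, K, pitch_rad, cam_height_m,
                 out_size=(400, 600), px_per_m=20.0):
    """Warp a road image to a metric top view via inverse perspective mapping.
    Assumes zero roll/yaw and a flat road plane (illustrative conventions)."""
    # World frame: X right, Y forward, Z up; ground plane Z = 0.
    R0 = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], float)  # level camera
    c, s = np.cos(pitch_rad), np.sin(pitch_rad)
    Rp = np.array([[1, 0, 0], [0, c, -s], [0, s, c]], float)  # pitch down
    Rcw = Rp @ R0
    # Plane-induced homography: ground point (X, Y, 1) -> image pixel,
    # with camera translation t = -h * (third column of Rcw).
    H = K @ np.column_stack((Rcw[:, 0], Rcw[:, 1], -cam_height_m * Rcw[:, 2]))
    # Affine placing top-view pixels onto metric ground coordinates:
    # columns centered on X = 0, bottom row at Y = 0 (just ahead of the camera).
    w, h = out_size
    S = np.array([[1 / px_per_m, 0, -w / (2 * px_per_m)],
                  [0, -1 / px_per_m, h / px_per_m],
                  [0, 0, 1]])
    M = H @ S  # maps a top-view pixel to a source-image pixel
    return cv2.warpPerspective(img, M, out_size, flags=cv2.WARP_INVERSE_MAP)
```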