Object detection in aerial images plays an important role in a wide range of applications. Although much effort has been made over the last decade, it remains an active and challenging problem because of the highly complex backgrounds and the large variations in the visual appearance of objects caused by viewpoint change, occlusion, illumination, etc. Recently, many object detectors based on deep learning have demonstrated great advantages in significantly improving detection performance in aerial images. However, the most accurate neural networks usually have hundreds of layers and thousands of channels, and thus require huge computation and memory consumption. Besides, state-of-the-art object detectors are usually fine-tuned from models pretrained on the classification dataset ImageNet, which limits modification of the network architecture and also leads to learning bias because of the domain difference. In this paper we train a lightweight convolutional neural network from scratch to perform object detection in aerial images. When designing the lightweight network, Concatenated Rectified Linear Units (CReLU) and depthwise separable convolutions are employed to reduce the computation cost and model size. When training the lightweight network from scratch, we employ Group Normalization (GN) in each convolution layer, which smooths the optimization landscape and yields more stable gradients. A series of ablation experiments is conducted on the recently published large-scale Dataset for Object detection in Aerial images (DOTA), and the results show that the proposed object detection method with a lightweight network trained from scratch achieves competitive performance with a smaller model size and lower computation cost.
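A minimal PyTorch sketch of one building block combining the three ingredients mentioned in this abstract, CReLU, depthwise separable convolution, and Group Normalization, is given below. The layer sizes and the way the pieces are stacked are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class CReLU(nn.Module):
    """Concatenated ReLU: concat(ReLU(x), ReLU(-x)) doubles the channel count."""
    def forward(self, x):
        return torch.cat([torch.relu(x), torch.relu(-x)], dim=1)


class DepthwiseSeparableBlock(nn.Module):
    """Illustrative block: depthwise 3x3 conv + GN + CReLU, then pointwise 1x1 conv + GN."""
    def __init__(self, in_ch, out_ch, stride=1, gn_groups=8):
        super().__init__()
        # Depthwise 3x3: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.gn1 = nn.GroupNorm(gn_groups, in_ch)
        self.crelu = CReLU()                      # channels: in_ch -> 2 * in_ch
        # Pointwise 1x1 mixes channels and sets the desired output width.
        self.pointwise = nn.Conv2d(2 * in_ch, out_ch, 1, bias=False)
        self.gn2 = nn.GroupNorm(gn_groups, out_ch)

    def forward(self, x):
        x = self.crelu(self.gn1(self.depthwise(x)))
        return torch.relu(self.gn2(self.pointwise(x)))


# Example: a 64-channel feature map from an aerial image crop.
feat = torch.randn(1, 64, 128, 128)
block = DepthwiseSeparableBlock(64, 128, stride=2)
print(block(feat).shape)   # torch.Size([1, 128, 64, 64])
```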
Car detection from unmanned aerial vehicle (UAV) images has become an important research field. However, robust and efficient car detection is still a challenging problem because of the cars' appearance variations and complicated backgrounds. We present an online cascaded boosting framework with histogram of oriented gradients (HOG) features for car detection from UAV images. First, the HOG of the whole sliding window is computed to find the primary gradient direction, which is used to estimate the car's orientation. The sliding window is then rotated according to the estimated orientation, and the HOG features in the rotated window are efficiently computed using the proposed four kinds of integral histograms. Second, to improve the performance of the weak classifiers, a new distance metric is employed instead of the Euclidean distance. Third, we propose an efficient online cascaded boosting scheme for car detection by combining online boosting with a soft cascade. Additionally, to address the problem of imbalanced training samples, more positive samples are extracted from the rotated images, and for postprocessing, a confidence map is obtained to combine multiple detections and eliminate isolated false alarms. A set of experiments on real images shows the applicability and high efficiency of the proposed car detection method.
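A minimal sketch (not the authors' implementation) of the orientation step described above follows: build a coarse gradient-orientation histogram over the whole sliding window, take the dominant bin as the car's principal direction, and rotate the window so that later HOG features are computed in a canonical pose. The bin count and window size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def dominant_orientation(window, n_bins=18):
    """Estimate the principal gradient direction of a grayscale window (degrees in [0, 180))."""
    gy, gx = np.gradient(window.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0          # unsigned orientation
    hist, edges = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])                # centre of the dominant bin


def canonicalise(window, n_bins=18):
    """Rotate the window so its dominant gradient direction becomes horizontal."""
    theta = dominant_orientation(window, n_bins)
    return ndimage.rotate(window, -theta, reshape=False, mode="nearest")


# Example on a synthetic 64x64 grayscale window.
win = np.random.rand(64, 64)
print(dominant_orientation(win))
```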
This paper designs an autocollimator based on multiple reflectors and proposes a direct linear solution for three-dimensional (3D) angle measurement using the observation vectors of the lights reflected from the reflectors. In the measuring apparatus, the multiple reflectors are fixed to the object to be measured and the reflected lights are received by a CCD camera; the light spots in the image are then extracted to obtain the vectors of the reflected lights in space. Any rotation of the object induces a change in the observation vectors of the reflected lights, which is used to solve for the rotation matrix of the object by finding a linear solution of the Wahba problem with the quaternion method; the 3D angle is then obtained by decomposing the rotation matrix. The measuring apparatus can be implemented easily because the light path is simple, and the computation of the 3D angle from the observation vectors is efficient because no iteration is needed. The proposed 3D angle measurement method is verified by a set of simulation experiments.
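A minimal sketch of the quaternion solution to the Wahba problem that the abstract relies on is shown below: given the reflected-light direction vectors before (r_i) and after (b_i) the rotation, the rotation matrix is recovered from the dominant eigenvector of a 4x4 symmetric matrix. The variable names, equal weighting, and the self-check at the end are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np


def wahba_quaternion(b, r, w=None):
    """b, r: (N, 3) unit observation vectors after/before rotation; returns R with b ~ R r."""
    if w is None:
        w = np.ones(len(b))
    B = sum(wi * np.outer(bi, ri) for wi, bi, ri in zip(w, b, r))   # attitude profile matrix
    S = B + B.T
    sigma = np.trace(B)
    z = sum(wi * np.cross(ri, bi) for wi, ri, bi in zip(w, r, b))   # sign matches the scalar-first convention below
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = z
    K[1:, 0] = z
    K[1:, 1:] = S - sigma * np.eye(3)
    vals, vecs = np.linalg.eigh(K)
    q0, q1, q2, q3 = vecs[:, np.argmax(vals)]      # optimal unit quaternion (scalar first)
    # Quaternion -> rotation matrix (Hamilton convention).
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])


# Quick check: recover a known 30-degree rotation about the z-axis from five vector pairs.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
r = np.random.randn(5, 3)
r /= np.linalg.norm(r, axis=1, keepdims=True)
b = r @ R_true.T
print(np.allclose(wahba_quaternion(b, r), R_true))   # True
```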
The automatic detection of visually salient information from abundant video imagery is crucial, as it plays an important role in surveillance and reconnaissance tasks for Unmanned Aerial Vehicles (UAVs). A real-time approach is proposed for detecting salient objects on roads, e.g. stationary and moving vehicles or people, based on region segmentation and saliency detection within related domains. Traditional methods typically depend on additional scene information and auxiliary thermal or infrared (IR) sensing for secondary confirmation. In contrast, the proposed approach detects the objects of interest directly from video imagery captured by an optical camera mounted on a small UAV platform. To validate the proposed salient object detection approach, 25 Hz video data from our low-speed small UAV are tested. The results demonstrate that the proposed approach performs well in isolated rural environments.
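A minimal sketch of a generic bottom-up saliency detector (the spectral residual method of Hou and Zhang) is given below as a stand-in for the saliency step described above; it is not the authors' region-segmentation pipeline, and the frame size, smoothing parameters, and threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage


def spectral_residual_saliency(gray, out_size=64, sigma=2.5):
    """gray: 2D float array (one grayscale video frame) -> saliency map normalised to [0, 1]."""
    # Work at a coarse scale, as the spectral residual method prescribes.
    small = ndimage.zoom(gray, out_size / np.array(gray.shape, dtype=float))
    spectrum = np.fft.fft2(small)
    log_amp = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)
    # Spectral residual = log amplitude minus its local average.
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)


# Example: threshold the map to get candidate salient regions in a frame.
frame = np.random.rand(480, 640)
sal_map = spectral_residual_saliency(frame)
candidates = sal_map > 0.7
```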
Camera calibration is one of the most basic and important processes in the field of optical measurement. Generally, the objective of camera calibration is to estimate the internal and external parameters of the cameras, while the orientation error of the optical axis is not yet included. The orientation error of the optical axis is an important factor that seriously affects measuring precision in high-precision measurement, especially for distant aerospace measurements in which the object distance is much longer than the focal length, so that the orientation errors are magnified thousands of times. In order to eliminate the influence of the orientation error of the camera optical axis, the imaging model of the camera is analysed and established in this paper, and the calibration method is also introduced. Firstly, we analyse the causes of the optical-axis error and its influence. Then, we establish the model of the optical-axis orientation error and the imaging model of the camera based on their practical physical meaning. Furthermore, we derive a bundle adjustment algorithm that computes the internal and external camera parameters and the absolute orientation of the camera optical axis simultaneously at high precision. In a numerical simulation, we solve the camera parameters using the bundle adjustment optimization algorithm and then correct the image points with the calibration results according to the model of the optical-axis error; the simulation results show that our calibration model is reliable, effective and precise.
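A minimal sketch of the kind of bundle adjustment described above, jointly refining intrinsic and extrinsic parameters by minimising the reprojection error over known 3D control points, follows. The authors' additional optical-axis-orientation error term is not reproduced here; the simple pinhole model and the parameter set below are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def project(params, points_3d):
    """params = [f, cx, cy, rx, ry, rz, tx, ty, tz] (pinhole model, no distortion)."""
    f, cx, cy = params[:3]
    R = Rotation.from_rotvec(params[3:6]).as_matrix()   # rotation as a Rodrigues vector
    t = params[6:9]
    p_cam = points_3d @ R.T + t                          # world -> camera frame
    return np.column_stack([f * p_cam[:, 0] / p_cam[:, 2] + cx,
                            f * p_cam[:, 1] / p_cam[:, 2] + cy])


def residuals(params, points_3d, points_2d):
    """Reprojection error, flattened for the least-squares solver."""
    return (project(params, points_3d) - points_2d).ravel()


def calibrate(points_3d, points_2d, initial_guess):
    """Levenberg-Marquardt refinement of all camera parameters simultaneously."""
    result = least_squares(residuals, initial_guess, method="lm",
                           args=(points_3d, points_2d))
    return result.x
```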
The high portability of small Unmanned Aerial Vehicles (UAVs) allows them to play an important role in surveillance and reconnaissance tasks, so the military and civilian demand for UAVs is constantly growing. Recently, we have developed a real-time video exploitation system for our small UAV, which is mainly used in forest patrol tasks. Our system consists of six key modules: image contrast enhancement, video stabilization, mosaicing, salient target indication, moving target indication, and display of the footprint and flight path on a map. Extensive testing of the system has been carried out, and the results show that it performs well.
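A minimal sketch of one of the modules listed above, video stabilization, is given below using a standard feature-tracking recipe (not necessarily the authors' implementation): track corners between consecutive frames, estimate a similarity transform, and warp the new frame back onto the previous one. The OpenCV parameter values are illustrative assumptions.

```python
import cv2
import numpy as np


def stabilize_pair(prev_gray, curr_gray, curr_frame):
    """Warp curr_frame so it aligns with prev_gray (grayscale uint8 images)."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=10)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    # Similarity transform (rotation + translation + scale) between the two frames.
    M, _ = cv2.estimateAffinePartial2D(pts_curr[good], pts_prev[good])
    h, w = prev_gray.shape
    return cv2.warpAffine(curr_frame, M, (w, h))
```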