The stereo matching image processing studied here is based on characteristic area properties, geometrical restrictions, and color comparisons. The characteristic areas are the 4-directional connectivity areas bounded by edges. The geometrical restrictions are the size-difference ratio between characteristic areas, the displacement ratio between the centers of gravity of matched characteristic areas, and the size-distortion ratio between the added stereo matching pattern union and the reference stereo matching pattern. The proper stereo matching conditions are a maximal and sufficiently large pixel count on the overlap area, satisfaction of the average-row-difference-ratio condition on the overlap area, and a similar average color difference on the overlap area for the proper combinations of characteristic areas. The ability of the studied stereo matching was tested on a colorful soccer ball object. The experiments yielded the proper correspondence between the centers of gravity of the characteristic areas, and the geometrical restrictions proved effective for selecting good combinations of characteristic areas between the stereo matching patterns.
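The geometrical restrictions above amount to simple screening checks on candidate area pairs. The following Python sketch illustrates two of them (size-difference ratio and row alignment of the centers of gravity under the epipolar assumption); the threshold values `max_size_ratio` and `max_row_shift` are illustrative assumptions, not values from the text:

```python
def passes_geometric_restrictions(area_l, area_r, cog_l, cog_r,
                                  max_size_ratio=1.5, max_row_shift=3):
    """Screen a candidate pair of characteristic areas.

    area_l/area_r: pixel counts of the left/right characteristic areas.
    cog_l/cog_r: (row, col) centers of gravity of the areas.
    Thresholds are illustrative, not taken from the paper.
    """
    # Size-difference ratio between the two areas
    size_ratio = max(area_l, area_r) / min(area_l, area_r)
    if size_ratio > max_size_ratio:
        return False
    # On the epipolar assumption, matched centers of gravity
    # should lie on (nearly) the same image row.
    if abs(cog_l[0] - cog_r[0]) > max_row_shift:
        return False
    return True
```

A pair that survives both checks would then go on to the overlap-area pixel-count and color-difference conditions.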
We have studied object silhouettes and surface direction through stereo matching image processing to recognize the position, size, and surface direction of an object. For this study we construct the pixel-number change distribution of the HSI color component level, the binary component-level image obtained with a standard-deviation threshold, the 4-directional pixel connectivity filter, the surface-element correspondence by stereo matching, and the projection rule relation. We note that the HSI color component level of the object image changes more stably near the focus position than over the unfocused range, so we use the HSI color component level images near the fine-focused position to extract the object silhouette, and the silhouette is extracted properly. We find the surface direction of the object from the pixel numbers of the corresponding surface areas and the projection cosine rule after stereo matching by characteristic areas and synthesized colors. Epipolar geometry applies in this study because the pair of imagers is arranged on the same epipolar plane. The surface direction detection results in the proper angle calculation, so the construction of object silhouettes and the detection of the object's surface direction are realized.
We have developed stereo matching image processing based on synthesized color and corresponding color areas for ranging and image recognition. The images from a pair of stereo imagers may disagree with each other owing to size changes, displaced positions, appearance changes, and deformation of characteristic areas. We therefore construct the synthesized color and the corresponding color areas with the same synthesized color to make the stereo matching distinct, in three steps. The first step makes a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency-density distribution to find the threshold level of the binarization; we used the Daubechies wavelet transform for the differentiation in this study. The second step derives the synthesized color by averaging color brightness between binary edge points, alternating between the horizontal and vertical directions; the averaging is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step extracts areas of the same synthesized color by collecting pixels of the same synthesized color and grouping them by 4-directional connectivity. The matching areas for stereo matching are determined from these synthesized color areas, and the matching point is the center of gravity of each area, so the parallax between the pair of images follows easily from the centers of gravity. Experiments on a soccer ball toy showed that stereo matching by the synthesized color technique is simple and effective.
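The third step, grouping same-color pixels by 4-directional connectivity and taking each area's center of gravity, can be sketched as a breadth-first flood fill. This is an illustrative implementation, not the authors' code; `labels` stands for a 2-D grid of synthesized-color indices:

```python
from collections import deque

def group_same_color_areas(labels):
    """Group pixels of identical synthesized color by 4-connectivity.

    labels: 2-D list of synthesized-color indices.
    Returns a list of areas, each a list of (row, col) pixels.
    """
    h, w = len(labels), len(labels[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            color = labels[r][c]
            area, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                area.append((y, x))
                # 4-directional connectivity: up, down, left, right
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx]
                            and labels[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            areas.append(area)
    return areas

def center_of_gravity(area):
    """Matching point of a synthesized-color area."""
    n = len(area)
    return (sum(p[0] for p in area) / n, sum(p[1] for p in area) / n)
```

The parallax then follows from the column difference of matched centers of gravity between the left and right images.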
We have developed stereo matching image processing based on matching selected finite-length edge lines by the least squares method to find local distance information in the view. The method works on a pair of high-pass wavelet images from which the matching edge lines are found; these high-pass wavelet images are also used to choose the focused images by applying a threshold operation to them. Each imager has functions for focusing, changing view angle, and changing aperture via servomotors and microcomputers. It is mounted on a gimbal unit for independent yaw and pitch movement, and the pair of imagers is mounted on a yaw gimbal for common yaw movement. The matching edge line is derived as follows: make the 2-valued high-pass image corresponding to the focused image; group high-valued pixels in the 2-valued high-pass image by the 8-directional connectivity rule; thin the grouped image by the Hilditch thinning method; trace the thinned line image to number the pixels along each line continuously; calculate the line linearity by the least squares method at each pixel with an adjacent finite number of pixels; find the line segments whose linearity lies within a limited root mean square difference between the least-squares line and the thinned line segment; and construct the standard matching edge line by reducing the number of pixels of the matching edge line to tolerate the deformation between the pair of images. The selected standard matching edge line is evaluated by autocorrelation on the standard thinned line image to check for the existence of similar line segments. Using this autocorrelation information, the edge line matching is evaluated by moving the pixel point through the paired thinned line image and calculating the root mean square of the difference between them.
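The linearity test at the heart of the segment selection, fitting a least-squares line to a window of traced pixels and measuring the RMS residual, can be sketched as follows. This is an illustrative implementation under the usual closed-form least-squares fit, not the authors' code:

```python
def segment_linearity_rms(points):
    """RMS residual of a least-squares line fit to traced line pixels.

    points: list of (x, y) pixel coordinates along the thinned line.
    Fits y = a*x + b in the least-squares sense and returns the root
    mean square of the residuals; small values mean a straight segment.
    """
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] ** 2 for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    denom = n * sxx - sx * sx
    if denom == 0:  # all x equal: vertical line, fit x = const instead
        mx = sx / n
        return (sum((p[0] - mx) ** 2 for p in points) / n) ** 0.5
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return (sum((p[1] - (a * p[0] + b)) ** 2 for p in points) / n) ** 0.5
```

Sliding this window along the numbered pixels and keeping segments whose RMS stays below a tolerance yields the candidate matching edge lines.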
We have developed geometrical stereo matching image guidance for a ground vehicle based on focused-image pixel grouping and statistical operations on stacked images. The two imagers are mounted on a 5-degrees-of-freedom gimbal unit. The gimbal unit gives each imager independent yaw and pitch movement and applies the same rigid yaw rotation to both imagers.
The fast-focus image is found by calculating the developed wavelet focus measure on the horizontal and vertical high-pass images of the Daubechies wavelet transformed image. The highest wavelet focus measure value among them identifies the best-focus image directly. This focusing operation works as finely as other differential-image techniques.
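The focus-measure selection can be sketched in Python. For brevity, a one-level Haar high-pass stands in for the Daubechies transform (an assumption on our part); any differential high-pass filter behaves similarly, as the text notes:

```python
def highpass_focus_measure(img):
    """Sum of absolute horizontal and vertical high-pass responses.

    A one-level Haar wavelet stands in for the Daubechies transform
    used in the paper. img: 2-D list of grayscale values with even
    dimensions; sharper images give larger measures.
    """
    h, w = len(img), len(img[0])
    measure = 0.0
    for r in range(h):
        for c in range(0, w - 1, 2):      # horizontal detail coefficients
            measure += abs(img[r][c] - img[r][c + 1])
    for c in range(w):
        for r in range(0, h - 1, 2):      # vertical detail coefficients
            measure += abs(img[r][c] - img[r + 1][c])
    return measure

def best_focus_index(image_stack):
    """Index of the sharpest image in a focus stack."""
    return max(range(len(image_stack)),
               key=lambda i: highpass_focus_measure(image_stack[i]))
```

Running `best_focus_index` over the images captured during a focus sweep selects the best-focus frame directly, without a separate search.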
We apply the stereo matching operation between the binary blocked high-pass images corresponding to the best-focus image. To construct the binary blocked high-pass image, we apply 8-directional adjacent-pixel connection to the binary high-pass image; the group of main block elements of the binary image then serves as an appropriate matching block.
The wide-image and narrow-image stereo matching operations on the binary high-pass image give correct matches. In particular, the narrow-image stereo matching operation provides the common area of the right and left images.
To find surfaces we use the brightness variation of each pixel through the images stacked during the focusing operation. The calculated brightness variations are the standard deviation and the absolute deviation from the average brightness at each pixel. We apply a threshold to the variation and deviation to classify the image into mild-variation and rough-variation brightness surface areas; the rough-variation brightness surface area covers the group of main blocked elements in the binary image.
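The per-pixel classification over the focus stack can be sketched as follows; the standard-deviation variant is shown, and the threshold value is an illustrative assumption:

```python
def classify_surface_pixels(stack, threshold):
    """Classify each pixel as mild or rough brightness variation.

    stack: list of 2-D grayscale images taken through the focusing sweep.
    Returns a 2-D boolean map: True = rough variation (likely edge or
    texture), False = mild variation (smooth surface). The threshold
    value is illustrative, not from the paper.
    """
    n = len(stack)
    h, w = len(stack[0]), len(stack[0][0])
    rough = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            values = [img[r][c] for img in stack]
            mean = sum(values) / n
            std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
            rough[r][c] = std > threshold
    return rough
```

The absolute-deviation variant replaces the squared terms with `abs(v - mean)`; either map can then be compared against the blocked binary image.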
We have developed a wide- and narrow-image dual guidance system for a ground vehicle based on fast focusing and stereo matching. Fast focusing captures the distance information of the outside world. Stereo matching on the two focused wide images finds the characteristic position; fine distance information is then obtained by fast focusing on the narrow images from the camera with the long focal length.
Our fast focusing algorithm works precisely on differential images such as the Daubechies wavelet high-pass image and the Roberts, Prewitt, Sobel, and Laplacian images.
After stereo matching on the focused wide images, the two cameras serve the narrow-image focusing operation. This procedure establishes reliable detection of the object and gives fine image information about it. The pointing of the long-focal-length narrow-image camera uses the pixel-address information from the stereo matching and a 2-axis gimbal of precise resolution.
We experimented with object detection by stereo matching and fine ranging by narrow-image focusing. The experiments gave appropriate detection and fine pointing of the narrow-image focusing, meeting the guidance requirements of the ground vehicle.
We have developed a dual-camera image guidance system for an autonomous vehicle based on fast focusing and a spot RGB spectrum similarity operation. Fast focusing captures the distance information of the outside world as a whole; the spot RGB spectrum similarity operation finds the object surface portion in the image.
Our fast focusing algorithm works precisely on differential images such as the Daubechies wavelet high-pass image and the Roberts, Prewitt, Sobel, and Laplacian images.
The spot RGB spectrum similarity operation for surface detection comes from the idea of the laser range finder: an illuminated coherent laser reflects from the object surface, and the reflected laser is detected by a spectrum-band detector. Analogously, the RGB spectrum distribution at a selected spot in one camera should give similar spectrum information at the position-matched spot in the other camera if the selected spot corresponds to the surface of the object.
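One way to realize the spot similarity test, offered here as an illustrative sketch rather than the authors' formulation, is the cosine similarity of the mean RGB vectors of the two spots:

```python
def rgb_spot_similarity(spot_a, spot_b):
    """Cosine similarity of the mean RGB vectors of two image spots.

    spot_a/spot_b: lists of (r, g, b) pixel tuples sampled at the
    position-matched spots of the two cameras. A value near 1.0
    suggests both spots see the same object surface.
    """
    def mean_rgb(spot):
        n = len(spot)
        return [sum(p[i] for p in spot) / n for i in range(3)]

    a, b = mean_rgb(spot_a), mean_rgb(spot_b)
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    if na == 0 or nb == 0:
        return 0.0
    return dot / (na * nb)
```

Thresholding this similarity (a design choice, with the threshold tuned to the cameras' color response) decides whether the matched spot lies on the object surface.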
We move the autonomous vehicle based on the distance and surface detection of the outside world from the controlled dual-color-camera system. The vehicle is equipped with controllable independent four-wheel drive, so it can maneuver around an object geometrically even when directly in front of it. The dual-camera image guidance system is mounted on a two-axis gimbal system to aim at the object in space.
We studied infrared image guidance for a ground vehicle based on fast wavelet image focusing and tracking. We use the image of an uncooled infrared imager mounted on a two-axis gimbal system and a newly developed autofocusing algorithm based on the Daubechies wavelet transform.
The new focusing algorithm processes the high-pass-filter result of the Daubechies wavelet transform to detect objects directly. This focusing smoothly gives the distance information of the outside world, and the gimbal system gives the direction of objects in the outside world, matching the sense of a spherical coordinate system.
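Combining the focused distance with the gimbal angles gives each object a point in the vehicle frame. The sketch below assumes a particular spherical convention (yaw in the horizontal plane from the x axis, pitch upward from that plane), which is our assumption rather than the paper's stated convention:

```python
import math

def direction_to_cartesian(distance, yaw_deg, pitch_deg):
    """Convert focused distance and gimbal angles to a Cartesian point.

    Convention (an assumption, not from the paper): yaw is measured in
    the horizontal plane from the x axis, pitch upward from that plane.
    Returns (x, y, z) in the vehicle frame.
    """
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = distance * math.cos(pitch) * math.cos(yaw)
    y = distance * math.cos(pitch) * math.sin(yaw)
    z = distance * math.sin(pitch)
    return x, y, z
```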
We installed this system on a handmade electric ground-vehicle platform powered by a 24 VDC battery. The vehicle carries rotary encoder units and inertial rate sensor units for correct navigation. The image tracking also uses the new wavelet focusing within several image processing steps.
The handmade electric ground-vehicle platform is about 1 m long, 0.75 m wide, and 1 m high, and weighs 50 kg.
We tested the infrared image guidance indoors and outdoors using the electric vehicle, and the tests show good results for the developed infrared image guidance based on the new wavelet image focusing and tracking.
We have developed the Space Imaging Infrared Optical Guidance for Autonomous Ground Vehicle based on an uncooled infrared camera and a focusing technique to detect objects to be evaded and to set the drive path. For this purpose we built a servomotor drive system to control the focus of the infrared camera lens. To determine the best focus position we use autofocus image processing based on the 4-term Daubechies wavelet transform, and from the determined best focus position we compute the distance to the object. We built an aluminum-frame ground vehicle, 900 mm long and 800 mm wide, to carry the autofocus infrared unit; it mounts an Ackermann front steering system and a rear motor drive system. To confirm the guidance ability, we ran experiments on the detection of an actual car on the road and of the roadside wall by the infrared autofocus unit. As a result, the autofocus image processing based on the Daubechies wavelet transform detects the best-focus image clearly and gives the depth of the object from the infrared camera unit.
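The step from best focus position to object distance can be illustrated with the thin-lens equation, 1/f = 1/u + 1/v. This is a simplified model we assume for illustration; the actual lens calibration may differ:

```python
def object_distance_from_focus(focal_length_mm, image_distance_mm):
    """Thin-lens estimate of object distance from the best-focus position.

    1/f = 1/u + 1/v  =>  u = f*v / (v - f)
    focal_length_mm: lens focal length f.
    image_distance_mm: lens-to-detector distance v at best focus.
    Returns the object distance u in mm (illustrative thin-lens model;
    the real camera calibration may differ).
    """
    if image_distance_mm <= focal_length_mm:
        raise ValueError("image distance must exceed the focal length")
    return (focal_length_mm * image_distance_mm
            / (image_distance_mm - focal_length_mm))
```

For example, a 50 mm lens focused with the detector at 55 mm places the object at 550 mm.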
We have developed the Space Imaging Optical Guidance for Ground Vehicle, which uses the narrow field of view of the Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, to detect objects to be evaded, and the wide field of view of a fine visible-light camera to set the drive path.
In particular, the angle between the optical axis of the narrow field of view and the roadside object is very small, so we applied image segmentation to the narrow field of view. This provides accurate detection of the roadside object and its distance.
To confirm the guidance ability, we tested the Space Imaging Optical Guidance for Ground Vehicle on a road containing an object and bounded by a roadside wall. The system detects the object and the surface of the wall, together with their distances.
We have developed the Space Imaging Measurement System, based on a fixed lens and a fast-moving detector, for the control of an autonomous ground vehicle. Space measurement is the most important task in developing such a vehicle.
In this study we move the detector back and forth along the optical axis at a fast rate to measure three-dimensional image data. The system suits an autonomous ground vehicle because it emits no optical energy to measure distance, which preserves safety, and it uses a visible-range digital camera, which reduces the cost of three-dimensional data acquisition compared with an imaging laser system.
Many pieces of narrow space-imaging measurement data can be combined into wide-range three-dimensional data, improving image recognition of the object space.
To achieve fast movement of the detector, we built a counter-mass balance into the mechanical crank system of the Space Imaging Measurement System, and we added a duct to block optical noise from rays not passing through the lens. The object distance is derived from the focus distance related to the best-focused image data, which is selected as the image with the maximum standard deviation among the standard deviations of the series of images.
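The best-focus selection by maximum standard deviation over the image series can be sketched directly; this is an illustrative implementation of the stated criterion:

```python
def best_focus_by_stddev(image_series):
    """Pick the best-focused image as the one with maximum brightness
    standard deviation.

    image_series: list of 2-D grayscale images captured while the
    detector sweeps along the optical axis.
    Returns the index of the sharpest image in the series.
    """
    def stddev(img):
        pixels = [v for row in img for v in row]
        mean = sum(pixels) / len(pixels)
        return (sum((v - mean) ** 2 for v in pixels) / len(pixels)) ** 0.5

    return max(range(len(image_series)),
               key=lambda i: stddev(image_series[i]))
```

The index of the selected image, together with the known detector position at that instant of the sweep, then yields the focus distance and hence the object distance.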
We developed an optical fiber imaging laser radar based on the focal plane array detection method using a number of detectors smaller than the focal plane array resolution. For this focal-array detection method, we produced an optical fiber dissector, a movable aperture, and a small-number parallel multichannel pulse counter receiver. The optical fiber dissector has one vertical cross section of a 35×35 optical fiber square array for the focal plane at one end, and 25 vertical cross sections of 25 optical fiber bundles for the 25-channel parallel multichannel pulse counter receiver at the other end. Each optical fiber bundle has 49 optical fibers selected from the 35×35 optical fiber square array with no overlap. The movable aperture has a window the size of a 5×5 optical fiber cross section to ensure no crosstalk in detecting the divergent pulse laser beam, which is focused on some 5×5 area of the 35×35 optical fiber square array according to its scanning direction. The developed optical fiber imaging laser radar shows high range resolution and crosstalk-free angle resolution: the range resolution is under 15 cm, and the angle resolution is 1 pixel.
We have been studying the Chemical Oxygen-Iodine Laser (COIL) Thermal Image Marker System for far-field objects. This system marks a distinguishable thermal image on far-field objects with the COIL laser beam to guide an imaging-infrared homing air vehicle to the marked thermal image with pinpoint accuracy. For the development of this system, the study of the COIL resonator is the main task in generating the required high-quality laser beam.
Therefore we first made two kinds of experiments: one to generate a distinguishable thermal image mark (TIM) on an object with the stable resonator of the 13 kW output COIL system in the near field, and another to improve the laser beam quality with unstable resonators of the COIL system in the low-gain condition. We then studied the high-power unstable resonator design for this system with a numerical simulation based on the experimental data and the two-dimensional Fresnel-Kirchhoff integration method with a partially coherent scalar electric field. Finally, we performed numerical far-field TIM generation to verify TIM generation with the laser beam of the studied high-power unstable resonator; the simulation shows fine TIM generation.
The results of the experiments and the resonator design study show that it is possible to realize a good thermal image mark, a good-quality laser beam, and a promising unstable resonator for the COIL Thermal Image Marker System.
We have developed the Optical Fiber Imaging Laser Radar based on focal plane array detection using a number of detectors smaller than the focal plane array resolution. For this focal-array detection, we first made the optical fiber dissector, which has one vertical cross section of a 35 x 35 optical fiber square array at one end to receive the laser pulse reflected from an object, and 25 vertical cross sections of 7 x 7 optical fiber arrays, extracted from the 35 x 35 square array, at the 25 other ends to guide the dissected laser pulse to the 25 InGaAs photodiode pulse detectors of the 25-channel parallel pulse counter. The 7 x 7 optical fiber arrays are the mod (5,5) residual classes of the 35 x 35 optical fiber square array. Second, we shaped most of the Erbium-doped fiber laser pulse into a beam with an elliptic cross section that falls within one 5 x 5 area of the vertical cross section of the 35 x 35 array when received, and we made a mask with a window the size of a 5 x 5 optical fiber cross section to ensure no crosstalk in the receiver of the Optical Fiber Imaging Laser Radar. Then we controlled the direction of the shaped laser pulse to scan, and reconstructed the data received from the 25 channels of the parallel pulse counter into the actual-order data of the Imaging Laser Radar. The developed Imaging Laser Radar shows that the image resolution between the range image and the object is within one pixel and that the range resolution is under 15 cm.
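The residual-class routing is what makes the 25-detector scheme crosstalk-free: fibers sharing (row mod 5, col mod 5) go to the same bundle, so any 5 x 5 spot on the focal plane contains exactly one fiber per detector channel. An illustrative sketch of that mapping:

```python
def bundle_of_fiber(row, col):
    """Residual-class assignment of fiber (row, col) in the 35x35 array.

    Fibers sharing (row mod 5, col mod 5) form one of the 25 bundles,
    each with (35/5) x (35/5) = 49 fibers. Any 5x5 window of the array
    then contains exactly one fiber per bundle, so a laser spot confined
    to a 5x5 area never puts two returns on the same detector channel.
    """
    return (row % 5) * 5 + (col % 5)   # bundle index 0..24

def fibers_in_bundle(bundle):
    """All 49 (row, col) fibers routed to one detector channel."""
    br, bc = divmod(bundle, 5)
    return [(r, c) for r in range(br, 35, 5) for c in range(bc, 35, 5)]
```

Reconstructing the image then inverts this mapping: given the scan direction (which fixes the active 5 x 5 window), each channel's pulse is placed back at the unique fiber of its class inside that window.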
We have developed a 1-Mpixel infrared charge sweep device (IRCSD) imager for thermal imaging in the 3- to 5-μm band. The device of this imager is a 1040 x 1040 monolithic PtSi Schottky-barrier (SB) array using the charge sweep device (CSD) readout architecture. The pixel size is 17 x 17 μm² and the fill factor of the device is 44%. In this imager system, four video signals are read out from four independent channels on the device. The processing of these four outputs, such as sample and hold (S/H), offset control, and image correction, is performed in parallel, after which the outputs are combined to produce a high-definition TV (HDTV; 1125 lines, 30 Hz) format thermal image in real time. The noise-equivalent temperature difference (NETD) with f/1.2 optics at a 27°C background is 0.13°C at the HDTV output stage.