Hyperspectral imaging is beneficial for non-destructive agricultural inspection, and three-dimensional reconstruction modeling is a powerful tool for inspecting plant phenotypes. This study proposes an approach that combines three-dimensional reconstruction modeling and hyperspectral images into four-dimensional data. These data contain not only the three-dimensional structural information of an object of interest but also the spectral information of every point on its surface. First, hyperspectral and visible images of the object of interest are acquired with hyperspectral and visible cameras. Second, the high-resolution visible images are used to reconstruct a three-dimensional surface model of the object. Third, matching the hyperspectral images with the visible images establishes the correspondence between the hyperspectral images and the three-dimensional model. Furthermore, a biomarker index can be derived from the hyperspectral data, transformed into a surface texture, and combined with the three-dimensional model to form a three-dimensional biomarker model.
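As a concrete illustration of the final step, the sketch below derives a simple biomarker index from a hyperspectral cube and converts it into an 8-bit texture image that could be draped over a 3D surface model. The choice of NDVI and the red/NIR band wavelengths are illustrative assumptions; the abstract does not specify which index is used.

```python
# Minimal sketch (not the authors' exact pipeline): derive a biomarker index
# from a hyperspectral cube and turn it into an 8-bit texture image.
import numpy as np

def biomarker_texture(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """cube: (rows, cols, bands) reflectance; wavelengths: (bands,) in nm."""
    red = cube[:, :, np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[:, :, np.argmin(np.abs(wavelengths - nir_nm))]
    index = (nir - red) / (nir + red + 1e-9)                 # NDVI-style biomarker index
    tex = np.uint8(255 * (index - index.min()) / (np.ptp(index) + 1e-9))
    return index, tex                                        # tex can serve as the model's texture map

# Example with a synthetic hyperspectral cube
cube = np.random.rand(64, 64, 120)
wavelengths = np.linspace(400, 1000, 120)
index, tex = biomarker_texture(cube, wavelengths)
print(tex.shape, tex.dtype)
```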
Image geo-localization estimates an image's global position by comparing it with a large-scale image database with known positions. This localization technology can serve as an alternative positioning method for unmanned aerial vehicles (UAVs) when a global positioning system is unavailable. Feature-based image-matching methods typically rely on descriptors constructed from pixel-level key points, and the number of descriptors in one image can be substantial. Filtering and comparing such large quantities of descriptors for image matching is time-consuming, and because of the large scale of satellite images, matching them with aerial images in this way is difficult to achieve in real time. Thus, this paper proposes a semantic-matching-based approach for real-time image geo-localization. The types, quantities, and geometric information of objects in satellite images are extracted and used as semantic-level descriptors. The semantic-level descriptors of an aerial image captured by a UAV are extracted by an object recognition model. The quantity of semantic-level descriptors is orders of magnitude smaller than that of pixel-level descriptors, so the location of the aerial image can be rapidly determined by matching semantic-level descriptors between the aerial image and satellite images. In the experiments, matching an aerial image with satellite images took 0.194 seconds per image with the semantic matching method and 125.68 seconds per image with a feature-based matching method; semantic matching is therefore about 648 times faster. The results demonstrate that the proposed semantic matching method has the potential for real-time image geo-localization.
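The sketch below illustrates the idea of matching semantic-level descriptors rather than pixel-level ones. The descriptor (a histogram of detected object classes) and the histogram-intersection score are simplified assumptions, not the paper's actual descriptor or matching rule.

```python
# Illustrative sketch of semantic-level matching: each image is summarized by
# counts of recognized object classes, and the best-matching satellite tile is
# the one whose count vector is closest to the aerial image's.
from collections import Counter

def semantic_descriptor(objects):
    """objects: list of class labels detected in an image."""
    return Counter(objects)

def match_score(desc_a, desc_b):
    classes = set(desc_a) | set(desc_b)
    return sum(min(desc_a[c], desc_b[c]) for c in classes)  # histogram intersection

aerial = semantic_descriptor(["building", "building", "pond", "road"])
tiles = {
    (24.78, 121.00): semantic_descriptor(["building", "road", "road"]),
    (24.79, 121.01): semantic_descriptor(["building", "building", "pond", "road"]),
}
best = max(tiles, key=lambda k: match_score(aerial, tiles[k]))
print("estimated tile location:", best)
```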
Scheduling the fruit tree Syzygium samarangense to bloom and bear fruit is a challenging task that requires highly experienced and knowledgeable professionals; specific amounts of fertilizer must be supplemented at specific times, so it is also labor-intensive. Our goal is to provide a method to automatically identify the nutritional and growing conditions of Syzygium samarangense. In this work, we applied both multispectral and hyperspectral imaging techniques to measure parts of Syzygium samarangense, including branches, leaves, flowers, and fruits. We examined several important spectral indexes: a water content index, a biomass content index, a structure index, and a chlorophyll content index. A custom-built hyperspectral imaging system was used. The system includes two spectrographs and two charge-coupled devices; one spectrograph disperses light in the visible wavelength range, and the other works in the short-wavelength infrared region. With line-scanning data acquisition, the collected reflectance data include a two-dimensional spatial image as well as a reflectance spectrum. The hyperspectral imaging data were compared with results from a commercial multispectral imager. The selected lightweight multispectral imager was portable by a UAV. For the spectral indexes we examined, the data from the two techniques were highly linearly correlated, indicating that data from the multispectral imager are sufficiently reliable for orchard management. On the other hand, additional spectral characteristics appeared only in the hyperspectral imaging data; calculated images that map hyperspectral indexes showed patterns of diseased areas in leaves. Therefore, we built an on-site orchard monitoring procedure that combines both multispectral and hyperspectral imaging techniques for wax apple trees.
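A minimal sketch of the cross-instrument check mentioned above: the Pearson correlation between an index measured by the multispectral imager and the same index derived from the hyperspectral data. All values are synthetic.

```python
# Sketch of the cross-instrument check: compute the linear (Pearson)
# correlation between an index measured by the multispectral imager and the
# same index derived from hyperspectral data. Values are synthetic.
import numpy as np

multispectral_index = np.array([0.61, 0.55, 0.72, 0.48, 0.66, 0.59])
hyperspectral_index = np.array([0.63, 0.53, 0.74, 0.50, 0.64, 0.60])

r = np.corrcoef(multispectral_index, hyperspectral_index)[0, 1]
print(f"Pearson r = {r:.3f}")   # r close to 1 indicates the two instruments agree
```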
KEYWORDS: 3D modeling, Unmanned aerial vehicles, Sensors, RGB color model, Data modeling, Agriculture, 3D image processing, Control systems, Cameras, Systems modeling
As unmanned aerial vehicles (UAVs) become ubiquitous, many early studies using UAVs for three-dimensional (3D) modeling systems have been proposed. Using UAVs improves efficiency and reduces the time cost of collecting data. Moreover, the light weight and hovering capability of newly designed UAVs allow users to gather precise data over rugged terrain and through groups of trees, and to build detailed 3D models and hyperspectral images. With such high-resolution 3D model data, the texture and shape of the observed object can be seen and subjected to further analysis. However, the information in 3D modeling data built from broad RGB bands is not adequate for vegetation research. Hyperspectral imaging collects information in tens of narrow bands and can reveal subtle differences between abnormal and normal conditions while a plant is growing. Therefore, how to combine hyperspectral data with a 3D modeling system is important when researchers want to obtain location data and spectral images simultaneously. In this study, the UAV carries both kinds of systems, and we attempt to build a model that includes both 3D structural information and spectral information. As a result, the growth condition of plants, their environment, and their precise locations can be monitored at the same time, making vegetation analysis more complete.
A multi-band pass filter array was proposed and designed for short-wave infrared applications. The central wavelengths of the multi-band pass filters are located at about 905 nm, 950 nm, 1055 nm, and 1550 nm. In the simulation of an optical interference band-pass filter, high spectral performance (a high transmittance ratio between the pass band and stop band) relies on (1) the index gap between the selected high- and low-index film materials, with a larger gap correlated with higher performance, and (2) a sufficient number of repeated periods of high/low-index thin-film layers. Once the high and low refractive index materials are determined, spectral performance is improved by increasing the number of repeated periods; consequently, the total film thickness increases rapidly. In some cases, a large total film thickness is difficult to process in practice, especially when photolithography lift-off is incorporated: the maximal thickness of photoresist that can be lifted off bounds the total film thickness of the band-pass filter. For short-wave infrared applications in the wavelength range from 900 nm to 1700 nm, silicon was chosen as the high refractive index material. Unlike the dielectric materials used in the visible range, silicon has higher absorptance in the visible range but high transmission in the short-wave infrared. In other words, designing band-pass filters with silicon as the high refractive index film can not only achieve better spectral performance than conventional high-index materials such as TiO2 or Ta2O5, but also reduce the total film thickness, and hence the material cost, to roughly half of that required with TiO2. Through simulation and several experimental trials, a total film thickness below 4 um proved practicable and reasonable. The filters were fabricated with a dual electron-gun deposition system with ion-assisted deposition after the lithography process. After repeating the lithography and deposition process four times and applying a black matrix coating, the optical device was completed.
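A minimal transfer-matrix sketch of the kind of simulation described above, assuming lossless, non-dispersive indices for silicon and SiO2 and a toy quarter-wave Fabry-Perot stack centered at 1550 nm; the fabricated multi-band design is more elaborate.

```python
import numpy as np

def stack_spectrum(n_layers, d_layers, wavelengths, n_in=1.0, n_sub=1.5):
    """Reflectance and transmittance of a thin-film stack at normal incidence."""
    R, T = [], []
    for lam in wavelengths:
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):
            delta = 2 * np.pi * n * d / lam
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub])
        R.append(abs((n_in * B - C) / (n_in * B + C)) ** 2)
        T.append(4 * n_in * n_sub / abs(n_in * B + C) ** 2)
    return np.array(R), np.array(T)

# Toy single-cavity band-pass at 1550 nm: (HL)^3 H  LL  H (LH)^3
lam0, nH, nL = 1550.0, 3.5, 1.45            # nm; assumed indices for a-Si and SiO2
qw = lambda n: lam0 / (4 * n)               # quarter-wave layer thickness
H, L, spacer = (nH, qw(nH)), (nL, qw(nL)), (nL, 2 * qw(nL))
layers = [H, L] * 3 + [H, spacer, H] + [L, H] * 3
n_list, d_list = zip(*layers)

wl = np.linspace(1300, 1800, 501)
R, T = stack_spectrum(n_list, d_list, wl)
print(f"transmittance at 1550 nm: {T[np.argmin(np.abs(wl - 1550))]:.3f}")
```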
A hyperspectral imaging system is proposed for an early study of skin diagnosis. Stable, high-quality hyperspectral images are important for analysis. Therefore, a light guide sleeve (LGS) was designed to be embedded in the hyperspectral imaging system. It provides a uniform light source on the object plane at a determined distance, and it shields the system from ambient light that would increase noise. To produce a uniform light source, the LGS was designed as a symmetric double-layered structure. It has light-cut structures to adjust the distribution of rays between the two layers and a Lambertian surface at the front end to improve output uniformity. In the design simulation, the uniformity of illuminance was about 91.7%; in measurements of the actual light guide sleeve, it was about 92.5%.
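A short sketch of the uniformity figure quoted above, assuming uniformity is defined as minimum over mean illuminance on the object plane; other conventions (e.g., minimum over maximum) exist, and the paper's exact definition is not stated here.

```python
# Sketch of an illuminance-uniformity check; the definition (min/mean) is an
# assumption, and the sampled plane is synthetic.
import numpy as np

def uniformity(illuminance_map):
    """illuminance_map: 2-D array of lux values sampled on the object plane."""
    e = np.asarray(illuminance_map, dtype=float)
    return e.min() / e.mean()

plane = 1000.0 + 40.0 * np.random.rand(32, 32)   # synthetic, nearly uniform plane (lux)
print(f"uniformity = {uniformity(plane):.1%}")
```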
Oral cancer is a serious and growing problem in many developing and developed countries. To improve the cancer screening procedure, we developed a portable light-emitting-diode (LED)-induced autofluorescence (LIAF) imager that contains LED excitation light sources of two wavelengths and multiple filters to capture ex vivo oral tissue autofluorescence images. Compared with conventional means of oral cancer diagnosis, the LIAF imager is a handier, faster, and more reliable solution. The compact design with a tiny probe allows clinicians to easily observe autofluorescence images of hidden areas located in concave, deep oral cavities. The ex vivo trials conducted in Taiwan present the design and prototype of the portable LIAF imager used for analyzing 31 patients with 221 measurement points. Using the normalized factor of normal tissues under the excitation source with a central wavelength of 365 nm and without the bandpass filter, the results revealed that the sensitivity was larger than 84%, the specificity was over 76%, the accuracy was about 80%, and the area under the receiver operating characteristic (ROC) curve was about 87%. These facts show that LIAF spectroscopy has potential for ex vivo diagnosis and noninvasive examination of oral cancer.
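The screening statistics quoted above can be computed from per-point labels and prediction scores as in the sketch below; the data and threshold are synthetic, not the trial measurements.

```python
# Minimal sketch of computing sensitivity, specificity, accuracy, and ROC AUC.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])          # 1 = lesion, 0 = normal
score  = np.array([0.9, 0.8, 0.3, 0.7, 0.4, 0.2, 0.6, 0.35, 0.85, 0.1])
y_pred = (score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("accuracy   :", (tp + tn) / len(y_true))
print("ROC AUC    :", roc_auc_score(y_true, score))
```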
Oral cancer is one of the serious and growing problems in many developing and developed countries. Simple oral visual screening by a clinician can reduce oral cancer deaths by 37,000 annually worldwide. However, conventional oral examination, with visual inspection and palpation of oral lesions, is not an objective and reliable approach for oral cancer diagnosis; it may delay hospital treatment for oral cancer patients or allow the cancer to progress out of control to a late stage. Therefore, a device for oral cancer detection is developed for early diagnosis and treatment. A portable LED-induced autofluorescence (LIAF) imager has been developed by our group. It contains LED excitation light of multiple wavelengths and a rotary filter ring with eight channels to capture ex vivo oral tissue autofluorescence images. Compared with other devices for oral cancer diagnosis, the LIAF imager has an L-shaped probe that fixes the object distance, blocks the effect of ambient light, and makes it possible to observe blind spots in the deep region between the gums (gingiva) and the lining of the mouth. Besides, the multiple LED excitation wavelengths can induce multiple autofluorescence signals, and the rotary filter ring with eight channels can capture spectral images in multiple narrow bands. The prototype of the portable LIAF imager has been applied in clinical trials for some cases in Taiwan, and the clinical trial images under specific excitation show significant differences between normal tissue and lesion tissue in these cases.
The pupil response to light can reflect various diseases related to physiological health. Pupillary abnormalities may be caused by autonomic neuropathy, glaucoma, diabetes, genetic diseases, and high myopia. In the early stage of neuropathy, the condition is often asymptomatic and difficult for ophthalmologists to detect. In addition, the position of an injured nerve can lead to unsynchronized pupil responses between the two eyes. In this study, we designed a pupillometer to measure the binocular pupil response simultaneously. It uses LEDs of different wavelengths, such as white, red, green, and blue, to stimulate the pupil and records the process. The pupillometer mainly contains two systems. One is the image acquisition system, which uses two camera modules with the same external trigger signal to capture images of the pupils simultaneously. The other is the illumination system, which uses boost converter ICs and LED driver ICs to supply constant current to the LEDs and maintain consistent luminance in each experiment, reducing experimental error. Furthermore, four infrared LEDs are arranged near the stimulating LEDs to illuminate the eyes and increase image contrast for image processing. In our design, we successfully implemented synchronized image acquisition at a sampling rate of 30 fps and a stable illumination system for precise experimental measurement.
The types of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influences medical diagnoses. To calibrate the color difference caused by the optical system, this study used a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of the RICE. The color gamut was also measured with a spectrometer to obtain high-precision color information, and the results obtained with both methods were compared. Subsequently, two color-correction methods, polynomial transformation and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference caused by the optical system in the RICE was 21.45±1.09. With the proposed polynomial transformation, the color difference was reduced effectively to 1.53±0.07; with the proposed conformal mapping, it was further reduced to 1.32±0.11, a color difference imperceptible to the human eye because it is below 1.5. Real-time color correction was then achieved by combining this algorithm with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.
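A minimal sketch of a polynomial color-correction step in the spirit of the method above: a second-order polynomial mapping from captured RGB to reference RGB is fitted by least squares. The polynomial order, color space, and data are assumptions for illustration.

```python
import numpy as np

def poly_features(rgb):
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.stack([ones, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=1)

def fit_color_correction(measured_rgb, reference_rgb):
    A = poly_features(measured_rgb)
    coeffs, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)   # (10, 3) matrix
    return coeffs

def correct(rgb, coeffs):
    return poly_features(rgb) @ coeffs

# Synthetic "color chart": 24 patches with a known distortion applied
reference = np.random.rand(24, 3)
measured = 0.8 * reference + 0.05 * reference**2 + 0.02
M = fit_color_correction(measured, reference)
print("max error after correction:", np.abs(correct(measured, M) - reference).max())
```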
Image fusion is the combination of two or more images into one image. The fusion of multi-band spectral images has been used in many applications, such as thermal systems, remote sensing, and medical treatment. The images are taken with different imaging sensors. If the sensors capture images through different optical paths at the same time, they are located at different positions, which makes image registration more difficult because the images have different fields of view (F.O.V.), different resolutions, and different view angles. It is therefore important to establish the relationship between viewpoints in one image and the other image. In this paper, we focus on the problem of image registration for two non-pinhole sensors. The affine transformation between the 2-D image and the 3-D real world can be derived from the geometrical optics of the sensors; in other words, the geometrical affine transformation between the two images is derived from the intrinsic and extrinsic parameters of the two sensors. According to this affine transformation, the overlap of the fields of view in the two images can be calculated and the two images resampled to the same resolution. Finally, we construct the image registration model from the mapping function. It merges images from different imaging sensors that absorb different wavebands of the electromagnetic spectrum at different positions at the same time.
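The sketch below illustrates the geometric mapping idea with idealized pinhole cameras (the paper treats non-pinhole sensors): a pixel in sensor A is back-projected at an assumed depth, transformed with the extrinsic parameters, and re-projected into sensor B. All matrices are illustrative, not calibrated values.

```python
import numpy as np

def pixel_a_to_b(uv_a, K_a, K_b, R, t, depth):
    """uv_a: pixel in camera A; R, t: pose of B relative to A; depth: Z in A (same units as t)."""
    uv1 = np.array([uv_a[0], uv_a[1], 1.0])
    xyz_a = depth * (np.linalg.inv(K_a) @ uv1)      # back-project to 3-D in A's frame
    xyz_b = R @ xyz_a + t                            # transform into B's frame
    uvw = K_b @ xyz_b                                # project into B's image
    return uvw[:2] / uvw[2]

K_a = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
K_b = np.array([[400, 0, 160], [0, 400, 120], [0, 0, 1]], dtype=float)   # lower-resolution sensor
R, t = np.eye(3), np.array([0.05, 0.0, 0.0])                              # 5 cm baseline (assumed)
print(pixel_a_to_b((320, 240), K_a, K_b, R, t, depth=1.0))
```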
In recent years, transparent displays have become an emerging topic in display technology, with applications in many fields such as mobile devices and shopping or advertising windows. The electrowetting display (EWD) is one kind of potential transparent display technology, with the advantages of high transmittance, fast response time, high contrast, and rich color from a pigment-based oil system. In the mass production of electrowetting displays, oil defects should be found by an automated optical inspection (AOI) detection system, which is useful for determining panel defects for quality control. Based on our group's research, we propose a mechanism for an AOI detection system that detects different kinds of oil defects, caused by oil overflow or material deterioration after oil coating or driving. We tested our mechanism with a 6-inch electrowetting display panel from ITRI, using an Epson V750 scanner at 1200 dpi resolution. Two AOI algorithms were developed: a high-speed method and a high-precision method. With the high-precision method, oil jumping and non-recovered pixels can be detected successfully. This AOI detection mechanism can be used to evaluate oil uniformity in the EWD panel process, and in the future it can be used in quality control of panel manufacturing for mass production.
The difference in spectral distribution between lesioned epithelial cells and normal cells under excited fluorescence is one basis for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing LED excitation light of multiple wavelengths and multiple filters to capture ex vivo oral tissue autofluorescence images. Our portable system for oral cancer detection has a probe in front of the lens to fix the object distance. The probe is cone-shaped, which makes it inconvenient for doctors to capture oral images at an appropriate viewing angle in front of the probe. Therefore, an L-shaped probe containing a mirror is proposed so that doctors can capture images at the right angle without requiring subjects to open their mouths excessively. Besides, a glass plate is placed in the probe to prevent liquid from entering the probe body, but light reflected directly from the glass plate causes light spots in the images. We therefore place the glass plate in front of the LEDs to avoid the light spots: when the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate are prevented. Experiments show that images captured with the new probe, with the glass plate placed at the back end of the probe, contain no light spots.
Intraocular pressure (IOP) is generally used to diagnose or track glaucoma because it is one of the physiological parameters associated with glaucoma, but IOP is not easy to measure consistently under different measurement conditions. Besides, diabetes is associated with diabetic autonomic neuropathy (DAN). The pupil size response may provide an indirect probe of neuronal pathways, so an abnormal pupil response may be related to DAN. Hence, an infrared videopupillography is needed for tracking glaucoma and exploring the relation between pupil size and DAN. Our previous research proposed an infrared videopupillography to monitor the pupil size under different light stimuli in a dark room. That portable infrared videopupillography contains a camera, a beam splitter, visible-light LEDs for stimulating the eyes, and infrared LEDs for illuminating the eyes, and it can be mounted on any eyeglass frame. However, it can be adjusted in only two dimensions, so we cannot zoom in or out on the eyes. Moreover, the pupil diameter curves were not smooth but jagged because of light spots, long eyelashes, and blinks. Therefore, we redesigned the optical path of the device to allow adjustment in three dimensions, so that we can zoom in on the eye to increase resolution and avoid the LED light spots. The light spots can be eliminated by setting the distance between the IR LEDs and the CCD appropriately. This device has a smaller volume and a lower price than our previous videopupillography. We hope the new infrared videopupillography proposed in this paper can achieve early detection of autonomic neuropathy in the future.
Electrowetting display (EWD) technology is an important way to make traditional displays work more efficiently. When a driving voltage is applied, the aperture ratio of the ink reaches 75% and the transmittance can reach 60%. Furthermore, EWD technology has advantages such as high transmittance, high switching speed, good color performance, and low power consumption, which advance the development of transparent displays. However, because of diffraction caused by the periodic pixel structure, when users observe a background object through the transparent display, the transmitted image is blurred. In this paper, we first identified the problem by simulation and constructed an optical model. To suppress the diffraction, we use a micro lens array to prevent rays from striking the pixel microstructure, so that destructive and constructive interference is not produced and the diffraction effect is reduced. The micro lens array keeps the light away from the outer frame of the EWD pixels. Simulations were performed at different distances, and the diffraction width was reduced to 91% of the original. In the future, this concept can be applied to the transmitted images of other transparent displays.
Recently, hyperspectral imaging (HSI) systems, which can provide 100 or more wavelengths of emission autofluorescence measures, have been used to delineate more complete spectral patterns associated with certain molecules relevant to cancerization. Such a spectral fingerprint may reliably correspond to a certain type of molecule and thus can be treated as a biomarker for the presence of that molecule. However, the outcomes of HSI systems can be a complex mixture of characteristic spectra of a variety of molecules as well as optical interferences due to reflection, scattering, and refraction. As a result, the mixed nature of raw HSI data might obscure the extraction of consistent spectral fingerprints. Here we present the extraction of the characteristic spectra associated with keratinized tissues from the HSI data of tissue sections from 30 oral cancer patients (31 tissue samples in total), excited at two different wavelength ranges (330 to 385 and 470 to 490 nm), using independent and principal component analysis (ICA and PCA) methods. The results showed that for both excitation wavelength ranges, ICA was able to resolve much more reliable spectral fingerprints associated with the keratinized tissues for all the oral cancer tissue sections with significantly higher mean correlation coefficients as compared to PCA (p<0.001).
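A toy reproduction of the ICA-versus-PCA comparison on synthetic mixtures of two spectral components; scikit-learn's FastICA and PCA stand in for the paper's analysis, and the spectra are Gaussian stand-ins rather than measured autofluorescence.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
wavelengths = np.linspace(500, 700, 200)
s1 = np.exp(-((wavelengths - 560) / 20.0) ** 2)        # "keratin-like" fingerprint (assumed shape)
s2 = np.exp(-((wavelengths - 630) / 35.0) ** 2)        # second tissue component (assumed shape)
S = np.stack([s1, s2])                                  # (2, bands)

A = rng.random((500, 2))                                # per-pixel mixing weights
X = A @ S + 0.01 * rng.standard_normal((500, 200))      # observed pixel spectra

ica = FastICA(n_components=2, random_state=0).fit(X)
ica_spectra = ica.mixing_.T                             # estimated component spectra (2, bands)
pca_spectra = PCA(n_components=2).fit(X).components_    # principal spectral loadings (2, bands)

best_ica = max(abs(np.corrcoef(c, s1)[0, 1]) for c in ica_spectra)
best_pca = max(abs(np.corrcoef(c, s1)[0, 1]) for c in pca_spectra)
print(f"correlation with the true fingerprint: ICA {best_ica:.3f}, PCA {best_pca:.3f}")
```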
This study presents a portable multispectral imaging system that can acquire images at specific spectral bands in vivo for oral cancer diagnosis. According to the research literature, the autofluorescence of cells and tissue has been widely applied to diagnose oral cancer, since the spectral distribution of the excited fluorescence differs between lesioned epithelial cells and normal cells. We have developed hyperspectral and multispectral techniques for oral cancer diagnosis over three generations, and this research is the third generation. The excitation and emission spectra used for diagnosis were obtained in the first-generation research. The portable system for oral cancer detection is modified from an existing handheld microscope. A UV LED illuminates the surface of the oral cavity and excites the cells to produce fluorescence. The image passes through the central channel, unwanted spectral components are removed by the selected filter, and the image is focused by the focusing lens onto the image sensor. Therefore, we can acquire images at specific wavelengths via the fluorescence reaction. The specificity and sensitivity of the system are 85% and 90%, respectively.
Glaucoma is generally diagnosed or tracked by intraocular pressure (IOP) because IOP is one of the physiological parameters associated with glaucoma, but measuring IOP consistently under different measurement conditions is not easy. An infrared videopupillography is an apparatus that monitors the pupil size in an attempt to bypass direct IOP measurement. This paper proposes an infrared videopupillography to monitor the pupil size under different light stimuli in a dark room. The portable infrared videopupillography contains a camera, a beam splitter, visible-light LEDs for stimulating the eyes, and infrared LEDs for illuminating the eyes. It is lighter and smaller than present products, can be adjusted for different locations of different eyes, and can be mounted on any eyeglass frame. A pupil-size analysis program evaluates the pupil diameter by image correlation. In our experiments, the pupil diameter curves were not smooth but jagged, caused by light spots, long eyelashes, and blinks. In the future, we will improve the pupil-size analysis program and seek an approach to eliminate the LED light spots. We hope the infrared videopupillography proposed in this paper can serve as a measuring platform for exploring the relations between different diseases and the pupil response.
The holographic data storage system (HDSS) is a page-oriented storage system with the advantages of large capacity and high speed; page-oriented recording breaks with the tradition of one-point optical recording. When a signal image is retrieved from the storage material in an HDSS, various noise sources degrade the image, making it difficult to retrieve the data from the image with a thresholding method. To improve on the thresholding method, a recognition method based on structural similarity is proposed to replace it in the HDSS. In the recognition method, the received image is compared with reference images using the structural similarity metric to find the reference image most similar to the received image. In the experiment, the bit error rate (BER) obtained with the recognition method is 26% lower than with the thresholding method. Because strong effects such as non-uniform intensity and strong speckle still influence the received image, the recognition method is only slightly better than the thresholding method. In the future, these strong effects will be reduced to improve the quality of the received image, and the recognition method may then perform far better than the thresholding method.
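The sketch below illustrates the recognition step in simplified form: the received page is compared against reference patterns with the SSIM index and decoded to the most similar one. The binary pages and noise are synthetic stand-ins for holographic readouts, and scikit-image's SSIM stands in for the paper's structural similarity computation.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
references = {sym: rng.integers(0, 2, (32, 32)).astype(float) for sym in "0123"}

true_sym = "2"
received = references[true_sym] + 0.4 * rng.standard_normal((32, 32))   # noisy readout

scores = {sym: ssim(received, ref, data_range=received.max() - received.min())
          for sym, ref in references.items()}
decoded = max(scores, key=scores.get)
print("decoded symbol:", decoded, "scores:", {k: round(v, 3) for k, v in scores.items()})
```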
Current capsule endoscopes use one camera to capture images of the intestinal surface. They can observe an abnormal point but cannot provide complete information about it. Using two cameras can generate 3D images, but the visual plane changes while the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images, 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time, increasing the viewing range up to 2.99 times that of a two-camera system. Combined with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.
This study concerns the illumination system in a radial imaging capsule endoscope (RICE). Uniformly illuminating the object is difficult because the intensity of the light from the light-emitting diodes (LEDs) varies with angular displacement. Light emitted from the surface of an LED first encounters the cone mirror, from which it is reflected, before passing through the lenses to the complementary metal oxide semiconductor (CMOS) sensor. Light that is strongly reflected from the transparent view window (TVW) propagates again to the cone mirror, is reflected, and passes through the lenses to the CMOS sensor. These two phenomena cause overblooming on the image plane, resulting in nonuniform illumination and reduced image quality. In this work, optical design software was utilized to construct a photometric model for the optimal design of the LED illumination system. Based on the original RICE model, this paper proposes an optimal design that increases the illumination uniformity in the RICE from its original value of 0.128 to 0.69.
This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object has a shape that is similar to a cylinder, such that a triaxial platform can be used to push the RICE into the sample and capture radial images. Then four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69 compared to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images that were captured in the living animal experiment. This method is very attractive because unlike the other methods, in which two lenses are required to reconstruct the geometrical image, RICE uses only one lens and one mirror.
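As a simplified illustration of the correlation-based stitching, the sketch below finds the overlap between two image strips by maximizing the Pearson correlation coefficient; the strips are synthetic, and the deformation-processing and PSNR-evaluation stages are not reproduced.

```python
import numpy as np

def best_offset(left, right, max_shift=40):
    """Slide the strips over each other and pick the overlap with the highest Pearson r."""
    best, best_r = 0, -1.0
    for shift in range(1, max_shift + 1):
        a = left[:, -shift:].ravel()        # right edge of the left strip
        b = right[:, :shift].ravel()        # left edge of the right strip
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best, best_r = shift, r
    return best, best_r

rng = np.random.default_rng(2)
scene = rng.random((64, 160))
left, right = scene[:, :100], scene[:, 80:]         # 20-pixel true overlap
shift, r = best_offset(left, right)
print(f"estimated overlap = {shift} px (true 20), r = {r:.3f}")
```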
Currently, cancer is examined by diagnosing the pathological changes of a tumor. If cancer examination could identify the tumor before the cells undergo pathological changes, the cure rate of cancer would increase. This research develops a human-machine interface for a hyperspectral microscope. The hyperspectral microscope can scan a specific area of cells and record the spectrum and intensity data, which are helpful for diagnosing tumors. This study finds that the hyperspectral images have two intensity peaks, at 550 nm and 700 nm, and one lower point at 640 nm between the two peaks. To analyze the hyperspectral images, the intensity at the 550 nm peak is divided by the intensity at the 700 nm peak. Finally, we determine the detection accuracy using a Gaussian distribution: the accuracy reaches 89% for normal cells and 81% for cancer cells.
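A toy sketch of the ratio-based classification described above: the 550 nm / 700 nm peak intensity ratio is modeled as a Gaussian for each class and samples are classified by the higher likelihood. All distributions and numbers are synthetic assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
normal_ratios = rng.normal(1.4, 0.15, 200)       # assumed ratio distribution, normal cells
cancer_ratios = rng.normal(1.0, 0.15, 200)       # assumed ratio distribution, cancer cells

mu_n, sd_n = normal_ratios.mean(), normal_ratios.std()
mu_c, sd_c = cancer_ratios.mean(), cancer_ratios.std()

def classify(ratio):
    return "normal" if norm.pdf(ratio, mu_n, sd_n) > norm.pdf(ratio, mu_c, sd_c) else "cancer"

acc_normal = np.mean([classify(r) == "normal" for r in normal_ratios])
acc_cancer = np.mean([classify(r) == "cancer" for r in cancer_ratios])
print(f"accuracy: normal {acc_normal:.0%}, cancer {acc_cancer:.0%}")
```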
This article focuses on image processing for the radial imaging capsule endoscope (RICE). First, the RICE was used to capture images of intestines obtained from a pig. The captured images were blurred because the RICE has aberration problems at the image center, and low illumination uniformity further degraded the image quality. Image processing can be used to address these problems: images captured at different times are connected using the Pearson correlation coefficient algorithm, and color temperature mapping is used to improve the discontinuity in the connection regions.
Until now, cancer has been examined by diagnosing the pathological changes of a tumor. If cancer examination could identify the tumor before the cells undergo pathological changes, the cure rate of cancer would increase. This research develops a human-machine interface for a hyperspectral microscope. The hyperspectral microscope can scan a specific area of cells and record the spectrum and intensity data, which are helpful for diagnosing tumors. This research aims to develop a new system and a human-machine interface to control the hyperspectral microscope. The interface can control the motor speed, the hyperspectral exposure time, real-time focusing, and fluorescence imaging, and it records the spectral intensity and position data.
KEYWORDS: Endoscopes, Light emitting diodes, Mirrors, Intestine, 3D image reconstruction, Monte Carlo methods, Geometrical optics, Light sources, LED lighting, Endoscopy
This paper investigates the illumination system in a ring-field capsule endoscope. It is difficult to obtain uniform illumination on the observed object because the light intensity of an LED changes with angular displacement, following its luminous intensity distribution curve. We therefore use the optical design software Advanced Systems Analysis Program (ASAP) to build a photometric model for the optimal design of the LED illumination system in the ring-field capsule endoscope. With this optimal design, the illumination uniformity in the ring-field capsule endoscope is improved from the original 0.128 to 0.603, which greatly improves the image quality of the ring-field capsule endoscope.
KEYWORDS: Video, Image quality, Human vision and color perception, Factor analysis, Image analysis, Image processing, Video processing, Image quality standards, Wavelets, Signal processing
Several estimative factors of image quality have been developed to approximate human perception objectively [1-3]. We apply these estimative factors to systematically distorted videos and analyze the relationships among them. Several types and weights of noise were applied to the COSME standard video, and the image quality estimative factors MSE (Mean Square Error), SSIM (Structural SIMilarity), CWSSIM (Complex Wavelet SSIM), PQR (Picture Quality Ratings), and DVQ (Digital Video Quality) were evaluated. The noise types include white noise, blur, luminance change, etc. In the results, the CWSSIM index has higher sensitivity to image structure and can rank distorted videos of the same noise type at different levels. PQR behaves similarly to CWSSIM, but its ratings are banded together; the SSIM index divides the noise types into two groups, and DVQ has a linear relationship with MSE on a logarithmic scale.
KEYWORDS: Color reproduction, Eye, Image quality, Electrical engineering, Medicine, Visual system, Visualization, Digital image processing, Current controlled current source, 3D image processing
An index for evaluating the ability of color reproduction is required. The color distribution index (CDI) was proposed to evaluate a display's ability to reproduce the color distribution in the CIE Lu'v' color space. A cell of just-noticeable difference (JND) for luminance and chromaticity (u'v') was proposed to determine whether the reproduced colors lie within a given region of the display's color volume. The human eye can perceive fewer colors at low luminance; however, the chromaticity (u'v') JND scale at low luminance is the same as at other luminances, so the CDI is distorted at low luminance. In this paper, to account for perceptible vision at low luminance, we replace the chromaticity (u'v') JND with a chromaticity (a*b*) JND and discuss the color distribution in the CIE L*a*b* color space. We find that the CDI at low luminance is higher in the CIE L*a*b* color space than in the CIE Lu'v' color space, and that different gamma curves and different bit depths also affect the CDI. As displays keep approaching 100% true color reproduction, such an index for evaluating the ability of color reproduction is required.
Generally, color measurement instruments can be divided into spectrophotometers and color meters. The former use a prism or grating to separate the light and can achieve high accuracy, but at a higher price. The latter use color filters, but provide no spectral information. This article establishes a color measuring system and uses an eigen-spectrum method with two light sources to calibrate the spectrum. The measuring system includes tristimulus sensors made with color filters, and a tungsten lamp and a xenon lamp are used as light sources. The advantages of this measuring system are higher accuracy and lower cost. The eigen-spectrum method can calibrate the spectrum with few eigenvectors. The method uses singular value decomposition to obtain the basis functions of a spectrum set, which can be obtained by measurement. Because the spectrum set covers 380 nm to 780 nm, the eigenvectors are obtained at 1 nm intervals from 380 nm to 780 nm, and in general the color spectrum can be reconstructed with few eigenvectors. The color difference in the L*a*b* color space is reduced from 31.2398 to 2.48841, and the spectral information is reconstructed.
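A minimal sketch of the eigen-spectrum idea: the SVD of a set of reflectance spectra sampled from 380 to 780 nm yields a small basis from which any spectrum can be reconstructed with a few coefficients. The spectra here are synthetic smooth curves, not the measured set.

```python
import numpy as np

wavelengths = np.arange(380, 781)                                  # 401 samples, 1 nm steps
rng = np.random.default_rng(4)
centers = rng.uniform(420, 740, 60)
spectra = np.array([np.exp(-((wavelengths - c) / 60.0) ** 2) for c in centers])

mean_spec = spectra.mean(axis=0)
U, s, Vt = np.linalg.svd(spectra - mean_spec, full_matrices=False)
k = 3
basis = Vt[:k]                                                     # k eigen-spectra

test = np.exp(-((wavelengths - 600) / 60.0) ** 2)                  # unseen spectrum
coeffs = basis @ (test - mean_spec)                                # project onto the basis
recon = mean_spec + coeffs @ basis
print(f"rms reconstruction error with {k} eigenvectors: {np.sqrt(np.mean((recon - test)**2)):.4f}")
```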
The purpose of a color measuring instrument is to judge color information by a scientific method, which may replace human eyes. Generally, there are two kinds of color measuring instruments: spectrophotometers and color meters. The former measure the spectrum by using a prism or grating to separate the light; this achieves high accuracy but at a higher price. The latter obtain tristimulus values from color filters, but provide no spectral information. This article establishes a color measuring system and uses an eigenspectrum method to correct the average inaccuracy. The measuring system includes tristimulus sensors made with color filters and a xenon lamp as the light source. The advantages of this measuring system are higher accuracy and lower cost. The eigenspectrum method can correct the average inaccuracy with few eigenvectors, which saves time. The method uses singular value decomposition to obtain the basis functions of a spectrum set, which can be obtained by measurement. Because the spectrum set covers 380 nm to 780 nm, the eigenvectors are obtained at 1 nm intervals from 380 nm to 780 nm, and in general the color spectrum can be reconstructed with few eigenvectors. The established color measuring system, which has three sensors and uses a xenon lamp as the light source, acquires the color spectral reflectance. The eigenspectrum method reduces the average color difference in the L*a*b* color space from 31.2398 to 4.8401 and reconstructs the spectral information.
KEYWORDS: Black bodies, Light sources, Color difference, CIE 1931 color space, Light sources and illumination, Body temperature, Curium, RGB color model, LCDs, Manufacturing
Color temperature (CT) conversion of a tri-primary color display from one white point to another on the Planckian locus with maximal brightness has been proposed. However, whether converting an original white point to another white point on the isotemperature line enlarges the maximal brightness more than converting it to another white point on the Planckian locus needs to be determined. This paper proposes a new algorithm that enlarges the maximal brightness by applying the center-of-gravity method of color mixing within the acceptable color difference range while the CT is converted. From the prior study, we find that the apexes of the color gamut boundary move along the center-of-gravity line of the primaries as the total brightness varies, where the center-of-gravity line of the primaries is formed by the color points mixed from two or more full primary colors and one partial primary color. In the CIE 1931 color space, the color gamut boundary, expanding from the white point as the total brightness decreases, touches the isotemperature line with its apexes. Therefore, the best point for CT conversion of a tri-primary color display with greater maximal brightness is determined by the isotemperature line and the center-of-gravity line of the primaries. Furthermore, the theory is extended to multi-primary color displays. Lastly, simulations confirm that converting a white point to another on the isotemperature line enlarges the maximal brightness more than converting it to another on the Planckian locus.
A new feedback readout circuit of microbolometers for sensing radiant power is proposed in this paper. Because of the excellent thermal characteristics of recent microstructures for infrared applications, the readout circuits of these microsensors must consider not only responsivity but also offset voltage cancellation, digitization, and signal-to-noise ratio. Although the Wheatstone bridge readout circuit has been widely used in resistive thermal sensor readout for several decades, its nonlinear output voltage acts as an offset voltage, and its digitization and signal-to-noise ratio can be unsatisfactory for microbolometer applications. Hence, we present a feedback readout that optimizes these key factors simultaneously and increases the responsivity without any layout modification of the bridge structure on the infrared focal plane array (IRFPA) microbolometer chip. The results reveal that a balance parameter equal to 0.5, rather than the intuitive value of unity, is the best condition for these requirements. Compared with the traditional Wheatstone bridge readout circuit, the feedback readout circuit improves the responsivity by a factor of 2.86, cancels the offset voltage exactly, obtains a very large OVRR, and reduces the readout circuit noise by 5.6 dB. These important results significantly improve the performance of the readout circuit and will speed up the commercialization of infrared focal plane arrays of microbolometers.
Enhancing luminance and extending lamp lifetime are two key difficulties for existing projectors. This paper proposes an optimized, novel dual-lamp illumination module whose design is based not only on easy exchange with the lamps of existing projector systems, but also on imposing minimal visual burden on students in conventional classrooms. Translating these requirements into engineering specifications, the illumination module should satisfy the following conditions: the total flux on the screen is larger than 1200 lm, the lifetime is longer than 2 years, and the lamp module is easily attached to the optical engines of known mass-produced projectors. To achieve these conditions, the ray aberrations of the burners and the structure of the dual-lamp module are presented and analyzed for optimization. Simulation and experimental results show that the luminance of the module can be increased by about 1.47 times over that of a conventional projector under the stated requirements. Finally, a collateral-type dual-lamp module is also proposed for further improvement; simulation results show a further increase of 1.58 times in luminance over the single-lamp module and 11% over the dual-lamp module.
Research on the depth of field (DOF) for a capsule endoscope is important because the object plane of the intestine or the stomach is a curved surface of "<" or "c" shape. The depth of field depends on the following factors: focal length, circle of confusion, aperture, and subject distance. The first three factors were optimized for a wide view angle in our prior paper and are determined by the chosen sensor, so they do not work against the depth of field. The last factor, subject distance, offers more freedom for enlarging the depth of field. The depth of field is the range between the near and far limits within which the image is acceptably sharp; the fraction of the depth of field behind the focus distance is always larger than the fraction in front of it. The depth of field changes with object distance and increases as the object distance increases, but the object distance in a capsule endoscope design is short. Setting the object distance in front of the dome uses the depth of field more efficiently than setting it at the dome top. Therefore, there is an appropriate object distance design that makes efficient use of the depth of field for inspecting the curved surfaces of the intestine and stomach. With a wide and efficiently used depth of field, more visual information about the digestive system is obtained and can be compared easily to diagnose the patient's condition.
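The thin-lens depth-of-field relations discussed above can be worked through as in the sketch below; the focal length, f-number, and circle of confusion are illustrative values, not the capsule design parameters.

```python
# Standard thin-lens DOF formulas: hyperfocal distance H = f^2/(N*c) + f,
# near limit = s(H-f)/(H+s-2f), far limit = s(H-f)/(H-s) for s < H.
def depth_of_field(f_mm, N, coc_mm, subject_mm):
    """Return (near limit, far limit, DOF) in mm for focus distance subject_mm."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm
    near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
    far = subject_mm * (H - f_mm) / (H - subject_mm) if subject_mm < H else float("inf")
    return near, far, far - near

f, N, coc = 1.5, 2.8, 0.01                              # mm; assumed miniature-lens values
for s in (10, 20, 40):                                   # subject distances in mm
    near, far, dof = depth_of_field(f, N, coc, s)
    print(f"focus at {s:5.1f} mm -> in focus from {near:5.1f} to {far:6.1f} mm (DOF {dof:6.1f} mm)")
```

Running this shows both behaviors stated above: the in-focus zone behind the focus distance is larger than the zone in front of it, and the total DOF grows as the object distance increases.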
Recently, color reproduction devices have developed greatly, such as liquid-crystal displays, LCD TVs, LCD projectors, and DLP projectors. A wide color gamut is a distinguishing feature promoted by many display manufacturers. Many studies on multi-primary color displays have been proposed, but some problems remain unsolved. This study proposes a novel multi-primary projection display system using two projectors. One of the two projectors is modified by changing two dichroic mirrors inside it, and the modified projector is combined with the other to form a new six-primary color display. This study applies the equal-luminance boundary theorem to construct the gamut volume and evaluates the trade-off between gamut volume and brightness; with this method, the cut-off wavelengths of the dichroic mirrors can be determined. In the past, the images of the two projectors were pre-distorted to compensate for trapezoidal distortion in order to align them; this study instead eliminates trapezoidal distortion by using the offset of the projector. The dichroic mirrors are changed directly to maintain brightness and contrast, avoiding the reduced brightness and contrast that previously resulted from adding filters. Additionally, this study uses a reflection mirror to fold the projection path and constructs a stage to align the projected images more accurately.
Use of the capsule endoscope to inspect the digestive system, and in particular the intestine, for pathological change has recently been a great breakthrough in medical engineering. Some problems, however, need to be overcome. The field of view is not wide enough, and the image quality is not good enough. These drawbacks cause unclear and ambiguous digestive disease examinations by medical professionals. In order to solve these problems, the paper proposes a novel design for miniature lenses with a wide-angle field of view and good imaging quality. The lenses employed in the capsule endoscope would consist of a plastic aspheric lens and a glass lens, in a capsule 9.8 mm (W)×9.8 mm (L)×10.7 mm (H), associated with white LED light sources and a 256×256 CMOS array sensor of 10-µm pixel size. Experimental results prove that the lens field of view can be made as large as 86 deg, and the modulation transfer function for field of view 0 deg can be made 78.2% at 50-lp/mm spatial frequency and 53.3% at 100 lp/mm. Tolerance analysis shows that our design is feasible for manufacture, and consistent with the finished prototype.
Distortion exists in present capsule endoscope images, resulting from the confined space and the wide-angle requirement [8]. In the previous two-lens design, the optimal field of view was about 86 degrees and the MTF was about 18% at 100 lp/mm, but the distortion reached -26%. It is difficult to add another lens in the 7 mm optical path between the dome and the imaging lenses to improve the distortion. To overcome this problem, we design the optical dome itself as another optical lens. The original dome is transparent and has a uniform thickness, so it hardly refracts light. Our objective in this paper is to design the inner curvature of the dome and combine it with two aspheric imaging lenses in front of the CMOS sensor to improve the distortion while maintaining the field of view and MTF within the same capsule volume. Furthermore, the paper notes that the real object plane of the intestine is nearly a curved surface rather than an ideal flat surface. Taking these considerations into account, we design three imaging lenses with a curved object plane and obtain a field of view of about 86 degrees, an MTF of about 26% at 100 lp/mm, and a distortion improved to -7.5%. Adding the dome lens not only enhances the image quality but also maintains the tiny volume requirement.
In developing the illumination system of a capsule endoscope, it is difficult to obtain uniform illumination [1] on the observed object for several reasons: the light pattern of an LED depends sensitively on the driving current, location, and projection angle; the optical path of the LED light source is not parallel to the optical axis of the nearby imaging lenses; the strong reflection from the inner surface of the dome may saturate the CMOS sensor; and the object plane of the observed intestine is not flat. These factors induce over-blooming and deep-dark contrast in a picture and strongly distort the original image. The purpose of this article is to construct a photometric model to analyze the LED projection light pattern and, furthermore, to design a novel multiple-LED illumination system for obtaining a uniform-brightness image. Several key parameters affecting illumination uniformity are considered in the model and verified by experimental results, including the accuracy of the LED light pattern, the LED positions relative to the imaging optical axis, the number and arrangement of LEDs, and the inner curvature of the dome. The novel structure improves the uniformity from 41% to 71% and reduces the light energy loss to below 2%. This progress will help medical professionals diagnose diseases and give precise treatment based on vivid images.
A continuous-wave phase-shift laser range finder employs a novel multi-modulation-frequency method combining an undersampling analog-to-digital converter (ADC) with digital synchronous detection. This approach greatly improves the measured phase accuracy and reduces the complexity of prior schemes. The novel patented design includes one phase-locked-loop (PLL) chip to produce the multiple modulation frequencies, one analog-to-digital converter operating at a low sampling rate, and an effective algorithm to calculate the final distance, which is encoded and implemented in compact computing circuits without mixers or redundant components. The experimental results prove that a non-ambiguity range of 1.5 km is easily achieved when the modulation frequency is 0.1 MHz, and the measured accuracy approaches 2.9 mm with the same apparatus when the modulation frequency is tuned to 14.5 MHz. A detailed analysis shows that the dynamic range can reach 5.2×10^5 without a very high modulation frequency, staying below 15 MHz.
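The ranging relations quoted above follow from the sketch below: the non-ambiguity range is c/(2 f_mod), the distance follows from the measured phase, and a low modulation frequency resolves the cycle ambiguity of the high one. The phases and target distance are synthetic.

```python
import numpy as np

C = 299_792_458.0                                   # speed of light, m/s

def non_ambiguity_range(f_mod):
    return C / (2 * f_mod)

def distance_from_phase(phase_rad, f_mod):
    return C * phase_rad / (4 * np.pi * f_mod)      # within one non-ambiguity range

print(f"non-ambiguity range at 0.1 MHz : {non_ambiguity_range(0.1e6)/1e3:.2f} km")
print(f"non-ambiguity range at 14.5 MHz: {non_ambiguity_range(14.5e6):.2f} m")

# Coarse/fine combination: the low frequency resolves the ambiguity of the high one
true_d = 812.345                                     # m, synthetic target distance
phase_lo = 4 * np.pi * 0.1e6 * true_d / C
phase_hi = (4 * np.pi * 14.5e6 * true_d / C) % (2 * np.pi)
coarse = distance_from_phase(phase_lo, 0.1e6)
nar_hi = non_ambiguity_range(14.5e6)
n_cycles = round((coarse - distance_from_phase(phase_hi, 14.5e6)) / nar_hi)
fine = n_cycles * nar_hi + distance_from_phase(phase_hi, 14.5e6)
print(f"coarse estimate {coarse:.2f} m, refined estimate {fine:.4f} m")
```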
In recent years, using the capsule endoscope to inspect pathological changes of the digestive system and intestine has been a great breakthrough in medical engineering. However, some problems need to be overcome: the field of view is not wide enough, and the image quality is not good enough. These drawbacks make the examination of digestive diseases unclear and ambiguous for medical professionals. To solve these problems, this paper presents a novel miniature lens design with a wide field of view and good imaging quality. The lenses employed in the capsule endoscope consist of a plastic aspheric lens and a glass lens, packed into a 9.8 mm (W) × 9.8 mm (L) × 10.7 mm (H) volume. Taking the white LED light source and the 256×256 CMOS sensor with 10 μm pixel size into consideration, the field of view of the lenses reaches 86 degrees and the MTF reaches 37% at a spatial frequency of 50 lp/mm. The experimental data prove that the design is consistent with the finished prototype.