This paper describes the formation and processing of images from ocular tissue. Submicron optical serial sections of the cornea and the ocular lens were obtained using both a laser scanning confocal microscope and a Nipkow disk tandem scanning microscope. The laser scanning confocal microscope used a photomultiplier tube as the detector; the real-time tandem scanning confocal light microscope used a cooled charge-coupled device (CCD) as the detector. Both confocal imaging systems are compared. The laser scanning confocal microscope used Kalman averaging to reduce image noise, while the real-time tandem scanning confocal microscope used the cooled CCD to integrate each image for 5 seconds for the same purpose. The sample was a live, enucleated rabbit eye. The cornea and the ocular lens are almost transparent and have extremely low contrast. The images obtained with the two confocal systems show submicron resolution in the image plane. These confocal light microscopes provide high-resolution, high-contrast images of living ocular tissue. The quality of the resulting confocal images rivals that obtained by electron microscopy of fixed, stained, and coated tissue specimens. Examples are given of simple digital image processing operations to alter the image quality.
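As an illustration of the noise-reduction step described above, the following minimal Python/NumPy sketch shows recursive (Kalman-style) frame averaging of the kind used by laser scanning confocal systems; the image size, noise level, and number of frames are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_average(frames):
    """Recursive (Kalman-style) running average of successive scan frames.

    Each new frame is blended into the running estimate with a gain of 1/n,
    which is equivalent to the cumulative mean but needs only one frame of
    storage -- the scheme typically used for confocal frame averaging.
    """
    estimate = np.asarray(frames[0], dtype=np.float64).copy()
    for n, frame in enumerate(frames[1:], start=2):
        gain = 1.0 / n
        estimate += gain * (np.asarray(frame, dtype=np.float64) - estimate)
    return estimate

# Simulated noisy frames of a constant, low-contrast specimen plane
rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)
frames = [truth + rng.normal(0, 20, truth.shape) for _ in range(16)]

averaged = kalman_average(frames)
print("single-frame noise :", frames[0].std())
print("averaged noise     :", (averaged - truth).std())
```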
The computational reconstruction of surface topographies from Scanning Electron Microscope (SEM) images has been extensively investigated, but fundamental image processing problems remain. Since conventional approaches adapted from general-purpose image processing have not sufficiently met the requirements in terms of resolution and reliability, the idea arose to combine different methods to obtain better results. Stereoscopy evaluates stereo pairs of images to determine the three-dimensional surface topography for those parts of the image showing sufficient texture. This provides very accurate depth information, since precisely known geometrical relations are used to determine depth from perspective shift. 'Shape from shading' determines the three-dimensional surface orientation by analyzing the local surface luminosity of the specimen. In this way 'shape from shading' provides additional depth information, allowing false stereo matches to be detected and the gaps between the stereo data to be filled with true topographical information. Results are presented showing how a combined analysis of multi-sensorial data yields improvements in the reconstructed surface topography which could not be obtained from the individual sensor signals alone.
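To make the stereoscopic step concrete, here is a small Python sketch converting a measured parallax (perspective shift) between a tilted SEM image pair into relative height; the eucentric-tilt relation and the numerical values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def height_from_parallax(parallax_px, pixel_size_um, tilt_deg, magnification=1.0):
    """Convert stereo parallax (perspective shift) into relative height.

    Uses the common eucentric-tilt relation  z = p / (2*sin(alpha/2)),
    where p is the measured parallax in specimen units and alpha is the
    tilt angle between the two SEM images.
    """
    p = parallax_px * pixel_size_um / magnification
    alpha = np.radians(tilt_deg)
    return p / (2.0 * np.sin(alpha / 2.0))

# Example: a feature shifted by 12 pixels between images taken 8 degrees apart
print(height_from_parallax(parallax_px=12, pixel_size_um=0.05, tilt_deg=8.0))
```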
In classical stereology we apply geometrical probes in 3-D by cutting the object physically into thin sections and then applying a 2-D grid to each section. Depending on the dimensionality of the grid (points, lines, surfaces) we can obtain unbiased estimates of volume, surface, and length. This process necessarily destroys the specimen. With the confocal microscope we have a tool that can non-destructively interrogate the microstructure of objects with geometrical probes of between 0 and 3 dimensions, repeatedly if indicated. This is leading to new ideas in the field of stereology, in particular for the measurement of 0-dimensional properties such as particle number, spatial distribution, and connectivity. These ideas and key references are given herein.
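As a simple example of a low-dimensional probe applied to confocal data, the following Python sketch estimates the volume fraction of a phase by point counting on a stack of optical sections; the synthetic binary stack and grid spacing are purely illustrative.

```python
import numpy as np

def volume_fraction_point_count(sections, grid_step):
    """Unbiased volume-fraction estimate by point counting (0-D probe).

    `sections` is a list of binary 2-D arrays (1 = phase of interest).
    A square point grid with spacing `grid_step` pixels is overlaid on
    each optical section; V_V is estimated as hits / total points.
    """
    hits = total = 0
    for sec in sections:
        pts = sec[::grid_step, ::grid_step]
        hits += int(pts.sum())
        total += pts.size
    return hits / total

rng = np.random.default_rng(1)
stack = [(rng.random((128, 128)) < 0.3).astype(int) for _ in range(10)]
print("estimated V_V:", volume_fraction_point_count(stack, grid_step=8))
```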
The digital processing of electron microscopic images from serial sections containing laser-induced topographical references allows a 3-D reconstruction of entire cells at a depth resolution of 30 to 40 nm using image analysis methods, as already demonstrated for Transmission Electron Microscopy (TEM) coupled with a video camera. We decided to use a Scanning Transmission Electron Microscope (STEM) to obtain higher contrast and better resolution at medium magnification. Scanning our specimens at video frequencies is an attractive and easy way to link a STEM with an image processing system, but the hysteresis of the electronic coils responsible for the magnetic deflection of the scanning electron beam induces image deformations which have to be modelled and corrected before registration. Computer algorithms developed for image analysis and processing correct the artifacts caused by the use of the STEM and by serial sectioning in order to automatically reconstruct the third dimension of the cells. They permit the normalization of the images through logarithmic processing of the original grey-level information. The automatic extraction of cell boundaries allows the image analysis steps to be linked with image synthesis methods with minimal human intervention. The surface representation and the registered images provide an ultrastructural database from which quantitative 3-D morphological parameters, as well as otherwise impossible visualizations, can be computed. This 3-D image processing, named C.A.V.U.M. for Computer Aided Volumic Ultra-Microscopy, offers a new tool for the documentation and analysis of cell ultrastructure and for 3-D morphometric studies at EM magnifications. Further, a virtual observer can be computed in such a way as to simulate a visit of the reconstructed object.
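The grey-level normalization mentioned above can be sketched as follows; this is an assumed absorbance-style log transform, written only to illustrate the idea of logarithmic processing, since the paper's exact formula is not given in the abstract.

```python
import numpy as np

def log_normalize(image, i0=None):
    """Logarithmic normalization of STEM grey levels.

    Converts transmitted intensity to an absorbance-like quantity,
    -log(I / I0), so that serial sections recorded at different beam or
    gain settings become comparable before registration.  I0 defaults to
    the brightest (background) grey level of the frame.
    """
    img = np.asarray(image, dtype=np.float64)
    if i0 is None:
        i0 = img.max()
    img = np.clip(img, 1.0, None)   # avoid log(0)
    return -np.log(img / i0)
```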
In this paper a review is first given of how confocal fluorescence microscopy in combination with digital image processing can be utilized to present three-dimensional images of microscopic specimens. Examples of different display techniques are given as well as data on typical computing speeds when using a SUN 386i workstation. In the latter part of the paper we discuss signal quality and factors limiting the thickness of the specimen volume that can be studied. Multiple-wavelength registration is also described, as well as a method for eliminating detector "cross-talk" when recording two fluorophores with overlapping emission spectra.
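The cross-talk elimination for two fluorophores with overlapping emission can be illustrated with a linear spectral-unmixing sketch; the linear mixing model and the bleed-through coefficients below are assumptions for illustration, not necessarily the method used in the paper.

```python
import numpy as np

def unmix_two_channels(ch1, ch2, bleed_1_into_2, bleed_2_into_1):
    """Remove detector cross-talk between two fluorophore channels.

    Assumes a linear mixing model in which bleed_1_into_2 is the fraction
    of fluorophore 1 detected in channel 2 and bleed_2_into_1 the converse;
    the recorded images are unmixed by inverting the 2x2 mixing matrix
    pixel-wise.  Bleed-through factors would normally be measured from
    single-labelled control specimens.
    """
    m = np.array([[1.0, bleed_2_into_1],
                  [bleed_1_into_2, 1.0]])
    minv = np.linalg.inv(m)
    f1 = minv[0, 0] * ch1 + minv[0, 1] * ch2
    f2 = minv[1, 0] * ch1 + minv[1, 1] * ch2
    return f1, f2
```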
In this paper a different way of processing fluorescence images with median-type filters is presented. The use of linear median hybrid filtering techniques in fluorescence microscopy is shown to be very promising for achieving both increased recording speed and decreased photodamage to the biological sample. It is also shown that these linear median hybrid filtering methods can be tuned for both confocal and conventional fluorescence microscopy. The parallelization of such a process is discussed.
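A minimal 1-D sketch of a linear median hybrid (LMH) filter of the family discussed here is given below; the sub-filter choice (left mean, centre sample, right mean) and window length are illustrative assumptions.

```python
import numpy as np

def lmh_filter_1d(signal, k=3):
    """Linear median hybrid (LMH) filter for a 1-D scan line.

    The output at each point is the median of three linear sub-filter
    outputs: the mean of the k samples to the left, the centre sample
    itself, and the mean of the k samples to the right.  This retains
    much of the impulse rejection of a median filter at a fraction of
    the sorting cost, which is what makes parallel, high-speed
    implementations attractive.
    """
    x = np.asarray(signal, dtype=np.float64)
    out = x.copy()
    for i in range(k, len(x) - k):
        left = x[i - k:i].mean()
        right = x[i + 1:i + k + 1].mean()
        out[i] = np.median([left, x[i], right])
    return out
```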
Near-infrared light may be detected in transillumination through several centimetres of tissue. Spectral changes in this light are routinely used for globally monitoring blood volume and oxygenation in the brain of newborn infants and for observing the enhanced vascularity surrounding tumours in the breast. The imaging problem may be identified as the inversion of strongly multiply scattered light. The reconstruction method previously proposed is an iterative one requiring a sophisticated forward model and an analysis of the ill-posedness at each stage. The experimental arrangement for studies on a cylindrical phantom takes measurements at 32 equally angularly spaced locations for each of 32 similarly arranged input locations. The phantom that has been developed allows the values of the scattering coefficient (μs), absorption coefficient (μa), and angular scattering probability function f(s,s') [direction s' -> s] to be independently controlled. Absorbing objects are placed in the phantom to produce inhomogeneous data. Experimental results are compared with both Monte Carlo and Finite Element simulations. The experimental and simulated data may be collected in both continuous and time-resolved form. The former shows a definite spatial variation with phantom inhomogeneity. The latter may afford additional information, but is limited at present by the poor signal-to-noise ratio available from the instrumentation.
A method is described for the computer simulation of quantum mottle in digital angiographic images obtained through an image intensifier (II) based system. The model corrupts a "perfect" image (one taken at high exposure levels) with Poisson distributed noise to simulate an image obtained through a lower x-ray dose. A mapping scheme is employed which effectively correlates gray level intensities at the image display to photon fluence at the front end of the II. The utility of the noise model is demonstrated by using it to simulate the effect of variable x-ray exposure conditions on an angiographic sequence. Such a sequence is valuable in the development of temporal filtering techniques for digital angiography.
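A hedged sketch of this kind of noise model: a high-exposure image is mapped to photon fluence, resampled with Poisson statistics at a reduced dose, and mapped back to grey levels. The calibration functions and dose fraction below are placeholders; the real mapping between display grey level and II-input fluence would come from the system calibration described in the paper.

```python
import numpy as np

def simulate_quantum_mottle(perfect_image, gray_to_fluence, fluence_to_gray,
                            dose_fraction=0.25, rng=None):
    """Corrupt a high-exposure ("perfect") image with Poisson noise to
    simulate a lower-dose acquisition.

    gray_to_fluence / fluence_to_gray represent the calibration between
    displayed grey level and photon fluence at the image-intensifier
    input; here they are placeholders.
    """
    rng = rng or np.random.default_rng()
    fluence = gray_to_fluence(np.asarray(perfect_image, dtype=np.float64))
    noisy_fluence = rng.poisson(fluence * dose_fraction) / dose_fraction
    return fluence_to_gray(noisy_fluence)

# Illustrative linear calibration (real systems are nonlinear)
noisy = simulate_quantum_mottle(
    np.full((64, 64), 400.0),
    gray_to_fluence=lambda g: g * 10.0,
    fluence_to_gray=lambda q: q / 10.0,
    dose_fraction=0.1,
)
print(noisy.std())
```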
This paper addresses the tomographic imaging of time-varying distributions, when the temporal variation during acquisition of the data is high, precluding Nyquist rate sampling. This paper concentrates on the open (and hitherto unstudied) problem of nonperiodic temporal variation, which cannot be reduced to the time-invariant case by synchronous acquisition. The impact of the order of acquisition of different views on the L2 norm of the image-domain reconstruction error is determined for band-limited temporal variation. Based on this analysis, a novel technique for lowering the sampling rate requirement while preserving image quality is proposed and investigated. This technique involves an unconventional projection sampling order which is designed to minimize the L2 image-domain reconstruction error of a representative test image. A computationally efficient design procedure reduces the image data to a Grammian matrix which is independent of the sampling order. Further savings in the design procedure are realized by using a Zernike polynomial series representation for the test image. To illustrate the approach, reconstructions of a computer phantom using the best and conventional linear sampling orders are compared, showing a seven-fold decrease in the error norm with the best scheme. The results indicate the potential for efficient acquisition and tomographic reconstruction of time-varying data. Applications of the techniques are foreseen in X-ray computed tomography and magnetic resonance imaging.
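One concrete piece of the design procedure is the Grammian matrix of the test-image projections, which depends only on the set of views and not on their acquisition order; a minimal sketch, with an assumed projection-array layout, is:

```python
import numpy as np

def projection_grammian(projections):
    """Grammian (inner-product) matrix of a set of projection views.

    G[i, j] = <p_i, p_j>.  Because G depends only on the set of views,
    not on the order in which they are acquired, it can be computed once
    and reused while searching over candidate sampling orders.
    `projections` is assumed to have shape (n_views, n_detector_bins).
    """
    p = np.asarray(projections, dtype=np.float64)
    return p @ p.T
```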
Noise in spin echo (SE) MRI images consists of random and structured components. Structured noise arises mainly from the non-uniformity of the B1 field. We have studied these errors using a cylindrical phantom. Random noise analysis, performed by subtracting a 4-, 9-, or 25-point smoothed image from the original image, showed small but significant differences between the three methods, though the noise was uniform over the phantom and changed little over a span of 8 months. The rf coil non-uniformity (coefficient of variation, CV) in two regions of interest (ROIs), a central area (40 cm2) and a ring outside the central region (130 cm2), was measured to be 21% and 45% respectively for the saddle coil and 1.5% and 6.0% respectively for the bird cage coil. Using the first-day flood image to correct the later images, the CVs in the two ROIs of the corrected images were 1.3% and 2.2% respectively for the saddle coil and 0.9% and 1.6% respectively for the bird cage coil.
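Two of the measurements described above are easy to state in code: the coefficient of variation within a region of interest, and a random-noise map formed by subtracting a locally smoothed image. The sketch below uses SciPy's uniform filter and maps the 9- and 25-point cases onto 3x3 and 5x5 windows; that correspondence is an assumption on our part.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roi_cv(image, roi_mask):
    """Coefficient of variation (CV) of grey levels inside an ROI mask."""
    vals = np.asarray(image, dtype=np.float64)[roi_mask]
    return vals.std() / vals.mean()

def random_noise_map(image, kernel_pts=9):
    """Random-noise estimate by subtracting a locally smoothed image.

    kernel_pts of 9 or 25 is taken to mean a 3x3 or 5x5 smoothing window.
    """
    size = {9: 3, 25: 5}.get(kernel_pts, 3)
    img = np.asarray(image, dtype=np.float64)
    return img - uniform_filter(img, size=size)
```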
This contribution describes the development of a fast two-line scan detector for digital subtraction angiography (DSA). For each line, an input phosphor screen, an image intensifier, and a photodiode array are successively coupled by a special fiber optic. A line comprises 250 pixels, each 0.5 × 0.5 mm2 in size. The vertical center-to-center distance of the two input lines is 2 mm. Different materials for the phosphor lines, such as Gd2O2S:Tb and CdWO4, were tested. The 3-stage proximity-focusing image intensifiers have a light intensity amplification of 120 W/W. Fast readout electronics enable a minimum integration time of 2 ms for the photodiode array, for which the measured dynamic range of the detector is 3000; it can be improved by increasing the readout time. Details of the existing detector are given. The tested detector was optimized for application in non-invasive coronary angiography with synchrotron radiation (System NIKOS II). However, its modular design permits optimization for other applications as well: by individual selection of the detector components, for example, the spatial resolution and the shape of the input line can be adapted to specific problems.
One of the initial steps in the analysis of 3-D/4-D images is segmentation, which entails partitioning the images into relevant subsets such as object and background. In this paper, we present a multidimensional segmentation algorithm to extract object surfaces from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans. The algorithm is formulated in the framework of a blackboard model and uses mathematical morphology. We propose generalized morphological operators (which are used as knowledge sources) for segmentation in multiple dimensions. A priori knowledge of the approximate location of the object surface is communicated to the algorithm via the definition of a search space. The algorithm uses this definition of the search space to obtain the surface candidate elements. The search-space specification reduces the computational cost and increases the reliability of the detected features.
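A greatly simplified sketch of how a morphological gradient restricted to a search space can yield surface-candidate elements is given below; the specific operators, neighbourhood size, and threshold rule are illustrative assumptions, not the paper's generalized morphological operators.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def surface_candidates(volume, search_mask, size=3, threshold=None):
    """Surface-candidate extraction with a morphological gradient,
    restricted to a user-defined search space.

    The morphological gradient (dilation minus erosion) responds at
    grey-level transitions; masking it with `search_mask` keeps only
    candidates near the expected object surface, which is the role the
    search-space definition plays in the blackboard scheme.
    """
    vol = np.asarray(volume, dtype=np.float64)
    grad = grey_dilation(vol, size=size) - grey_erosion(vol, size=size)
    grad = np.where(search_mask, grad, 0.0)
    if threshold is None:
        threshold = grad[search_mask].mean() + grad[search_mask].std()
    return grad >= threshold
```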
In our efforts to devise a semi-automated technique for extracting the left ventricular (LV) chamber from a stack of cardiac X-ray CT images, we found that individual regions within the imagery contained blurred surfaces and were corrupted by noise and other small artifacts. To reduce these degradations, we explored image enhancement techniques. While many nonlinear edge-preserving smoothing filters have been proposed for such situations in two dimensions, we have found that the most suitable technique for our three-dimensional application is the maximum homogeneity filter. Unfortunately, previous implementations of the maximum homogeneity filter were fixed and ad hoc, which limited their utility. We have developed a three-dimensional generalization of the maximum homogeneity filter. This filter preserves and sharpens region surfaces while reducing random noise and small artifacts within uniform regions. We compare this method to 3-D versions of other popular edge-preserving smoothing filters.
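For reference, a 2-D sketch of a maximum-homogeneity-style (Kuwahara-like) filter is given below: the output at each pixel is the mean of the most homogeneous of several sub-neighbourhoods containing that pixel. The quadrant decomposition and radius are illustrative; the paper's 3-D generalization extends the same idea to volumetric neighbourhoods.

```python
import numpy as np

def max_homogeneity_filter(image, radius=2):
    """Edge-preserving maximum-homogeneity filter (2-D sketch).

    For each pixel, the four quadrant neighbourhoods that contain the
    pixel at a corner are examined; the output is the mean of the most
    homogeneous (lowest-variance) quadrant.
    """
    img = np.asarray(image, dtype=np.float64)
    out = img.copy()
    r = radius
    rows, cols = img.shape
    for y in range(r, rows - r):
        for x in range(r, cols - r):
            quads = [img[y - r:y + 1, x - r:x + 1],
                     img[y - r:y + 1, x:x + r + 1],
                     img[y:y + r + 1, x - r:x + 1],
                     img[y:y + r + 1, x:x + r + 1]]
            best = min(quads, key=lambda q: q.var())
            out[y, x] = best.mean()
    return out
```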
This paper presents a new approach for extracting surface motion parameters from left ventricle (LV) data. The data were obtained using biplane (stereo) cineangiography and provided by Dr. David Smith [23]. The data set consists of the 3-D coordinates of 30 bifurcation points on the surface of the LV through several time frames. If an object undergoes rigid motion, the standard motion parameters are the translation vector and rotation matrix. These parameters are not sufficient to describe nonrigid motion, of which LV motion is an example. Hence, we define the local surface stretching as an additional motion parameter alongside global rotation and translation. The process of recovering the stretching factor from the angiography data consists of three steps. In the first step, the surface of the LV is reconstructed at each time instant. The reconstruction procedure involves converting the data into a polar coordinate system; the surface is then reconstructed by applying a relaxation (iterative averaging) algorithm in polar coordinates. In the second step we calculate the Gaussian curvature at each bifurcation point at each time instant. This is achieved by least-squares surface fitting in a window around each point of interest. The third step is the actual stretching factor recovery, which is based on a comparison of Gaussian curvatures before and after the motion. This formula was first suggested in [11]. The final results of the algorithm are the reconstructed LV surface at each time instant together with cumulative stretching curves for each given bifurcation point.
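The curvature computation in the second step and the curvature-based stretching estimate in the third step can be sketched as follows. The quadric surface fit is standard; the conformal relation K_after = K_before / s^2 used for the stretching factor is our reading of the curvature-comparison idea and may differ in detail from the formula of [11].

```python
import numpy as np

def gaussian_curvature_from_patch(points):
    """Gaussian curvature at the origin of a local surface patch.

    `points` is an (N, 3) array of neighbouring surface points in local
    coordinates (x, y in the tangent plane, z along the normal, with the
    point of interest at the origin).  A quadric
        z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + f
    is fitted by least squares; at the origin the Gaussian curvature is
        K = (4*a*c - b^2) / (1 + d^2 + e^2)^2.
    """
    pts = np.asarray(points, dtype=np.float64)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    return (4*a*c - b**2) / (1 + d**2 + e**2)**2

def stretch_factor(k_before, k_after):
    """Local stretching from the curvature ratio, assuming the conformal
    relation K_after = K_before / s**2 (an illustrative choice)."""
    return np.sqrt(k_before / k_after)
```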
An automatic digital image processing technique is presented for vasomotion analysis in the peripheral microcirculation at multiple sites simultaneously and in real time. The algorithm uses either fluorescent or bright-field microimages of the vasculature as input. The video images are digitized and analyzed on-line by an IBM RT PC. Using digital filtering and edge detection, the technique allows simultaneous diameter measurement at more than one site. The sampling frequency is higher than 5 Hz when only one site is tracked. The performance of the algorithm is tested in the hamster cutaneous microcirculation.
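The diameter measurement at a single site reduces to finding the two vessel-wall edges along an intensity profile taken across the vessel; a minimal sketch, with assumed smoothing and edge criteria rather than the paper's exact filters, is:

```python
import numpy as np

def vessel_diameter(profile, pixel_size_um, smooth=3):
    """Vessel diameter from a single intensity profile across the vessel.

    The profile is lightly smoothed, differentiated, and the distance
    between the strongest positive and negative gradients (the two wall
    edges) is returned in micrometres.  One such measurement per video
    frame gives the vasomotion trace at a chosen site.
    """
    p = np.convolve(np.asarray(profile, dtype=np.float64),
                    np.ones(smooth) / smooth, mode="same")
    grad = np.gradient(p)
    left, right = np.argmax(grad), np.argmin(grad)
    return abs(right - left) * pixel_size_um
```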
Extraction of left ventricular endocardial and epicardial boundaries from digital two-dimensional echocardiography is essential in the quantitative analysis of cardiac function. Automatic detection of these boundaries is difficult due to the poor intensity contrast and noise inherent in ultrasonic images. In this paper, we present a new approach that employs fuzzy reasoning techniques to detect the boundaries automatically. In the proposed method, the image is first enhanced by applying the Laplacian-of-Gaussian edge detector. Second, the center of the left ventricle is determined automatically by analyzing the original image. Next, a search process radiating from the estimated center is performed to locate the endocardial boundary using the zero-crossing points. After this step, the range of radii of the possible epicardial boundary is estimated by comparing high-level knowledge of intensity changes along all directions with the actual image intensity changes. The high-level knowledge of global intensity change in the image is acquired from experts in advance and is represented in the form of fuzzy linguistic descriptions and relations. Knowledge of local intensity change can therefore be deduced from the knowledge of global intensity change through fuzzy reasoning. After the comparison, multiple candidate ranges, together with grades of membership indicating confidence levels, are obtained along each direction. The most consistent range in each direction is selected to guide the epicardial boundary search. Multiple candidate epicardial boundaries are then found by locating the zero-crossing points in the range, and the one with the best consistency is selected as the epicardial boundary. Both endocardial and epicardial boundaries are then smoothed based upon the radii of their spatial neighbors. The final boundaries are obtained by applying a cardinal spline interpolation algorithm. Since our approach is based on fuzzy reasoning techniques and takes global information into consideration, an accurate and smooth result is obtained.
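The core low-level operation, finding a Laplacian-of-Gaussian zero crossing along a ray from the estimated ventricle centre, can be sketched as follows; the sigma, step size, and stopping rule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def radial_zero_crossing(image, center, angle_deg, sigma=2.0, max_radius=100):
    """First zero-crossing of the Laplacian-of-Gaussian response along a
    ray from the estimated ventricle centre.

    This is the basic operation used to locate a boundary candidate
    along each search direction.
    """
    log_img = gaussian_laplace(np.asarray(image, dtype=np.float64), sigma=sigma)
    cy, cx = center
    theta = np.radians(angle_deg)
    prev = None
    for r in range(1, max_radius):
        y = int(round(cy + r * np.sin(theta)))
        x = int(round(cx + r * np.cos(theta)))
        if not (0 <= y < log_img.shape[0] and 0 <= x < log_img.shape[1]):
            break
        val = log_img[y, x]
        if prev is not None and prev * val < 0:
            return r                      # radius of the boundary candidate
        prev = val
    return None
```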
The difficult task of segmenting cytologic images requires careful examination of the data and of the goal to be achieved. In this paper three segmentation schemes are introduced, each geared toward a specific type of data or type of analysis. A method for separating touching objects is also shown.
Specular microscopy permits in vivo quantitative morphometric analysis of the corneal endothelium. Cell density and shape correlate closely with the ability of the corneal endothelium to dehydrate the corneal stroma and maintain the clarity of the cornea. Most investigators use manual cell tracing to obtain cell boundaries. The tedious tracing process may be facilitated using a planimeter and digitizer, but it is quite subjective. Several papers have reported successful automation of corneal endothelial cell morphometric analysis using contact specular microscopy [1-4]. Non-contact specular microscopy has been more difficult to automate because of tear-film and epithelial cell reflections [5]. The non-contact specular image has a strong background gray-scale gradient, low contrast, thick boundaries, and considerable extraneous non-boundary structure. The objective of this research was to develop an automated corneal endothelial cell morphometric analysis of images obtained by non-contact specular microscopy, and to compare the results obtained by the automated technique with manual cell tracing for speed and accuracy.
Recent developments in digital imaging and Picture Archiving and Communication Systems (PACS) allow physicians and radiologists to assess radiographic images directly in digital form through imaging workstations. The development of medical workstations has so far been oriented primarily toward providing a convenient tool for rapid display of images. In this project our goal was to design and evaluate a personal desktop workstation that provides a large number of clinically useful image analysis tools. The hardware used is a standard Macintosh II interfaced to our existing PACS network through an Ethernet interface using standard TCP/IP communication protocols. Special emphasis was placed on the design of the user interface to allow clinicians with minimal or no computer manipulation skills to use complex analysis tools.
Supercomputer facilities have been applied to a numerically intensive problem in medical image processing: converting Magnetic Resonance Imaging (MRI) data into a useful information product. The motivation for this work is the "information overload" that radiologists currently experience with the overwhelming amount of data that MRI scans produce. The work was encouraged by past success in using image processing on earth observation satellite programs. The objectives of this work were to determine whether the source data, multiple MRI echoes, could be converted into one tissue map, and to assess the computational requirements. We found that vectorizing the numerically intensive kernels reduces CPU time by a factor of 2-3. Our initial experience with the application of fuzzy and ISODATA clustering analysis shows that it provides data dimension reduction and improved tissue specificity, and gives the radiologist a more quantitative diagnostic tool.
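A minimal hard-clustering sketch in the ISODATA spirit (without the split/merge heuristics of full ISODATA, and without the fuzzy memberships of fuzzy clustering) illustrates how multi-echo pixels are grouped into a tissue map; the class count and iteration limit are illustrative.

```python
import numpy as np

def kmeans_tissue_map(echoes, n_classes=4, n_iter=20, rng=None):
    """Hard clustering of multi-echo MRI into a tissue map.

    `echoes` is a list of co-registered 2-D echo images; each pixel
    becomes a feature vector and is assigned to the nearest of
    `n_classes` cluster centroids, which are re-estimated iteratively.
    """
    rng = rng or np.random.default_rng(0)
    data = np.stack([np.asarray(e, dtype=np.float64).ravel() for e in echoes],
                    axis=1)
    centroids = data[rng.choice(len(data), n_classes, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_classes):
            if np.any(labels == k):
                centroids[k] = data[labels == k].mean(axis=0)
    return labels.reshape(np.asarray(echoes[0]).shape)
```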
Medical researchers are seeking a method for detecting chromosomal abnormalities in unborn children without requiring invasive procedures such as amniocentesis. Software has been developed that uses a light microscope to detect fetal cells occurring with very low frequency in a sample of maternal blood. This rare-event detection involves dividing a microscope slide containing a maternal blood sample into as many as 40,000 fields, automatically focusing on each field of view, and searching for fetal cells. Size and shape information is obtained by calculating a figure of merit through various binary operations and is used to discriminate fetal cells from noise and artifacts. Once the rare fetal cells are located, the slide is automatically rescanned to count the total number of cells on the slide. Binary operations and image processing hardware are used as much as possible to reduce the total time needed to analyze one slide. The current runtime for scoring one full slide is about four hours, with motorized stage movement and focusing being the speed-limiting factors. Fetal cells occurring with a frequency of less than 1 in 200,000 maternal cells have been consistently found with this system.
Measurements of terrestrial plant photosynthesis frequently exploit sensing of gas exchange from leaves enclosed in gas-tight, climate-controlled chambers. These methods are typically slow and do not resolve variation in photosynthesis below the whole-leaf level. A photosynthesis visualization technique is presented that forms images of leaves from chlorophyll (Chl) fluorescence light. Images of Chl fluorescence from whole leaves undergoing steady-state photosynthesis, photosynthesis induction, or response to stress agents were digitized during light flashes that saturated photochemical reactions. Use of saturating flashes permitted deconvolution of photochemical energy use from biochemical quenching mechanisms (qN) that dissipate excess excitation energy, otherwise damaging to the light harvesting apparatus. Combination of the digital image frames of variable fluorescence with reference frames obtained from the same leaves when dark-adapted permitted derivation of frames in which grey scale represented the magnitude of qN. Simultaneous measurements with gas-exchange apparatus provided data for non-linear calibration filters for subsequent rendering of grey-scale "images" of photosynthesis. In several experiments, significant non-homogeneity of photosynthetic activity was observed following treatment with growth hormones, shifts in light or humidity, and infection by virus. The technique provides a rapid, non-invasive probe for stress physiology and plant disease detection.
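To make the qN derivation concrete, here is a pixel-wise sketch using one common definition of non-photochemical quenching; the abstract does not give the exact expression, so the formula, clipping, and variable names below are illustrative assumptions only.

```python
import numpy as np

def qn_image(fm_dark, f0_dark, fm_prime):
    """Pixel-wise quenching image from fluorescence frames.

    Uses one common definition, qN = (Fm - Fm') / (Fm - F0), where Fm and
    F0 are the dark-adapted maximal and minimal fluorescence reference
    frames and Fm' is the maximal fluorescence during a saturating flash
    under actinic light.  This is an illustrative choice, not necessarily
    the paper's formula.
    """
    fm = np.asarray(fm_dark, dtype=np.float64)
    f0 = np.asarray(f0_dark, dtype=np.float64)
    fmp = np.asarray(fm_prime, dtype=np.float64)
    denom = np.clip(fm - f0, 1e-6, None)
    return np.clip((fm - fmp) / denom, 0.0, 1.0)
```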
The object of our study was to investigate the feasibility of improving the specificity of liver Magnetic Resonance Imaging (MRI) by subjecting liver MR images to quantitative analysis within a pattern recognition framework. All imaging was performed with a 1.5 Tesla scanner. Four quantitative features were measured from each patient's sample image showing the largest dimension of the lesion. K-Nearest-Neighbor (KNN) classification was applied. Higher than 90 percent accuracy was achieved for differentiating abnormal from normal; less success was achieved in differentiating among different types of abnormalities.
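A minimal sketch of the KNN classification step (Euclidean distance, majority vote) applied to per-lesion feature vectors; the feature layout and the value of k are illustrative assumptions.

```python
import numpy as np

def knn_classify(train_features, train_labels, sample, k=3):
    """K-Nearest-Neighbor classification of a lesion feature vector.

    Each sample is the vector of quantitative features measured from the
    image slice showing the lesion's largest dimension; the predicted
    class is the majority label among the k nearest training samples
    under Euclidean distance.
    """
    x = np.asarray(train_features, dtype=np.float64)
    d = np.linalg.norm(x - np.asarray(sample, dtype=np.float64), axis=1)
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
    values, counts = np.unique(nearest, return_counts=True)
    return values[counts.argmax()]
```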
We discuss various methods of tuning and improving the strength of the optical sectioning property of both brightfield and fluorescent confocal scanning microscopes. We consider techniques based on using different wavelengths, different sized detectors and pupil plane filters. Experimental results are presented.