Non-negative matrix factorization (NMF) of an input data matrix into a matrix of basis vectors and a matrix of encoding coefficients
is a subspace representation method that has recently attracted the attention of researchers in pattern recognition. We
have explored crucial aspects of NMF in extensive recognition experiments with the ORL database of faces, whose images
contain intuitively clear parts constituting the whole. By fundamentally restructuring the learning stage and by formulating
a separate NMF problem for each a priori given part, we developed a novel modular NMF algorithm. Although this
algorithm provides uniquely separated basis vectors which code individual face parts in accordance with the parts-based
principle of the NMF methodology applied to object recognition problems, the significant improvement of recognition
rates for occluded parts predicted in several papers was not achieved. We conclude that using the parts-based concept in NMF
as a basis for solving recognition problems with occluded objects has not been justified.
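The factorization described above, V ≈ WH with non-negative factors, is commonly computed with Lee-Seung multiplicative updates. The following minimal sketch illustrates standard NMF only; it is not the modular algorithm developed in the paper, and the matrix sizes are arbitrary:

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Factor non-negative V (m x n) into W (m x r) and H (r x n)
    by minimizing ||V - WH||_F with multiplicative updates."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update encoding coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis vectors
    return W, H

V = np.random.default_rng(1).random((20, 30))  # toy non-negative data
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Both factors stay non-negative throughout, which is what produces the additive, parts-like structure of the basis vectors.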
In recent years, non-negative matrix factorization (NMF) methods for reduced image data representation have attracted the
attention of the computer vision community. These methods are considered a convenient parts-based representation
of image data for recognition tasks with occluded objects. In this paper two novel modifications of NMF
are proposed which utilize the matrix sparseness control introduced by Hoyer. We have analyzed the influence of
sparseness on recognition rates (RR) for various dimensions of subspaces generated for two image databases. We
have studied the behaviour of four types of distances between a projected unknown image object and feature
vectors in NMF subspaces generated for training data. For occluded ORL face data, Euclidean and diffusion
distances perform better than Riemannian ones, contrary to the common expectation that the Euclidean metric is
suitable only for orthogonal basis vectors. For occluded USPS digit data, the RR obtained with the
modified NMF algorithm are very close to those of the conventional NMF algorithms for all four
distances over all dimensions and sparseness constraints. In this case, Riemannian distances provide higher RR
than Euclidean and diffusion ones. The proposed modified NMF method has a relevant computational benefit,
since it does not require a separate calculation of feature vectors, which are explicitly generated in the NMF optimization
process.
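Hoyer's sparseness measure mentioned above has a simple closed form based on the ratio of L1 and L2 norms; a minimal sketch:

```python
import numpy as np

def hoyer_sparseness(x):
    """Hoyer's sparseness of a non-zero vector x: 0 for a uniform
    vector, 1 for a vector with a single non-zero entry."""
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

dense = hoyer_sparseness(np.ones(100))     # uniform vector -> 0
sparse = hoyer_sparseness(np.eye(100)[0])  # one active entry -> 1
```

Constraining the columns of W (or rows of H) to a target value of this measure is what Hoyer's projected-gradient scheme does at each iteration.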
Erythropoietin (Epo) is a hormone which can be misused as a doping substance. Its detection involves analysis of images containing
specific objects (bands), whose position and intensity are critical for doping positivity. Within a research project of the World Anti-Doping Agency (WADA), we are implementing the GASepo software, which should serve for Epo testing in doping control laboratories worldwide. For identification of the bands we have developed a segmentation
procedure based on a sequence of filters and edge detectors. Whereas all true bands are properly segmented, the procedure generates a relatively high number of false positives (artefacts). To separate these artefacts we propose a post-segmentation supervised classification using real-valued geometrical measures of the objects.
The method is based on Ross Quinlan's ID3 rule-generation method, where a fuzzy representation is used to link linguistic terms to quantitative data. The fuzzy modification of the ID3 method provides a framework that generates fuzzy decision trees, as well as fuzzy sets for the input data. Using the MLT(TM) software (Machine Learning Framework) we have generated a set of fuzzy rules explicitly describing bands and artefacts. The method eliminated most of the artefacts. The contribution includes a comparison of the obtained misclassification errors with the errors produced by several other statistical classification methods.
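The linking of linguistic terms to quantitative geometrical measures can be illustrated with a triangular membership function; the terms and breakpoints below are hypothetical examples, not the fuzzy sets generated by the MLT software:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# hypothetical linguistic terms for an object's elongation measure:
# a value of 1.8 is partly "low" and mostly "high"
mu_low = triangular(1.8, 0.0, 1.0, 2.0)
mu_high = triangular(1.8, 1.0, 2.0, 3.0)
```

A fuzzy rule such as "IF elongation is high AND area is small THEN artefact" then fires with a degree computed from these memberships rather than from crisp thresholds.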
The function of image processing systems usually consists of three basic steps: (i) image acquisition, (ii) image processing, and (iii) output of the results, with possible intermediate image data presentation. 'Off-the-shelf' architectures, used, e.g., in standard personal computers, achieve the necessary bus performance in the required real time in only a few inline applications. Highly innovative bus systems with pipeline architectures are therefore needed to ensure huge data throughput in specifically oriented optical quality inspection systems. Imaging systems for inline quality inspection have to meet specific requirements on data acquisition and communications, mainly:
- to ensure connection of various sensors of heterogeneous nature, such as area scan cameras, line scan cameras, and fast CMOS cameras with selectable readout capabilities,
- to incorporate carefully designed high-speed communication buses between the processing nodes and the sensor adapters,
- to achieve a good balance between real-time behavior, flexibility, performance, and cost efficiency.
We have developed innovative real-time communication buses for the real-time imaging system designed at ARC Seibersdorf research Ltd. and used in optical quality inspection of printed matter. In this paper we address in detail:
- the hardware layout and the individual functions of the innovative buses,
- the evaluation of the behavior of the buses in a quality inspection system with extreme real-time demands.
All benefits are explained and numerically documented, which can be of interest for readers dealing with other applications.
KEYWORDS: Digital signal processing, Computer programming, Software development, Image processing, Signal processing, Clocks, Real time imaging, Algorithm development, Real time image processing, Imaging systems
Although the hardware platform is often seen as the most important element of real-time imaging systems, software optimization can also provide a remarkable reduction of overall computational costs. The recommended code development flow for digital signal processors based on the TMS320C6000(TM) architecture usually involves three phases: development of C code, refinement of C code, and programming linear assembly code. Each phase requires a different level of knowledge of processor internals. The developer is not directly involved in the automatic scheduling process; in some cases, however, this may result in unacceptable code performance. A better solution can be achieved by scheduling the assembly code by hand. Unfortunately, scheduling software pipelines by hand not only requires expert skills but is also time consuming and, moreover, prone to errors. To overcome these drawbacks we have designed an innovative development tool, the Software Pipeline Optimization Tool (SPOT(TM)). The SPOT is based on visualization of the scheduled assembly code by a two-dimensional interactive schedule editor, which is equipped with feedback mechanisms deduced from analysis of data dependencies and resource allocation conflicts. The paper addresses the optimization techniques made available by the SPOT. Furthermore, the benefit of the SPOT is documented by more than 20 optimized image processing algorithms.
Some technical applications need a fast and reliable OCR for critical circumstances such as low resolution and poor contrast. A concrete example is the real-time quality inspection system for Austrian banknotes. One requirement on the system is that it has to read two serial numbers on each banknote and check whether they are identical. To solve the problem we have developed a novel method based on an idea similar to pattern matching. However, instead of comparing entire images we use reduced sets of pixels, one for each different numeral. The detection is performed by matching these pixel sets with the corresponding pixels in the image being analyzed. We present an algorithm based on two cost functions that computes the reduced pixel sets from a given set of image templates in a reasonable time. The efficiency of our OCR has been increased considerably by introducing an appropriate set of image preprocessing operations. These are tailored especially to images with low resolution and poor contrast, but they are simple enough to allow a fast real-time implementation. They can be seen as a normalization step that improves the image properties which are essential for pattern matching.
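The reduced-pixel-set matching can be sketched as follows. The toy glyphs and hand-picked pixel sets below are illustrative assumptions only; they are not outputs of the cost-function algorithm described in the paper:

```python
def match_score(image, pixel_set):
    """Fraction of a numeral's reduced pixel set that agrees with the
    binarized image; the numeral with the highest score is detected."""
    hits = sum(1 for r, c, v in pixel_set if image[r][c] == v)
    return hits / len(pixel_set)

# toy 3x3 binary glyph for "1" (illustrative only)
one = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]

# hand-picked discriminative pixels as (row, col, expected value)
set_one = [(0, 0, 0), (0, 1, 1), (1, 1, 1), (2, 1, 1)]
set_seven = [(0, 0, 1), (0, 2, 1), (1, 1, 1), (2, 0, 1)]

score_one = match_score(one, set_one)      # full agreement
score_seven = match_score(one, set_seven)  # mostly disagrees
```

Since only a few pixels are compared per numeral instead of the entire template, the per-position cost drops accordingly, which is what makes the real-time implementation feasible.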
Matching a reference template against an image is a computationally expensive job. Particularly in fast real-time applications, large images and search ranges lead to serious implementation problems. Therefore a reduction of the template size, achieved by selecting an appropriate subtemplate which is used for point correlation (subtemplate matching), may significantly decrease computational cost. In this paper a modified algorithm for subtemplate point selection is proposed and explored. With the additional use of image pyramids, we can reduce the computational costs even further. The algorithm starts with a coarse search grid in the top level of the image pyramid generated for the full intended resolution. The procedure continues until the lowest level of the pyramid, the original image, is reached. The computational costs of this part of the algorithm satisfy the requirement for on-line processing. The preparation of the subtemplate for the point correlation is carried out in off-line mode, i.e., there is no rigorous limit on computational costs. The technique that applies point correlation to image template matching within the image pyramid concept is proposed and the results obtained are discussed. It is especially useful for fast real-time system implementation when a large number of template matchings are needed in the same image.
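The coarse-to-fine idea can be sketched as follows. The point-selection criterion used here (highest-contrast pixels) and the two-level pyramid are simplifying assumptions standing in for the paper's cost-function-based off-line selection:

```python
import numpy as np

def point_set(tmpl, k):
    """Reduce a template to its k highest-contrast pixels
    (a simple stand-in for the paper's subtemplate point selection)."""
    flat = np.abs(tmpl - tmpl.mean()).ravel()
    idx = np.argsort(flat)[-k:]
    pts = np.column_stack(np.unravel_index(idx, tmpl.shape))
    return pts, tmpl[pts[:, 0], pts[:, 1]].astype(float)

def search(img, pts, vals, rows, cols):
    """Exhaustive point correlation (SSD) over the given offsets."""
    best, pos = np.inf, None
    for r in rows:
        for c in cols:
            d = img[pts[:, 0] + r, pts[:, 1] + c] - vals
            s = float((d * d).sum())
            if s < best:
                best, pos = s, (r, c)
    return pos

def pyramid_match(img, tmpl, k=16):
    """Coarse search on a 2x-downsampled level, then refinement
    in a +/-1 window at full resolution."""
    ci, ct = img[::2, ::2], tmpl[::2, ::2]
    cpts, cvals = point_set(ct, k)
    ch, cw = ct.shape
    r0, c0 = search(ci, cpts, cvals,
                    range(ci.shape[0] - ch + 1), range(ci.shape[1] - cw + 1))
    pts, vals = point_set(tmpl, k)
    H, W = tmpl.shape
    rows = range(max(0, 2 * r0 - 1), min(img.shape[0] - H, 2 * r0 + 1) + 1)
    cols = range(max(0, 2 * c0 - 1), min(img.shape[1] - W, 2 * c0 + 1) + 1)
    return search(img, pts, vals, rows, cols)

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = rng.random((8, 8))
img[6:14, 8:16] = tmpl          # plant the template at row 6, col 8
found = pyramid_match(img, tmpl)
```

The coarse level visits only a quarter of the offsets, and each evaluation touches k points rather than the whole template, which is the source of the cost reduction discussed above.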
The major goal of this survey is to provide the reader with the motivation of image filtering and segmentation in diagnostic imaging, with a brief overview of the state-of-the-art of nonlinear filters based on geometry-driven diffusion (GDD), and with a possible generalization of GDD filtering towards the complex problem of image segmentation, stated as minimization of particular energy functionals. An example of the application of GDD filtering to the task of 3D visualization of MRI data of the brain is illustrated and discussed in the paper.
KEYWORDS: Diffusion, Image filtering, Brain, Magnetic resonance imaging, Head, Nonlinear filtering, Neuroimaging, Modulation, Digital filtering, Signal to noise ratio
The paper deals with geometry-driven diffusion filtering of MR tomograms of the human brain. A method for making the conductance function locally adjustable has been proposed. It is based on quantitative measures of pixel dissimilarities in neighborhoods, which control the selection of an appropriate parameter of the exponential conductance. Three kinds of dissimilarity have been proposed and tested on an MR head phantom, as well as on a real MR tomogram. A study of the separate filtering effects on region interiors and boundaries has been performed using histograms of local conductance.
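The exponential-conductance diffusion underlying this work can be sketched with the classical Perona-Malik scheme. Note this sketch uses a single global conductance parameter k, whereas the paper's contribution is precisely to make it locally adjustable; the boundary handling and parameter values are illustrative assumptions:

```python
import numpy as np

def diffuse(img, k=15.0, lam=0.2, iters=20):
    """Geometry-driven diffusion with exponential conductance
    g(s) = exp(-(s/k)^2): smooths small (noise) gradients while
    leaving large (edge) gradients almost untouched."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / k) ** 2)  # exponential conductance
    for _ in range(iters):
        # finite differences towards the four neighbours
        # (np.roll wraps around: periodic boundary, fine for a sketch)
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        u = u + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u

# noisy vertical step edge: noise is smoothed, the edge survives
rng = np.random.default_rng(2)
step = np.zeros((32, 32))
step[:, 16:] = 100.0
noisy = step + rng.normal(0.0, 5.0, step.shape)
out = diffuse(noisy)
```

Making k depend on local pixel dissimilarities, as the paper proposes, replaces the global constant in g with a per-pixel value, so smoothing strength adapts to neighborhood structure.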