This paper introduces a novel methodology for prognostics based on a dynamic wavelet neural network construct and notions from the virtual sensor area. This research has been motivated and supported by the U.S. Navy's active interest in integrating advanced diagnostic and prognostic algorithms in existing Naval digital control and monitoring systems. A rudimentary diagnostic platform is assumed to be available, providing timely information about incipient or impending failure conditions. We focus on the development of a prognostic algorithm capable of predicting accurately and reliably the remaining useful lifetime of a failing machine or component. The prognostic module consists of a virtual sensor and a dynamic wavelet neural network as the predictor. The virtual sensor employs process data to map real measurements into difficult-to-monitor fault quantities. The prognosticator uses a dynamic wavelet neural network as a nonlinear predictor. Means to manage uncertainty and performance metrics are suggested for comparison purposes. An interface to an available shipboard Integrated Condition Assessment System is described and applications to shipboard equipment are discussed. Typical results from pump failures are presented to illustrate the effectiveness of the methodology.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
A general method for tracking the evolution of hidden damage processes and predicting remaining useful life is presented and applied experimentally to an electromechanical system with a failing supply battery. The fundamental theory for the method is presented. In this theory, damage processes are viewed as occurring in a hierarchical dynamical system consisting of a 'fast', directly observable subsystem coupled with a 'slow', hidden subsystem describing damage evolution. In the algorithm, damage tracking is achieved using a two-time-scale modeling strategy based on phase space reconstruction. Using the reconstructed phase space of the reference (undamaged) system, short-time predictive models are constructed. Fast-time data from later stages of damage evolution of a given system are collected and used to estimate the short-time reference model prediction error, which serves as a tracking metric. The tracking metric is used as an input to a nonlinear recursive filter, the output of which provides an estimate of the current damage state. Estimates of remaining useful life are obtained recursively using the current damage state estimates under the assumption of a particular battery voltage evolution model. In the experimental application, the method is shown to accurately estimate both the battery state and the time to failure throughout the whole experiment.
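The two-time-scale tracking idea can be illustrated with a minimal sketch: reconstruct the phase space of reference (undamaged) data by delay embedding, then score later data by the short-time prediction error of a nearest-neighbor reference model. This is an editor's illustration, not the authors' implementation; the nearest-neighbor predictor, the embedding parameters, and the sine-wave test data are all assumptions.

```python
import math

def delay_embed(x, dim, tau):
    """Reconstruct a phase space from a scalar time series by delay embedding."""
    return [tuple(x[i + j * tau] for j in range(dim))
            for i in range(len(x) - (dim - 1) * tau)]

def tracking_metric(reference, observed, dim=2, tau=1):
    """Mean short-time prediction error of a nearest-neighbor reference model,
    evaluated on later (possibly damaged) data."""
    ref = delay_embed(reference, dim, tau)
    obs = delay_embed(observed, dim, tau)
    err, n = 0.0, 0
    for k in range(len(obs) - 1):
        # find the reference state nearest to the current observed state
        j = min(range(len(ref) - 1),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(ref[i], obs[k])))
        # the reference model predicts the successor of that neighbor
        pred = ref[j + 1]
        err += math.dist(pred, obs[k + 1])
        n += 1
    return err / n
```

Data drawn from the same dynamics as the reference yields a near-zero metric; data whose dynamics have drifted (here, a changed amplitude standing in for damage) scores higher, and that rising score is what the recursive filter would consume.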
Prognostics, which refers to the inference of an expected time-to-failure for a mechanical system, is made difficult by the need to track and predict the trajectories of real-valued system parameters over essentially unbounded domains, and by the need to prescribe a subset of these domains in which an alarm should be raised. In this paper we propose a novel technique whereby these problems are avoided: instead of physical system or sensor parameters, sensor-level test-failure probability vectors (bounded within the unit hypercube) are tracked; and via a close relationship with the TEAMS suite of modeling tools, the terminal states for all such vectors can be enumerated. To perform the tracking, a Kalman filter with associated interacting multiple model switching between failure regimes is proposed, and simulation results indicate that performance is promising.
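A scalar stand-in for the proposed tracker: a one-dimensional Kalman filter following a slowly drifting test-failure probability under a random-walk state model. The full method tracks probability vectors in the unit hypercube with interacting multiple models; here the hypercube constraint is approximated by clipping, and the noise variances are assumed values.

```python
def kalman_track(observations, q=1e-4, r=1e-2):
    """Track a slowly drifting test-failure probability with a scalar
    Kalman filter (random-walk state model).
    q: process noise variance, r: measurement noise variance."""
    x, p = observations[0], 1.0    # initial state estimate and variance
    track = []
    for z in observations:
        p = p + q                  # predict: random walk inflates variance
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the innovation
        p = (1 - k) * p
        x = min(1.0, max(0.0, x))  # keep the estimate a valid probability
        track.append(x)
    return track
```

A step change in the underlying failure probability (a regime switch, which the interacting-multiple-model machinery would detect explicitly) is followed by the filter with a lag governed by q and r.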
Wiring is the nervous system of any complex system and is attached to or services nearly every subsystem. Damage to optical wiring systems can cause serious interruptions in communication, command and control systems. Electrical wiring faults and failures due to opens, shorts, and arcing can result in adverse effects to the systems serviced by the wiring. Abnormalities in a system usually can be detected by monitoring some wiring parameter such as vibration, data activity or power consumption. This paper introduces the mapping of wiring to critical functions during system engineering to automatically define the Failure Modes, Effects and Criticality Analysis. This mapping can be used to define the sensory processes needed to perform diagnostics during system engineering. This paper also explains the use of Operational Modes and Criticality Effects Analysis in the development of Sentient Wiring Systems as a means for diagnostics, prognostics and health management of wiring in aerospace and transportation systems.
Personal Computers (PCs) are getting cheaper because of fast development in Central Processing Units (CPUs) and Random Access Memory (RAM). Consequently, the cost/performance ratio has kept decreasing over the past 10 years. Three years ago, the cost of a 486 PC with a 90 MHz CPU was about $2500. Now we are able to purchase a Pentium 450 MHz PC that is at least 10 times faster than the 486 at essentially the same cost. It is now becoming realistic to use a PC for many real-time applications. This motivates us to develop PC-based real-time Health Monitoring (HM) tools instead of special-purpose DSP-based tools. There are three major advantages of using PC-based HM tools.
The ability to understand a system's behavior in both normal and failed conditions is fundamental to the design of error-tolerant systems as well as to the development of diagnostics. The System Analysis for Failure and Error Reduction (SAFER) Project seeks to provide designers with tools to visualize potential sources of error and their effects early in the design of human-machine systems. The project is based on an existing technology that provides a failure-space modeling environment, analysis capabilities for troubleshooting, and error diagnostics using design data of machine systems. The SAFER Project extends the functionality of the existing technology in two significant ways. First, by adding a model of human error probability within the tool, designers are able to estimate the probabilities of human errors and the effects that these errors may have on system components and on the entire system. Second, the visual presentation of failure-related measures and metrics has been improved through a process of user-centered design. This paper will describe the process that was used to develop the human error probability model and will present novel metrics for assessing failure within complex systems.
Modern systems such as nuclear power plants, the Space Shuttle or the International Space Station are examples of mission critical systems that need to be monitored around the clock. Such systems typically consist of embedded sensors in networked subsystems that can transmit data to central (or remote) monitoring stations. At Qualtech Systems, we are developing a Remote Diagnosis Server (RDS) to implement a remote health monitoring system based on telemetry data from such systems. RDS can also be used to provide online monitoring of sensor-rich, network capable, legacy systems such as jet engines, building heating-ventilation-air-conditioning systems, and automobiles. The International Space Station utilizes a highly redundant, fault tolerant, software configurable, complex, 1553 bus system that links all major sub-systems. All sensor and monitoring information is communicated using this bus and sent to the ground station via telemetry. It is, therefore, a critical system and any failures in the bus system need to be diagnosed promptly. We have modeled a representative section of the ISS 1553 bus system using publicly accessible information. In this paper, we present our modeling and analysis results, and our Telediagnosis solution for monitoring and diagnosis of the ISS based on telemetry data.
We develop hardware and software for a long-term storage telemetry digital system. The system digitally stores data from 33 analog channels in a PCMCIA ATA flash memory card for several hours at a low sample rate. The system is a portable, battery-powered unit containing a 33-channel analog-to-digital converter, a C/C++ programmable microcontroller, and PCMCIA memory. The proposed system can be reconfigured for up to 66 channels. The implemented unit is lightweight (about 1 pound). The unit records, converts, and stores the electric signals from sensors during equipment operation. The flash memory is later downloaded to a commercial desktop or portable computer in a laboratory for diagnostic purposes. Data are stored in PCMCIA memory in blocks of 512 bytes (one sector). To optimize the available memory, we used a compression technique based on wavelet functions.
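The wavelet-based compression step might look like the following single-level Haar transform sketch, in which small detail coefficients are zeroed before storage. The paper's actual wavelet family, decomposition depth, and coding scheme are not specified, so this is only an assumed illustration.

```python
def haar_compress(data, threshold):
    """One-level Haar transform of an even-length sequence; detail
    coefficients below the threshold are zeroed (lossy compression)."""
    approx = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
    detail = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
    detail = [d if abs(d) >= threshold else 0.0 for d in detail]
    return approx, detail

def haar_decompress(approx, detail):
    """Invert the transform: each (average, difference) pair restores
    the original sample pair (a + d, a - d)."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out
```

With the threshold at zero the round trip is exact; raising it discards the small high-frequency details, which for slowly varying sensor signals costs little reconstruction accuracy while letting the zeroed coefficients compress well.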
The deregulation of energy markets, the ongoing advances in communication networks, the proliferation of intelligent metering and protective power devices, and the standardization of software/hardware interfaces are creating a dramatic shift in the way facilities acquire and utilize information about their power usage. The currently available power management systems gather a vast amount of information in the form of power usage, voltages, currents, and their time-dependent waveforms from a variety of devices (for example, circuit breakers, transformers, energy and power quality meters, protective relays, programmable logic controllers, motor control centers). What is lacking is an information processing and decision support infrastructure to harness this voluminous information into usable operational and management knowledge to manage equipment health and power quality, minimize downtime and outages, and optimize operations to improve productivity. This paper considers the problem of capacity and reliability analysis of power systems with very high availability requirements (e.g., systems providing energy to data centers and communication networks with desired availability of up to 0.9999999). The real-time capacity and margin analysis helps operators plan for additional loads and schedule repair/replacement activities. The reliability analysis, based on computationally efficient sums of disjoint products, enables analysts to decide the optimum levels of redundancy, aids operators in prioritizing maintenance options for a given budget, and supports monitoring the system for capacity margin. The resulting analytical and software tool is demonstrated on a sample data center.
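The redundancy arithmetic behind such availability targets can be sketched as below: for a purely parallel arrangement, the disjoint-products idea reduces to "one minus the probability that every path is down". The component availabilities and the two-feed topology are assumptions for illustration, not figures from the paper.

```python
def series(avails):
    """Availability of components in series: all must be up."""
    a = 1.0
    for x in avails:
        a *= x
    return a

def parallel(avails):
    """Availability of independent redundant paths, via disjoint events:
    P(at least one path up) = 1 - P(all paths down)."""
    down = 1.0
    for x in avails:
        down *= (1.0 - x)
    return 1.0 - down

# assumed topology: two independent feeds, each a chain utility -> UPS -> PDU
feed = series([0.999, 0.9999, 0.99995])
system = parallel([feed, feed])
```

Each feed alone reaches roughly "three nines"; duplicating the feed pushes the system past "six nines", which is how redundancy levels are traded off against an availability target such as 0.9999999.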
Computing the diagnosability of a discrete-valued system (such as an avionics system), or conversely, a set of test vectors to efficiently determine system diagnosability, is a well-known task within the area of system diagnostics. There are a number of approaches that have been adopted for this task, and many tools have been developed and are available commercially. This article describes a new approach for this task, using techniques developed within the model-based diagnostics (MBD) community. The benefits of this new approach are: (1) the same model used for system design and analysis can be used for diagnosability testing; and (2) a diagnosability model (or set of test vectors) can be compiled from the MBD model, without having to maintain one model for design and another for diagnosability.
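A minimal notion of diagnosability can be sketched with a test-dependency matrix (D-matrix): faults whose rows of test outcomes are identical cannot be isolated from one another. This is an editor's sketch of the general idea, not the MBD compilation procedure the article describes.

```python
def diagnosable(d_matrix):
    """Check whether every fault has a distinct test signature.
    d_matrix[f][t] = 1 if test t detects fault f.  Returns a flag and
    the indices of faults that share a signature with another fault."""
    signatures = [tuple(row) for row in d_matrix]
    ambiguous = [i for i, s in enumerate(signatures) if signatures.count(s) > 1]
    return len(ambiguous) == 0, ambiguous
```

Compiling a set of test vectors then amounts to choosing enough column subsets to keep all rows distinct, which is the enumeration a modeling tool can perform over the full system model.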
Within the context of preventive health maintenance in complex engineering systems, novel sensor fault detection methodologies are developed for an aircraft auxiliary power unit. Promising results under operational and sensor-failure conditions are obtained for temperature and pressure sensors. In the proposed methodology, covariance and noise analyses of sensor data are performed first. Next, auto-associative and hetero-associative neural networks for sensor validation are designed and trained. These neural networks are used together to provide validation for pressure and temperature sensors. The last step consists of developing detection and identification logic for sensor faults. In spite of high noise levels, the methodology is shown to be very robust. More than 90% correct sensor failure detection is achieved when noise on the order of the noise inherently present in sensor readings is added.
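The residual-based detection logic can be sketched as follows. A trained auto-associative network would estimate each sensor from the others; here a robust median across redundant sensors stands in for that network, and the readings and threshold are assumed values.

```python
import statistics

def validate_sensors(readings, estimate, threshold):
    """Flag sensors whose reading disagrees with an analytical estimate.
    `estimate(readings, i)` plays the role of the trained auto-associative
    network: it predicts sensor i from the remaining sensors."""
    flags = []
    for i, r in enumerate(readings):
        residual = abs(r - estimate(readings, i))
        flags.append(residual > threshold)
    return flags

def median_of_others(readings, i):
    """Toy redundancy model: duplicate sensors measuring the same quantity;
    the median keeps a single failed sensor from corrupting the estimate."""
    others = [r for j, r in enumerate(readings) if j != i]
    return statistics.median(others)
```

With three healthy sensors near 100 and one stuck high, only the faulty channel exceeds the residual threshold, which is the identification step the abstract's detection logic formalizes.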
Bayesian networks have recently become a modeling technique of choice for development of flexible, accurate, and complex diagnostic systems. These characteristics are obtained, however, at the significant cost of data and expert knowledge. It is often the case that a troubleshooting flow diagram, the most popular way of representing troubleshooting procedures, is already available for the system and can be used as a starting point for design of the Bayesian network. It turns out that conversion of the flow diagram into a Bayesian network is very similar to conversion into a diagnostic case base. We compare the case base and Bayesian network obtained by conversion with the original flow diagram, from the point of view of their diagnostic performance. We also describe a procedure for cost and time efficient enhancement of the original case base and Bayesian network. We discuss the sequencing algorithms necessary to use case bases and Bayesian networks in troubleshooting, with particular attention to decision tree and Value of Information based sequencing. We have used our design procedure in development of several complex diagnostic systems for troubleshooting of satellites, vehicles, and test equipment.
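Value-of-Information sequencing can be approximated, when all tests cost the same, by a greedy rule that picks the test splitting the current suspect set most evenly. The sketch below is an editor's simplification of the sequencing algorithms discussed, with a toy fault/test model; real VOI sequencing weighs test costs and fault priors.

```python
def next_best_test(suspects, tests):
    """Greedy one-step lookahead: choose the test whose pass/fail outcome
    splits the current suspect set most evenly (an information-gain proxy
    for Value-of-Information sequencing with equal test costs)."""
    def split_score(t):
        detected = len(suspects & tests[t])
        return abs(len(suspects) - 2 * detected)   # 0 means a perfect halving
    return min(tests, key=split_score)

def sequence_tests(suspects, tests, outcomes):
    """Apply tests until one suspect remains.  tests[t] is the set of faults
    test t covers; outcomes[t] is True when test t fails (implicating them)."""
    suspects = set(suspects)
    order = []
    while len(suspects) > 1 and tests:
        t = next_best_test(suspects, tests)
        order.append(t)
        covered = tests.pop(t)
        suspects = suspects & covered if outcomes[t] else suspects - covered
    return order, suspects
```

A decision-tree sequencer would instead precompute this branching for every outcome combination; the greedy online form shown here is what makes the troubleshooting interactive.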
Many systems are composed of components equipped with self-testing capability; however, if the system is complex, involving feedback, and the self-testing itself may occasionally be faulty, tracing faults to a single or multiple causes is difficult. Moreover, many sensors are incapable of reliable decision-making on their own. In such cases, a signal processing front-end that can match inference needs is very helpful. This work is concerned with providing an object-oriented simulation environment for signal processing and neural network-based fault diagnosis and prognosis. In the toolbox, we implemented a wide range of spectral and statistical manipulation methods, such as filters, harmonic analyzers, transient detectors, and multi-resolution decomposition, to extract features for failure events from data collected by sensors. We then evaluated multiple learning paradigms for general classification, diagnosis and prognosis. The network models evaluated include Restricted Coulomb Energy (RCE) Neural Networks, Learning Vector Quantization (LVQ), Decision Trees (C4.5), Fuzzy Adaptive Resonance Theory (FuzzyArtmap), the Linear Discriminant Rule (LDR), the Quadratic Discriminant Rule (QDR), Radial Basis Functions (RBF), Multiple Layer Perceptrons (MLP) and Single Layer Perceptrons (SLP). Validation techniques, such as N-fold cross-validation and bootstrap techniques, are employed for evaluating the robustness of network models. The trained networks are evaluated for their performance using test data on the basis of percent error rates obtained via cross-validation, time efficiency, and generalization ability to unseen faults. Finally, the use of neural networks for the prediction of the residual life of turbine blades with thermal barrier coatings is described and the results are shown. The neural network toolbox has also been applied to fault diagnosis in mixed-signal circuits.
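The N-fold cross-validation protocol used to compare the models can be sketched as below, with a trivial nearest-class-mean classifier standing in for the neural networks; the 1-D data and the classifier itself are assumptions for illustration.

```python
import random

def k_fold_error(samples, labels, train_fn, k=5, seed=0):
    """Estimate the percent error rate of a classifier via N-fold
    cross-validation: each fold is held out once for testing."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)        # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]
    errors = 0
    for fold in folds:
        train = [i for i in idx if i not in fold]
        model = train_fn([samples[i] for i in train],
                         [labels[i] for i in train])
        for i in fold:
            errors += (model(samples[i]) != labels[i])
    return 100.0 * errors / len(samples)

def nearest_mean_trainer(xs, ys):
    """Stand-in classifier: assign each sample to the nearest class mean."""
    means = {}
    for c in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == c]
        means[c] = sum(vals) / len(vals)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))
```

Every sample is scored exactly once by a model that never saw it, which is what makes the resulting percent error rate an honest basis for comparing the learning paradigms.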
The failure rate of helicopters is 2.5 times that of fixed-wing aircraft, and the lead time from fault development to complete failure can be as short as 15 minutes. Thus, it is crucial to warn pilots as far in advance as possible. Here, the Canonical Discriminant Analysis technique was applied to classify the various failure modes in a helicopter gearbox. Experimental data were used to assess the performance of the proposed algorithms. Simulation results showed that the algorithms performed extremely well.
In this paper, a method for automatically constructing a fuzzy expert system from numerical data using the ILFN network and a genetic algorithm is presented. The Incremental Learning Fuzzy Neural (ILFN) network was developed for pattern classification applications. The ILFN network, which employs fuzzy sets and neural network theory, is a fast, one-pass, on-line, incremental learning algorithm. After training, the ILFN network stores numerical knowledge in its hidden units, which can then be directly mapped into if-then rule bases. A knowledge base for fuzzy expert systems can thus be extracted from the hidden units of the ILFN classifier. A genetic algorithm is then invoked, in an iterative manner, to reduce the number of rules and select only the important input features needed by the fuzzy rule-based system. Three computer simulations using the Wisconsin breast cancer data set were performed. Using 400 patterns for training and 299 patterns for testing, the derived fuzzy expert system achieved 99.5% and 98.33% correct classification on the training set and the test set, respectively.
Predicting the clinical outcome prior to minimally invasive treatments for Benign Prostatic Hyperplasia (BPH) cases would be very useful. However, clinical prediction has not been reliable in spite of multiple assessment parameters such as symptom indices and flow rates. In this study, Artificial Intelligence (AI) algorithms are used to train computers to predict the surgical outcome in BPH patients treated by TURP or VLAP. Our aim is to investigate whether AI can reproduce the clinical outcome of known cases and assist the urologist in predicting surgical outcomes. Four different AI algorithms are used.
As the space shuttle ages, it is experiencing wiring degradation problems, including arcing, chafing, insulation breakdown and broken conductors. A systematic and comprehensive test process is required to thoroughly test and QA the wiring systems. The NASA Wiring Integrity Research (WIRe) team recognized the value of a formal model based analysis for risk assessment and fault coverage analysis using our TEAMS toolset and commissioned a pilot study with QSI to explore means of automatically extracting high fidelity multisignal models from wiring information databases. The MEC1 Shuttle subsystem was the subject of this study. The connectivity and wiring information for the model was extracted from a Shuttle Connector Analysis Network (SCAN) electronic wirelist. Using this wirelist, QSI concurrently created manual and automatically generated wiring models for all wire paths associated with connector J3 on the MEC1 assembly. The manually generated model helped establish the rules of modeling. The complete MEC1 model was automatically generated based on these rules, thus saving significant modeling cost. The methodology is easily extensible to the entire shuttle wiring system. This paper presents our modeling and analysis results from the pilot study along with our proposed solutions to the complex wiring integrity assessment problem.
This paper documents the general findings and recommendations of the Design for Safety Program's Study of the Space Shuttle Program's (SSP) Problem Reporting and Corrective Action (PRACA) System. The goals of this Study were: to evaluate and quantify the technical aspects of the SSP's PRACA systems, and to recommend enhancements addressing specific deficiencies in preparation for future system upgrades. The Study determined that the extant SSP PRACA systems accomplished a project-level support capability through the use of a large pool of domain experts and a variety of distributed formal and informal database systems. This operational model is vulnerable to staff turnover and loss of the vast corporate knowledge that is not currently being captured by the PRACA system. A need for a Program-level PRACA system providing improved insight, unification, knowledge capture, and collaborative tools was defined in this Study.
Historically, the United States Navy has operated and performed maintenance utilizing either reactive or preventive maintenance philosophies. Recently, with the continual shrinking of resources, both monetary and personnel, the Navy has looked at various ways of reducing the workload and cumbersome work practices that its personnel have to perform. However, this is to be accomplished while maintaining its high level of readiness. [In fact, the CNO (ADM Vern Clark) has indicated that the top five priorities for the Navy are manpower, current readiness, future readiness, quality of service, and Navy-wide alignment.]1 Due to these two requirements, the Navy has mandated a shift from its present maintenance philosophy, i.e., the Planned Maintenance System (PMS), to one that utilizes Condition Based Maintenance (CBM) and Reliability Centered Maintenance (RCM) principles. Simply put, the Navy wants to shift from a calendar-based maintenance system, i.e., performing maintenance every so many days or months, to a maintenance system that is based upon the condition (performance and operation) of the equipment in question. To meet this objective, the Navy needed to apply condition-monitoring strategies for its ships' engineering equipment. The Navy chose to apply the Integrated Condition Assessment System (ICAS) to fill this requirement. ICAS has multiple applications that can be and are used by the Navy to help reduce workloads. However, one of the key requirements that ICAS fills to enable CBM is the ability to trend machinery performance and diagnose machinery health. These two areas are key to enabling CBM within the Navy. With these tools, ICAS has the ability to turn the operational data that the new Machinery Control Systems (MCS) and the new sensor information systems provide into useful and usable information. This information can be used for the diagnosis of failures and the indication of possible future fault conditions.
Diagnosis and prognosis are processes of assessment of a system's health - past, present and future - based on observed data and available knowledge about the system. Due to the nature of the observed data and the available knowledge, the diagnostic and prognostic methods are often a combination of statistical inference and machine learning methods. The development (or selection) of appropriate methods requires appropriate formulation of the learning and inference problems that support the goals of diagnosis and prognosis. An important aspect of the formulation is modeling - relating the real system to its mathematical abstraction. The models, depending on the application and how well it is understood, can be either empirical or scientific (physics based). The expression of the model, too, tends to be statistical (probabilistic) to account for uncertainties and randomness. This paper explores the impact of diagnostic and prognostic goals on modeling and reasoning system requirements, with the purpose of developing a common software framework that can be applied to a large class of systems. In particular, the role of failure-dependency modeling in the overall decision problem is discussed. The applicability of Qualtech Systems' modeling and diagnostic software tools to the presented framework for both the development and implementation of diagnostics and prognostics is assessed. Finally, a potential application concept for advancing the reliability of Navy shipboard Condition Based Maintenance (CBM) systems and processes is discussed.
The Westland set of empirical accelerometer helicopter data with seeded and labeled faults is analyzed with the aim of condition monitoring. The autoregressive (AR) coefficients from a simple linear model encapsulate a great deal of information in relatively few measurements; and it has also been found that augmentation of these by harmonic and other parameters can improve classification significantly. Several techniques have been explored, among them restricted Coulomb energy (RCE) networks, learning vector quantization (LVQ), Gaussian mixture classifiers and decision trees. A problem with these approaches, in common with many classification paradigms, is that augmentation of the feature dimension can degrade classification ability. Thus, we also introduce the Bayesian data reduction algorithm (BDRA), which imposes a Dirichlet prior on training data and is thus able to quantify probability of error in an exact manner, such that features may be discarded or coarsened appropriately.
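The AR-coefficient features can be illustrated with a least-squares fit of a second-order model: the pair of coefficients summarizes the signal's spectral shape in two numbers. The order (2) and the noise-free test signal are assumptions for this sketch; the paper's models may use higher orders.

```python
def ar2_coefficients(x):
    """Least-squares fit of an AR(2) model x[t] = a1*x[t-1] + a2*x[t-2] + e.
    The pair (a1, a2) serves as a compact spectral feature of the signal."""
    # normal equations for the 2x2 least-squares problem, solved by Cramer's rule
    s11 = sum(v * v for v in x[1:-1])
    s22 = sum(v * v for v in x[:-2])
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))
    b1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))
    b2 = sum(a * b for a, b in zip(x[2:], x[:-2]))
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (b2 * s11 - b1 * s12) / det
    return a1, a2
```

On data generated exactly by an AR(2) recursion the fit recovers the generating coefficients; on vibration frames the coefficients shift with fault condition, which is what makes them usable as classifier inputs.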
At Ford Motor Company, thrust bearings in drill motors are often damaged by metal chips. Because the vibration frequency is only a few hertz, it is very difficult to pick up the vibration signals with accelerometers. With the support of Ford and NASA, we propose using a piezo film as a sensor to pick up the slow vibrations of the bearing. A neural-network-based fault detection algorithm is then applied to differentiate normal bearings from faulty ones. The first step is a Fast Fourier Transform, which extracts the significant frequency components of the sensor signal. Principal Component Analysis is then used to further reduce the dimension of the frequency components by extracting their principal features. These features can then indicate the status of the bearing. Experimental results are very encouraging.
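The FFT-then-PCA feature extraction described above can be sketched as follows (a hypothetical numpy implementation; the function name, component count, and the SVD-based PCA are illustrative, not taken from the paper):

```python
import numpy as np

def spectral_features(signals, n_components=2):
    """FFT magnitude spectra followed by PCA.

    `signals` is an (n_records, n_points) array of sensor records; the
    returned features are the projections of the magnitude spectra onto
    their top principal components.
    """
    spectra = np.abs(np.fft.rfft(signals, axis=1))   # frequency content
    centered = spectra - spectra.mean(axis=0)        # remove the mean spectrum
    # PCA via SVD of the centered spectra: rows of vt are principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T            # low-dimensional features
```

On simulated records where a "faulty" class carries an extra low-frequency tone, the two classes separate cleanly along the first principal component, which is what makes the features usable as a bearing-status indicator.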
In recent years, a series of aviation component health projects have employed a Robust Laser Interferometer. These projects have included turbine engine seeded-fault testing at Pratt and Whitney, rotorcraft gearbox measurements in Sikorsky test cells, and rotorcraft gearbox and hanger bearing measurements in U.S. Navy test facilities such as those at Patuxent River, Maryland. Complementary investigations have also been undertaken.
The Morlet wavelet distribution is a time-frequency distribution obtained by convolving the wavelet with a vibration signal at various scales or frequencies. Morlet wavelet analysis, the wavelet being a Gaussian-windowed complex sine wave, has several subtle programming implications that both relate it to and differentiate it from the short-time Fourier transform. These are described, discussed, and tested on machinery vibration signals with positive results. Traditional scale, with its octave representation, is discarded in favor of equally spaced frequencies. A window width factor is tested to emphasize precision in either time or frequency. A variable-length exponential window is necessary as a function of frequency and the width factor. The analysis is coded efficiently in MATLAB using its `conv' algorithm, and results of applying it to machinery diagnostic vibration signals are presented.
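A minimal sketch of the scheme described above: a Gaussian-windowed complex exponential whose window length shrinks with frequency, evaluated at linearly spaced frequencies with a tunable width factor. The sketch is in Python rather than the paper's MATLAB, and the normalization and four-sigma truncation are illustrative choices:

```python
import numpy as np

def morlet_tfd(x, fs, freqs, width=6.0):
    """Morlet time-frequency map of signal x (sample rate fs) by direct
    convolution at each analysis frequency. `width` trades time
    precision against frequency precision."""
    tfd = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        sigma = width / (2 * np.pi * f)        # window std dev in seconds
        n = int(np.ceil(4 * sigma * fs))       # truncate window at 4 sigma
        t = np.arange(-n, n + 1) / fs
        # Gaussian-windowed complex sine wave at frequency f.
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        wavelet /= np.abs(wavelet).sum()       # unit-gain normalization
        tfd[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return tfd
```

Note how the window length `2*n + 1` falls as frequency rises, which is the variable-length-window behavior the abstract refers to; a larger `width` narrows the frequency response at the cost of time resolution.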
In this paper, we present a review of different real-time-capable algorithms to detect and isolate component failures in large-scale systems in the presence of inaccurate test results. A sequence of imperfect test results (each a row vector of 1's and 0's) is available to the algorithms. The problem is to recover the uncorrupted test result vector and match it to one of the rows in the test dictionary, which in turn isolates the faults. Recovering the uncorrupted test result vector requires the accuracy of each test, that is, its detection and false-alarm probabilities. Their true values are not known and therefore have to be estimated online. Other major aspects of this problem are its large-scale nature and the real-time requirement: test dictionaries of sizes up to 1000 x 1000 are to be handled, i.e., results from 1000 tests measuring the state of 1000 components. However, at any time only 10-20% of the test results are available. The objective then becomes real-time fault diagnosis using incomplete and inaccurate test results with online estimation of test accuracies. Since the test accuracies can vary with time, a mechanism is needed to update them after processing each test result vector. Using Qualtech's TEAMS-RT (system simulation and real-time diagnosis tool), we test the performance of 1) TEAMS-RT's built-in diagnosis algorithm, 2) Hamming-distance-based diagnosis, 3) maximum-likelihood-based diagnosis, and 4) hidden-Markov-model-based diagnosis.
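The Hamming-distance and maximum-likelihood matching steps described above can be sketched as follows (a toy numpy version; the fixed `pd` and `pf` values stand in for the online accuracy estimates, and the dictionary is illustrative):

```python
import numpy as np

def diagnose(result, dictionary, pd=0.9, pf=0.05):
    """Match a noisy 0/1 test-result vector to a fault-dictionary row.

    Returns the row indices chosen by (a) minimum Hamming distance and
    (b) maximum likelihood, where detection probability `pd` and
    false-alarm probability `pf` weight each test's agreement or
    disagreement with the candidate row.
    """
    result = np.asarray(result)
    dictionary = np.asarray(dictionary)
    hamming = (dictionary != result).sum(axis=1)
    # P(observed bit | expected bit): expected 1 observed 1 -> pd,
    # expected 1 observed 0 -> 1-pd, expected 0 observed 1 -> pf,
    # expected 0 observed 0 -> 1-pf.
    p_obs1 = np.where(dictionary == 1, pd, pf)
    p = np.where(result == 1, p_obs1, 1 - p_obs1)
    loglik = np.log(p).sum(axis=1)
    return int(hamming.argmin()), int(loglik.argmax())
```

With a single `pd`/`pf` pair the two criteria agree; the maximum-likelihood form becomes strictly more informative once each test carries its own (online-estimated, possibly time-varying) accuracies, since disagreements are then weighted unequally.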
While most research attention has been focused on fault detection and diagnosis, much less effort has been dedicated to `general' failure accommodation. Due to the inherent complexity of nonlinear systems, most model-based analytical-redundancy fault diagnosis and accommodation studies deal with linear systems subject to simple additive or multiplicative faults. This assumption has limited their effectiveness and usefulness in practical applications. In this work, the on-line fault accommodation control problem under catastrophic system failures is investigated. The main interest is in dealing with unanticipated system component failures in the most general formulation. Through discrete-time Lyapunov stability theory, the necessary and sufficient conditions to guarantee on-line system stability and performance under failures are derived, and a systematic procedure and technique for proper fault accommodation under unanticipated failures are developed. A complete architecture of fault diagnosis and accommodation is also presented by incorporating the developed intelligent fault-tolerant control scheme with a cost-effective fault detection scheme and a multiple-model-based failure diagnosis process, to efficiently handle false alarms and the accommodation of both anticipated and unanticipated failures in on-line situations.
In this paper we present and compare different fault diagnosis algorithms using state space models for nonlinear dynamic systems. Most fault diagnosis and isolation algorithms for dynamic systems that can be modeled by a set of state space equations have relied on the system being linear and the noise and disturbances being Gaussian. In such cases, optimal filtering ideas based on Kalman filtering are used for estimation, followed by a residual analysis, for which whiteness tests are typically carried out. Linearized approximations (e.g., extended Kalman filters) have been used for nonlinear dynamic systems. However, linearization techniques, being approximate, tend to suffer from poor detection or high false alarm rates. In this paper, we use the sequential Monte Carlo filtering approach, where the complete posterior distribution of the estimates is represented through samples or particles, as opposed to the mean and covariance of an approximated Gaussian distribution. The particle filter is combined with innovation-based fault detection techniques to develop a fault detection and isolation scheme. The advantage of particle filters is that they can handle any functional nonlinearity and system or measurement noise of any distribution. An improvement on using a single extended Kalman filter matched to a particular model is to use the Interacting Multiple Model estimator, which consists of a number of EKFs running in parallel. Such a multiple-model estimator can handle abrupt changes in the system dynamics, which is essential for fault diagnosis. Here, we compare the fault detection performance of these algorithms on different nonlinear systems.
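A minimal bootstrap particle filter of the kind described above, applied to a hypothetical scalar nonlinear model (the dynamics, measurement map, noise levels, and particle count are illustrative, not one of the systems compared in the paper). The returned innovation magnitudes are the residuals that an innovation-based detector would threshold:

```python
import numpy as np

def particle_filter_residuals(ys, n=500, q=0.1, r=0.1, seed=0):
    """Bootstrap particle filter for the toy model
        x_t = 0.9 x_{t-1} + w_t,   y_t = x_t + 0.2 x_t^3 + v_t,
    with w ~ N(0, q^2), v ~ N(0, r^2). Returns |y_t - E[h(x_t)]| per step.
    """
    rng = np.random.default_rng(seed)

    def h(x):
        return x + 0.2 * x**3                    # nonlinear measurement map

    parts = rng.normal(0.0, 1.0, n)              # initial particle cloud
    residuals = []
    for y in ys:
        parts = 0.9 * parts + rng.normal(0.0, q, n)  # propagate dynamics
        pred = h(parts)
        residuals.append(abs(y - pred.mean()))       # innovation magnitude
        w = np.exp(-0.5 * ((y - pred) / r) ** 2) + 1e-12  # floored likelihood
        w /= w.sum()
        parts = rng.choice(parts, size=n, p=w)       # multinomial resampling
    return np.array(residuals)
```

Because the posterior is carried as a particle cloud rather than a linearized Gaussian, no Jacobians are needed; a sustained jump in the residual sequence (e.g., after a sensor bias fault) is the detection signal.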
This paper presents a model-based approach to diagnosis of hybrid systems. We have developed a combined qualitative-quantitative diagnosis scheme that uses hybrid models of the system and a model of the supervisory controller. By applying the supervisory controller model to diagnostic analysis we significantly cut down on the complexity in tracking behaviors, and in generating and refining hypotheses across discrete mode changes in the system behavior. We present the algorithms for hybrid diagnosis: hypotheses generation by back propagation, and hypotheses refinement by forward propagation and parameter estimation. Example scenarios demonstrate the effectiveness of this approach.
The aim of this paper is to introduce a description of rotating machinery operation under variable conditions, based on the idea of a dynamic scene. Diagnostic testing of rotating machinery under variable operating conditions allows us to observe the machinery's reaction to different kinds of excitation; run-up and run-down are examples, during which the frequency range of excitation is very wide. The results of such testing are a rich source of information about the technical state of the observed machinery and about changes in that state. Interpretation of time-frequency analysis results is currently based on visual inspection. This paper deals with describing machinery operation in dynamic scene form, based on the results of time-frequency analysis of vibration signals recorded during machinery operation under variable conditions. The proposed description can serve as a basis for automatic identification of the technical state of the observed machinery.
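One way a time-frequency picture of a run-up might be reduced to a machine-readable form, as a step toward the automatic identification the abstract calls for, is to track the dominant frequency per analysis frame (a numpy sketch on a simulated run-up chirp; the signal, window, and hop sizes are illustrative and not taken from the paper):

```python
import numpy as np

fs = 1000
t = np.arange(0, 4.0, 1 / fs)
# Simulated run-up: instantaneous frequency rises from 10 Hz to 100 Hz.
x = np.sin(2 * np.pi * (10 * t + 11.25 * t**2))

# Plain STFT: Hann-windowed frames, magnitude spectrum per frame.
nperseg, hop = 256, 64
win = np.hanning(nperseg)
starts = range(0, len(x) - nperseg + 1, hop)
S = np.array([np.abs(np.fft.rfft(win * x[s:s + nperseg])) for s in starts])
freqs = np.fft.rfftfreq(nperseg, 1 / fs)

# Dominant frequency per frame: a one-dimensional trace of the "scene".
ridge = freqs[S.argmax(axis=1)]
```

The `ridge` trace rises monotonically with shaft speed; deviations of such traces from a healthy-machine reference are the kind of feature an automatic state-identification scheme could act on.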