As sensors are increasingly deployed in locations removed from mains power, and are increasingly expected to operate for
times that are long compared to battery lifetimes, we look to means for "harvesting" or "scavenging" energy from the
sensors' operating environments. Whereas many sensors are "parametric" - their interaction with the environment causes
a change in one or more of their electrical parameters - many others are true transducers - they perform their sensing
function by extracting energy from their environment. These kinds of sensors can thus serve - under suitable operating
conditions - both as measuring devices and as power supplies. In this paper we review this background, review the
fundamental restrictions on our ability to extract energy from the environment, enumerate and summarize sensing
principles that are promising candidates to double as power supplies, and provide several examples that span the range
from already off-the-shelf at low cost to in laboratory prototype stage to sufficiently speculative that there might be
reasonable doubt regarding whether they can actually work even in principle. Possibilities examined across this spectrum
include thermal noise, ambient RF scavenging (briefly), thermoelectricity, piezoelectricity, pyroelectricity, and
electrochemistry, especially including electrochemistry facilitated by microorganisms.
Medical ultrasound typically deals with the interior of the patient, with the exterior left to the original medical
imaging modality, direct human vision. For the human operator scanning the patient, the view of the external anatomy is
essential for correctly locating the ultrasound probe on the body and making sense of the resulting ultrasound images in
their proper anatomical context. The operator, after all, is not expected to perform the scan with his eyes shut. Over the
past decade, our laboratory has developed a method of fusing these two information streams in the mind of the operator,
the Sonic Flashlight, which uses a half-silvered mirror and a miniature display mounted on an ultrasound probe to produce
a virtual image within the patient at its proper location. We are now interested in developing a similar data fusion
approach within the ultrasound machine itself, by, in effect, giving vision to the transducer. Our embodiment of this
concept consists of an ultrasound probe with two small video cameras mounted on it, with software capable of locating
the surface of an ultrasound phantom using stereo disparity between the two video images. We report its first successful
operation, demonstrating a 3D rendering of the phantom's surface with the ultrasound data superimposed at its correct
relative location. Eventually, automated analysis of these registered data sets may permit the scanner and its associated
computational apparatus to interpret the ultrasound data within its anatomical context, much as the human operator does
today.
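The surface-location step lends itself to a brief illustration. The following is a minimal sketch, not the authors' software, of how a phantom surface might be recovered from the two probe-mounted cameras using stereo disparity; the file names, the rectification matrix Q, and the matcher parameters are placeholder assumptions.

```python
# Illustrative sketch (not the authors' implementation): recovering a surface
# from two probe-mounted cameras by stereo disparity, using OpenCV.
# Assumes the camera pair has already been calibrated and rectified; Q is the
# 4x4 reprojection matrix from cv2.stereoRectify.  File names are placeholders.
import cv2
import numpy as np

left = cv2.imread("phantom_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("phantom_right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching gives a dense disparity map of the phantom surface.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

# Reproject disparities to 3D camera coordinates.
Q = np.load("rectification_Q.npy")
surface_xyz = cv2.reprojectImageTo3D(disparity, Q)  # HxWx3 points on the phantom surface

# Points with valid disparity form the rendered surface onto which the
# (separately registered) ultrasound slice can be superimposed.
valid = disparity > 0
surface_points = surface_xyz[valid]
```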
We have developed a new image-based guidance system for microsurgery using optical coherence tomography
(OCT), which presents a virtual image in its correct location inside the scanned tissue. Applications include surgery of
the cornea, skin, and other surfaces below which shallow targets may advantageously be displayed for the naked eye or
low-power magnification by a surgical microscope or loupes (magnifying eyewear). OCT provides real-time, high-resolution
(3 micron) images at video rates within an axial range of two or more millimeters in soft tissue, and is therefore
suitable for guidance to various shallow targets such as Schlemm's canal in the eye (for treating glaucoma) or skin
tumors. A series of prototypes of the "OCT penlight" have produced virtual images with sufficient resolution and
intensity to be useful under magnification, while the geometrical arrangement between the OCT scanner and display
optics (including a half-silvered mirror) permits sufficient surgical access. The two prototypes constructed thus far have
used, respectively, a miniature organic light emitting diode (OLED) display and a reflective liquid crystal on silicon
(LCoS) display. The OLED has the advantage of relative simplicity, satisfactory resolution (15 micron), and color
capability, whereas the LCoS can produce an image with much higher intensity and superior resolution (12 micron),
although it is monochromatic and more complicated optically. Intensity is a crucial limiting factor, since light flux is
greatly diminished with increasing magnification, thus favoring the LCoS as the more practical system.
The design of the first Real-Time-Tomographic-Holography (RTTH) optical system for augmented-reality applications
is presented. RTTH places a viewpoint-independent real-time (RT) virtual image (VI) of an object
into its actual location, enabling natural hand-eye coordination to guide invasive procedures, without requiring
tracking or a head-mounted device. The VI is viewed through a narrow-band Holographic Optical Element
(HOE) with built-in power that generates the largest possible near-field, in-situ VI from a small display chip
without noticeable parallax error or obscuring direct view of the physical world. Rigidly fixed upon a medical-ultrasound
probe, RTTH could show the scan in its actual location inside the patient, because the VI would
move with the probe. We designed the image source along with the system optics, allowing us to ignore both
planar geometric distortions and field curvature, compensated respectively by RT pre-processing software
and by attaching a custom-surfaced fiber-optic faceplate (FOFP) to our image source. Focus in our fast, non-axial
system was achieved by placing correcting lenses near the FOFP and custom-optically-fabricating our volume-phase
HOE using a recording beam that was specially shaped by extra lenses. By simultaneously simulating and
optimizing the system's playback performance across variations in both the total playback and HOE-recording
optical systems, we derived and built a design that projects a 104x112 mm planar VI 1 m from the HOE using
a laser-illuminated 19x16 mm LCD+FOFP image-source. The VI appeared fixed in space and well focused.
Viewpoint-induced location errors were <3 mm, and unexpected first-order astigmatism produced 3 cm (3% of
1 m) ambiguity in depth, typically unnoticed by human observers.
A stereoscopic display based on the viewing of two eye-multiplexed co-planar images correlated by perspective disparity exhibits a three-dimensional lattice of finite-sized volume elements -- virtual voxels -- and corresponding depth planes whose number, global and individual shapes, and spatial arrangement all depend on the number, shape, and arrangement of the pixels in the underlying planar display and on the viewer's interocular distance and viewing geometry relative to the display. This paper illustrates the origin and derives the quantitative geometry of the virtual voxel lattice, and relates these to the quality of the display likely to be perceived and reported by a typical viewer.
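The depth quantization underlying the virtual-voxel lattice can be sketched with a standard similar-triangles relation (symbols chosen here, not taken from the paper): for a viewer with interocular distance $e$ at distance $D$ from the display, a screen disparity $d$ between corresponding left- and right-eye pixels is perceived at depth

\[
z \;=\; \frac{e\,D}{e - d},
\]

so with pixel pitch $p$ the admissible disparities $d = n\,p$ (integer $n$) place the depth planes at $z_n = eD/(e - np)$; the finite, nonuniform spacing of these planes is one origin of the voxel lattice described above.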
To allow multiple viewers to see the correct perspective, and to provide a single viewer with motion parallax cues during head movement, more than two views are needed. Since it is prohibitive to acquire, process, and transmit a continuum of views, it is preferable to acquire only a minimal set of views and to generate intermediate images using estimated disparities. To obtain high-quality generated images, we first propose a method for generating the intermediate images using multi-resolution, irregular quadtree decomposition. The irregular quadtree decomposition is aligned with object boundaries, which are the disparity discontinuities: the horizontal and vertical dividing locations of each block are computed by finding the peaks of the absolute values of a high-pass filter applied to the block's row and column averages. Second, regions of occlusion are identified by similarity comparisons among the matched block alternatives, and are then filled with pixels from the left or right image according to the principles we propose. Finally, images at arbitrary viewpoints are generated, yielding a 31.1 dB PSNR at the midpoint between the two viewpoints.
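A minimal sketch, under our own assumptions about the details (the choice of high-pass filter and the tie-breaking are ours), of the block-splitting step described above: the dividing location along each axis is taken at the peak of the absolute high-pass-filtered row or column average.

```python
# Illustrative sketch of choosing quadtree dividing locations from row/column
# averages (our reading of the step described above, not the authors' code).
import numpy as np

def dividing_location(block: np.ndarray, axis: int) -> int:
    """Return the index along `axis` at which to split the block.

    The block is averaged across the other axis, the average is high-pass
    filtered (here a simple first difference), and the split is placed at
    the peak of the absolute filtered response, i.e. the strongest edge.
    """
    profile = block.mean(axis=1 - axis)      # row average (axis=0) or column average (axis=1)
    highpass = np.abs(np.diff(profile))      # crude high-pass filter: first difference
    return int(np.argmax(highpass)) + 1      # split just after the strongest transition

block = np.random.rand(32, 32)
row_split = dividing_location(block, axis=0)  # horizontal dividing location
col_split = dividing_location(block, axis=1)  # vertical dividing location
```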
Intelligent vehicles are beginning to appear on the market, but so far their sensing and warning functions only work on the open road. Functions such as runoff-road warning or adaptive cruise control are designed for the uncluttered environments of open highways. We are working on the much more difficult problem of sensing and driver interfaces for driving in urban areas. We need to sense cars and pedestrians and curbs and fire plugs and bicycles and lamp posts; we need to predict the paths of our own vehicle and of other moving objects; and we need to decide when to issue alerts or warnings to both the driver of our own vehicle and (potentially) to nearby pedestrians. No single sensor is currently able to detect and track all relevant objects. We are working with radar, ladar, stereo vision, and a novel light-stripe range sensor. We have installed a subset of these sensors on a city bus, driving through the streets of Pittsburgh on its normal runs. We are using different kinds of data fusion for different subsets of sensors, plus a coordinating framework for mapping objects at an abstract level.
Overlaid stereo image pairs, viewed without stereo demultiplexing optics, are not always perceived as a ghosted image: if image generation and display parameters are adjusted so that disparities are small and limited to foreground and background regions, then the perception is rather more of blurring than of doubling. Since this blurring seems natural, comparable to the blurring due to depth-of-focus, it is unobjectionable. In contrast, the perception of ghosting seems always to be objectionable. Now consider the possibility that there is a perceptual regime in which disparity is small enough that perception of crosstalk is as blurring rather than as ghosting, but it is large enough to stimulate depth perception. If such a perceptual region exists, then it might be exploited to relax the strict 'crosstalk minimization' requirement normally imposed in the engineering of stereoscopic displays. This paper reports experiments that indicate that such a perceptual region does actually exist. We suggest a stereoscopic display engineering design concept that illustrates how this observation might be exploited to create a zoneless autostereoscopic display. By way of introduction and motivation, we begin from the observation that, just as color can be shouted in primary tones or whispered in soft pastel hues, so stereo can be shoved in your face or raised ever so gently off the screen plane. We review the problems with 'in your face stereo,' we demonstrate that 'just enough reality' is both gentle and effective in achieving stereoscopy's fundamental goal: resolving the front-back ambiguity inherent in 2D projections, and we show how this perspective leads naturally to the relaxation of the requirement for crosstalk reduction to be the main engineering constraint on the design of stereoscopic display systems.
We demonstrate that the binocular perspective disparity generated by an interocular separation that is only a few percent of the nominal 65 mm human interocular separation is still enough to stimulate depth perception. This perception, which we call microstereopsis, has a 'kinder gentler' character than the stark and stressful stimulus presented by geometrically correct virtual reality displays. Microstereopsis stimulates 'just enough reality:' enough to resolve the depth ambiguity in flat images, but not so much reality that it hurts. We observe that whereas crosstalk between left and right image channels is normally perceived as ghosting, with microstereopsis it is perceived as blur in the foreground and background. Since ghosting is objectionable, whereas blur that looks like depth-of-focus is not objectionable, this relaxes the requirement for a high contrast ratio between on and off states of the stereo view multiplexer. This relaxation in turn suggests possibilities for zoneless autostereoscopic displays. We propose a realization based on an electronically toggled louvre filter using suspended particle display technology.
Not only binocular perspective disparity, but also many secondary binocular and monocular sensory phenomena, contribute to the human sensation of depth. Binocular perspective disparity is notable as the strongest depth perception factor. However, means for creating it artificially from flat image pairs are notorious for inducing physical and mental stresses, e.g., 'virtual reality sickness'. Aiming to deliver a less stressful 'kinder gentler stereo (KGS)', we systematically examine the secondary phenomena and their synergistic combination with each other and with binocular perspective disparity. By KGS we mean a stereo capture, rendering, and display paradigm without cue conflicts, without eyewear, without viewing zones, with negligible 'lock-in' time to perceive the image in depth, and with a normal appearance for stereo-deficient viewers. To achieve KGS we employ optical and digital image processing steps that introduce distortions contrary to strict 'geometrical correctness' of binocular perspective but which nevertheless result in increased stereoscopic viewing comfort. We particularly exploit the lower limits of interocular separation, showing that unexpectedly small disparities stimulate accurate and pleasant depth sensations. Under these circumstances crosstalk is perceived as depth-of-focus rather than as ghosting. This suggests the possibility of radically new approaches to stereoview multiplexing that enable zoneless autostereoscopic display.
Parallax-barrier panoramagrams (PPs) can present high-quality autostereoscopic images viewable from different perspectives. The limiting factor in constructing PP computer displays is the display resolution. First, we suggest a new PP display based on time multiplexing in addition to the usual space multiplexing; the barriers move horizontally in front of the display plane. The time multiplexing increases the horizontal resolution. In addition, it permits us to use wider barriers than are acceptable for static displays. We then analyze these displays, showing that wide-barrier PPs have advantages relating to depth-resolution and smoothness, and we present a novel algorithm for rendering the images on a computer.
We describe a new low-level scheme to achieve high definition 3D-stereoscopy within the bandwidth of the monoscopic HDTV infrastructure. Our method uses a studio quality monoscopic high resolution color camera to generate a transmitted `main stream' view, and a flanking 3D-stereoscopic pair of low cost, low resolution monochrome camera `outriggers' to generate a depth map of the scene. The depth map is deeply compressed and transmitted as a low bandwidth `auxiliary stream'. The two streams are recombined at the receiver to generate a 3D-stereoscopic pair of high resolution color views from the perspectives of the original outriggers. Alternately, views from two arbitrary perspectives between (and, to a limited extent, beyond) the low resolution monoscopic camera positions can be synthesized to accommodate individual viewer preferences. We describe our algorithms, and the design and outcome of initial experiments. The experiments begin with three NTSC color images, degrade the outer pair to low resolution monochrome, and compare the results of coding and reconstruction to the originals.
Compression and interpolation each require, given part of an image, or part of a collection or stream of images, being able to predict other parts. Compression is achieved by transmitting part of the imagery along with instructions for predicting the rest of it; of course, the instructions are usually much shorter than the unsent data. Interpolation is just a matter of predicting part of the way between two extreme images; however, whereas in compression the original image is known at the encoder, and thus the residual can be calculated, compressed, and transmitted, in interpolation the actual intermediate image is not known, so it is not possible to improve the final image quality by adding back the residual image. Practical 3D-video compression methods typically use a system with four modules: (1) coding one of the streams (the main stream) using a conventional method (e.g., MPEG), (2) calculating the disparity map(s) between corresponding points in the main stream and the auxiliary stream(s), (3) coding the disparity maps, and (4) coding the residuals. It is natural and usually advantageous to integrate motion compensation with the disparity calculation and coding. The efficient coding and transmission of the residuals is usually the only practical way to handle occlusions, and the ultimate performance of beginning-to-end systems is usually dominated by the cost of this coding. In this paper we summarize the background principles, explain the innovative features of our implementation steps, and provide quantitative measures of component and system performance.
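The four-module structure described above can be sketched schematically as follows. This is our own schematic, not the paper's implementation; `codec` and `disparity_estimator` are placeholders, and only the dataflow (main stream, disparity maps, residuals) is meant to be illustrative.

```python
# Structural sketch (our own schematic) of a four-module stereo/multi-view coder.
import numpy as np

def encode_stereo_frame(main: np.ndarray, aux: np.ndarray, codec, disparity_estimator):
    # (1) Code the main stream with a conventional (e.g. MPEG-like) codec.
    main_bits = codec.encode(main)
    main_decoded = codec.decode(main_bits)       # predict from what the receiver will see

    # (2) Calculate the disparity map between main and auxiliary views.
    disparity = disparity_estimator(main_decoded, aux)

    # (3) Code the disparity map (cheap relative to the image data).
    disparity_bits = codec.encode(disparity)

    # (4) Code the residual: what disparity compensation fails to predict,
    #     notably occlusions.  This term usually dominates total cost.
    predicted_aux = disparity_compensate(main_decoded, disparity)
    residual_bits = codec.encode(aux - predicted_aux)

    return main_bits, disparity_bits, residual_bits

def disparity_compensate(image: np.ndarray, disparity: np.ndarray) -> np.ndarray:
    """Shift each pixel of `image` horizontally by its disparity (nearest-sample warp)."""
    h, w = image.shape
    cols = np.clip(np.arange(w)[None, :] + np.round(disparity).astype(int), 0, w - 1)
    return image[np.arange(h)[:, None], cols]
```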
Visual inspection is, by far, the most widely used method in aircraft surface inspection. We are currently developing a prototype remote visual inspection system, designed to facilitate testing the hypothesized feasibility and advantages of remote visual inspection of aircraft surfaces. In this paper, we describe several experiments with image understanding algorithms that were developed to aid remote visual inspection, in enhancing and recognizing surface cracks and corrosion from the live imagery of an aircraft surface. Also described in this paper are the supporting mobile robot platform that delivers the live imagery, and the inspection console through which the inspector accesses the imagery for remote inspection. We discuss preliminary results of the image understanding algorithms and speculate on their future use in aircraft surface inspection.
We describe a library of image enhancement and understanding algorithms developed to enhance and recognize surface defects from remote live imagery of an aircraft surface. Also described are the supporting mobile robot platform that generates the remote stereoscopic imagery and the inspection console containing a graphical user interface, through which the inspector accesses the live imagery for remote inspection. We will discuss initial results of the remote imaging process and the image processing library, and speculate on their future application in aircraft inspection.
Eye strain is often experienced when viewing a stereoscopic image pair on a flat display device (e.g., a computer monitor). Two violations that contribute to this eye strain are: (1) the accommodation/convergence breakdown and (2) the conflict between interposition and disparity depth cues. We describe a simple algorithm that reduces eye strain through horizontal image translation and corresponding image cropping, based on a statistical description of the estimated disparity within a stereoscopic image pair. The desired amount of translation is based on the given stereoscopic image pair, and, therefore, requires no user intervention. In this paper, we first develop a statistical model of the estimated disparity that incorporates the possibility of erroneous estimates. An estimate of the actual disparity range is obtained by thresholding the disparity histogram to avoid the contribution of false disparity values. Based on the estimated disparity range, the image pair is translated to force all points to lie on, or behind, the screen surface. This algorithm has been applied to diverse real stereoscopic images and sequences. Stereoscopic image pairs, which were often characterized as producing eye strain and confusion, produced comfortable stereoscopy after the automated translation.
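A minimal sketch of the translation step as we read it; the histogram binning, the 1% significance threshold, and the disparity sign convention (positive = behind the screen) are our own placeholder assumptions, not the paper's.

```python
# Illustrative sketch (our assumptions, not the authors' code) of shifting a
# stereo pair so that the estimated nearest point lands on the screen plane.
import numpy as np

def comfortable_translation(disparity: np.ndarray, hist_threshold: float = 0.01) -> int:
    """Estimate a robust minimum disparity by thresholding the disparity histogram
    (ignoring sparse, likely erroneous bins), and return the horizontal shift, in
    pixels, that maps that minimum to zero disparity (i.e. onto the screen)."""
    hist, edges = np.histogram(disparity, bins=256)
    significant = hist > hist_threshold * disparity.size   # drop bins holding <1% of pixels
    robust_min = edges[np.argmax(significant)]              # first significant bin edge
    return int(np.round(-robust_min))                        # shift that zeros the nearest disparity

def translate_and_crop(left: np.ndarray, right: np.ndarray, shift: int):
    """Apply the horizontal shift (adding `shift` to every disparity) and crop
    both images to the region where they still overlap."""
    if shift >= 0:
        return left[:, shift:], right[:, : right.shape[1] - shift]
    return left[:, :shift], right[:, -shift:]
```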
We address the issue of creating stereo imagery on a screen that, when viewed by naked human eyes, will be indistinguishable from the original scene as viewed through a visual accessory. In doing so we investigate effects that appear because real optical systems are not ideal. Namely, we consider optical systems that are not free from geometric aberrations. We present an analysis and confirming computational experiments of the simulations of stereoscopic optical accessories in the presence of aberrations. We describe an accessory in the framework of the Seidel-Schwarzschild theory; that is, we represent its deviation from an ideal (Gaussian) device by means of five constants. Correspondingly, we are able to simulate five fundamental types of monochromatic geometric aberrations: spherical aberration, coma, astigmatism, curvature-of-field, and distortion (barrel and pincushion). We derive and illustrate how these aberrations in stereoscopic optical systems can lead to anomalous perception of depth, e.g., the misperception of planar surfaces as curved or even twisted, as well as to circumstances under which stereoscopic perception is destroyed. The analysis and numerical simulations also allow us to simulate the related but not identical effects that occur when lenses with aberrations are used in stereoscopic cameras.
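In the Seidel description invoked above, the accessory's departure from the ideal Gaussian device is carried by five constants. Writing the wavefront aberration in terms of normalized pupil coordinates $(\rho, \theta)$ and normalized field height $h$, one common form (our notation, not the paper's) is

\[
W(h, \rho, \theta) \;=\; A_{\mathrm{sph}}\,\rho^{4} \;+\; A_{\mathrm{coma}}\,h\,\rho^{3}\cos\theta \;+\; A_{\mathrm{ast}}\,h^{2}\rho^{2}\cos^{2}\theta \;+\; A_{\mathrm{fc}}\,h^{2}\rho^{2} \;+\; A_{\mathrm{dist}}\,h^{3}\rho\cos\theta ,
\]

with the five coefficients corresponding, in order, to spherical aberration, coma, astigmatism, curvature of field, and distortion.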
A binocular disparity based segmentation scheme to compactly represent one image of a stereoscopic image pair given the other image was proposed earlier by us. That scheme adapted the excess bitcount, needed to code the additional image, to the binocular disparity detail present in the image pair. This paper addresses the issue of extending such a segmentation in the temporal dimension to achieve efficient stereoscopic sequence compression. The easiest conceivable temporal extension would be to code one of the sequences using an MPEG-type scheme while the frames of the other stream are coded based on the segmentation. However such independent compression of one of the streams fails to take advantage of the segmentation or the additional disparity information available. To achieve better compression by exploiting this additional information, we propose the following scheme. Each frame in one of the streams is segmented based on disparity. An MPEG-type frame structure is used for motion compensated prediction of the segments in this segmented stream. The corresponding segments in the other stream are encoded by reversing the disparity-map obtained during the segmentation. Areas without correspondence in this stream, arising from binocular occlusions and disparity estimation errors, are filled in using a disparity-map based predictive error concealment method. Over a test set of several different stereoscopic image sequences, high perceived stereoscopic image qualities were achieved at an excess bandwidth that is roughly 40% above that of a highly compressed monoscopic sequence. Stereo perception can be achieved at significantly smaller excess bandwidths, albeit with a perceivable loss in the image quality.
In this paper, we present a new algorithm that adaptively selects the best possible reference frame for the predictive coding of generalized, or multi-view, video signals, based on estimated prediction similarity with the desired frame. We define similarity between two frames as the absence of occlusion, and we estimate this quantity from the variance of composite displacement vector maps. The composite maps are obtained without requiring the computationally intensive process of motion estimation for each candidate reference frame. We provide prediction and compression performance results for generalized video signals using both this scheme and schemes where the reference frames were heuristically pre- selected. When the predicted frames were used in a modified MPEG encoder simulation, the signal compressed using the adaptively selected reference frames required, on average, more than 10% fewer bits to encode than the non-adaptive techniques; for individual frames, the reduction in bits was sometimes more than 80%. These gains were obtained with an acceptable computational increase and an inconsequential bit-count overhead.
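A minimal sketch of the selection rule as we understand it. The per-pair displacement maps, their composition by a simple per-pixel sum (standing in for true map composition), and the data layout are our own placeholder assumptions; the point is that no new motion estimation is run per candidate.

```python
# Illustrative sketch (our assumptions, not the paper's code): pick the candidate
# reference frame whose composite displacement map has the smallest variance, on
# the premise that low variance indicates little occlusion.
import numpy as np

def compose(maps):
    """Approximate the displacement from a candidate reference to the target by
    summing a chain of already-available per-pair displacement maps (each HxWx2)."""
    return np.sum(np.stack(maps, axis=0), axis=0)

def select_reference(candidate_chains):
    """candidate_chains: dict mapping a candidate frame id to the list of
    per-pair displacement maps linking it to the frame to be predicted."""
    scores = {}
    for frame_id, chain in candidate_chains.items():
        composite = compose(chain)
        # Variance pooled over both displacement components and all pixels.
        scores[frame_id] = float(np.var(composite))
    return min(scores, key=scores.get)   # lowest variance -> assumed least occlusion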
Stereoscopic image sequence transmission over existing monocular digital transmission channels, without seriously affecting the quality of one of the image streams, requires a very low bit-rate coding of the additional stream. Fixed block-size based disparity estimation schemes cannot achieve such low bit-rates without causing severe edge artifacts. Also, textureless regions lead to spurious matches which hampers the efficient coding of block disparities. In this paper, we propose a novel disparity-based segmentation approach, to achieve an efficient partition of the image into regions of more or less fixed disparity. The partitions are edge based, in order to minimize the edge artifacts after disparity compensation. The scheme leads to disparity discontinuity preserving, yet smoother and more accurate disparity fields than fixed block-size based schemes. The smoothness and the reduced number of block disparities lead to efficient coding of one image of a stereo pair given the other. The segmentation is achieved by performing a quadtree decomposition, with the disparity compensated error as the splitting criterion. The multiresolutional recursive decomposition offers a computationally efficient and non-iterative means of improving the disparity estimates while preserving the disparity discontinuities. The segmented regions can be tracked temporally to achieve very high compression ratios on a stereoscopic image stream.
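A compact sketch of the quadtree splitting rule described above, under our own assumptions about the details (mean absolute disparity-compensated error as the splitting criterion, a fixed error threshold, and exhaustive horizontal search):

```python
# Illustrative sketch (our assumptions): split a block whenever a single
# disparity cannot compensate it well enough, recursing quadtree-fashion.
import numpy as np

def best_disparity(left, right, y, x, size, search=32):
    """Return (disparity, mean-abs-error) of the best horizontal shift for the block."""
    block = left[y:y + size, x:x + size]
    errors = []
    for d in range(-search, search + 1):
        x0 = x + d
        if x0 < 0 or x0 + size > right.shape[1]:
            errors.append(np.inf)
            continue
        errors.append(np.mean(np.abs(block - right[y:y + size, x0:x0 + size])))
    d_best = int(np.argmin(errors)) - search
    return d_best, errors[d_best + search]

def quadtree_segment(left, right, y, x, size, err_threshold=8.0, min_size=4):
    """Return a list of (y, x, size, disparity) leaves covering the block."""
    d, err = best_disparity(left, right, y, x, size)
    if err <= err_threshold or size <= min_size:
        return [(y, x, size, d)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_segment(left, right, y + dy, x + dx, half,
                                       err_threshold, min_size)
    return leaves
```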
We address the issue of creating imagery on a screen that, when viewed by naked human eyes, will be indistinguishable from the original scene as viewed through a visual accessory. Visual accessories of interest include, for example, binoculars, stereomicroscopes, and binocular periscopes. It is the nature of these magnifying optical devices that the transverse (normal) magnification and longitudinal (depth-wise) magnification are different. That is why an object viewed through magnifying optical devices looks different from the same object viewed with the naked eye from a closer distance--the object looks `squashed' (foreshortened) through telescopic instruments and the opposite through microscopic instruments. We rigorously describe the quantitative relationships that must exist when presenting a scene on a screen that stereoscopically simulates viewing through these visual accessories.
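The mismatch referred to above can be made concrete with the standard paraxial relation (object and image in the same medium; notation ours): if the transverse magnification is $m_T$, the longitudinal magnification is

\[
m_L \;=\; \frac{dz'}{dz} \;=\; m_T^{\,2},
\]

so the depth-to-width aspect ratio of the image differs from that of the object by the factor $m_L/m_T = m_T$ whenever $m_T \neq 1$, which is one way to see why magnified views appear depth-compressed or depth-stretched rather than simply scaled.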
Binocular digital imaging is a rapidly developing branch of digital imaging. Any such system must have some means that allows each eye to see only the image intended for it. We describe a time-division multiplexing technique that we have developed for Silicon Graphics Inc. (SGI™) workstations. We utilize the `double buffering' hardware feature of the SGI™ graphics system for binocular image rendering. Our technique allows for multiple, re-sizable, full-resolution stereoscopic and monoscopic windows to be displayed simultaneously. We describe corresponding software developed to exploit this hardware. This software contains user-controllable options for specifying the most comfortable zero-disparity plane and effective interocular separation. Several perceptual experiments indicate that most viewers perceive 3D comfortably with this system. We also discuss speed and architecture requirements of the graphics and processor hardware to provide flickerless stereoscopic animation and video with our technique.
All known technologies for displaying 3D-stereoscopic images are more or less incompatible with the X Window System. Applications that seek to be portable must support the 3D-display paradigms of multiple hardware implementations of 3D-stereoscopy. We have succeeded in modifying the functionality of X to construct generic tools for displaying 3D-stereoscopic imagery. Our approach allows for experimentation with visualization techniques and techniques for interacting with these synthetic worlds. Our methodology inherits the extensibility and portability of X. We have demonstrated its applicability in two display hardware paradigms that are specifically discussed.
We exploit the correlations between 3D-stereoscopic left-right image pairs to achieve high compression factors for image frame storage and image stream transmission. In particular, in image stream transmission, we can find extremely high correlations between left-right frames offset in time such that perspective-induced disparity between viewpoints and motion-induced parallax from a single viewpoint are nearly identical; we coin the term `worldline correlation' for this condition. We test these ideas in two implementations, straightforward computing of blockwise cross-correlations, and multiresolution hierarchical matching using a wavelet-based compression method. We find that good 3D-stereoscopic imagery can be had for only a few percent more storage space or transmission bandwidth than is required for the corresponding flat imagery.
We rigorously present the geometric issues related to binocular imaging. We identify the minimum number and most fundamental conceptual set of parameters needed to define 3D-stereoscopic camera and display systems; the fundamental parameter that is needed to specify a 3D-stereoscopic system but not a monocular system is the pupillary distance. We analyze the constraints that are imposed on the values of the parameters by the requirement that the imagery be geometrically indistinguishable from the reality that would be perceived by the `naked' human visual apparatus. We relate our approach to those employed by several well known textbooks and graphics engines.
Under the FAA Aging Aircraft Research Program we are developing robots to deploy conventional and, later, new-concept NDI sensors for commercial aircraft skin inspection. Our prototype robot, the Automated NonDestructive Inspector (ANDI), holds to the aircraft skin with vacuum assisted suction cups, scans an eddy current sensor, and translates across the aircraft skin via linear actuators. Color CCD video cameras are used to align the robot with a series of rivets we wish to inspect using NDI inspection sensors. In a previous paper we provided a background scenario and described two different solutions to the alignment problem: a model-based system built around edge detection and a trainable neural network system. In this paper, we revisit the background and previous research and detail the first steps taken towards a method that will combine the neural and the model based systems: a neural edge detector.
We derive theoretically and demonstrate experimentally an approach to range-from-focus with an important improvement
over all previous methods. Previous methods rely on subjective measures of sharpness to focus a selected locale of the
image. Our method uses measured physical features of the optical signal to generate an objective focus-error distance map.
To compute range-from-focus-error distance it is not necessary to focus any part of the image: range is calculated directly
from the lens formula by substituting the difference between the lens-to-sensor distance and the focus-error distance for the
usual lens-to-image distance. Our method senses focus-error distance in parallel for all locales of the image, thus providing
a complete range image. The method is based on our recognition that when an image sensor is driven in longitudinal
oscillation ("dithered") the Fourier amplitude of the first harmonic component of the signal is proportional to the first power
of the ratio of dither amplitude to focus-error distance, whereas the Fourier amplitude of the second harmonic component is
proportional to the square of this ratio. The ratio of the first harmonic (sin ωt) amplitude A1 to the second harmonic (cos 2ωt)
amplitude B2 is thus a constant (-4) multiple of the ratio of the focus-error distance to the dither amplitude. The
focus-error distance measurement via the ratio of the first-to-second harmonic amplitudes is extremely robust in the sense
that the scene's gray level structure, the spatial and temporal structure of the illumination, and technical noise sources (most
of which affect the Fourier amplitudes multiplicatively) all appear identically in both amplitudes, thus cancelling in the
ratio. Extracting the two Fourier amplitudes and taking their ratio could be accomplished, pixel-by-pixel, by some
ambitious but not outrageous analog computing circuitry that we describe. We derive the method for a point scene model,
and we demonstrate the method with apparatus that instantiates this model.
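In symbols (ours, chosen to match the description above): with lens focal length $f$, lens-to-sensor distance $s$, focus-error distance $\delta$, and dither amplitude $a$, the measured ratio of harmonic amplitudes gives

\[
\frac{A_1}{B_2} \;=\; -4\,\frac{\delta}{a}
\quad\Longrightarrow\quad
\delta \;=\; -\frac{a}{4}\,\frac{A_1}{B_2},
\]

and substituting $s - \delta$ for the usual lens-to-image distance in the lens formula yields the range directly:

\[
\frac{1}{z} \;=\; \frac{1}{f} \;-\; \frac{1}{\,s - \delta\,}.
\]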
Under the FAA Aging Aircraft Research program (grant # G03 19014) we are developing robots to deploy conventional and, later, new-concept NDI sensors for commercial aircraft skin inspection. Our prototype robot, the Automated NonDestructive Inspector (ANDI), holds to the aircraft skin with vacuum assisted suction cups, scans an eddy current sensor, and translates across the aircraft skin via linear actuators. Color CCD video cameras will be used to align the robot with a series of rivets we wish to inspect in a linear scan using NDI inspection sensors. In this paper we provide a background scenario and describe two different solutions to the alignment problem: a model-based system built around edge detection and a trainable neural network system.
An important part of power line protection system maintenance is the retrospective analysis of fault data to verify that all
elements of the protection system were set properly and operated as they should have. In this paper, we describe an
automated approach to detecting anomalies using data from microprocessor-based digital protective relays. At present, the
systems are specific to the ABB Relay Division MDAR relay family. However, the techniques used are generalizable to
other types and brands of digital relay.