This paper discusses parallel computation algorithms and architectures for real-time signal processing, with emphasis on progress toward the hardware realization of a library of numerical linear algebra functions. The objective is to utilize VLSI/VHSIC technology in parallel architectures to provide a real-time equivalent of the LINPACK/EISPACK capabilities.
In building a proof of concept model of a systolic processor, several design issues are resolved. Flexibility is achieved through a hierarchy of software which resides on a host computer and through extensive interface and control hardware. Buffer memories with programmable address generators are provided in the interface. The control system is general enough to support command chaining and loops. Each cell of the systolic array is equipped with its own memory which allows a single cell design to perform a number of algorithms. As a result of designing for flexibility, the system can accommodate a wide variety of algorithms, data representations, and problem dimensions. A useful system computation rate of 200 million operations per second (MOPS) is achieved with a peak rate of 350 MOPS.
Given an n x p matrix X with p < n, matrix triangularization, or triangularization for short, is to determine an n x n nonsingular matrix M such that MX = [R; 0], where R is p x p upper triangular and 0 is an (n-p) x p zero block, and furthermore to compute the entries in R. By triangularization, many matrix problems are reduced to the simpler problem of solving triangular linear systems (see, for example, Stewart). When X is a square matrix, triangularization is the major step in almost all direct methods for solving general linear systems. When M is restricted to be an orthogonal matrix Q, triangularization is also the key step in computing least squares solutions by the QR decomposition, and in computing eigenvalues by the QR algorithm. Triangularization is computationally expensive, however. Algorithms for performing it typically require n^3 operations on general n x n matrices. As a result, triangularization has become a bottleneck in some real-time applications [11]. This paper sketches unified concepts of using systolic arrays to perform real-time triangularization for both general and band matrices. (Examples and general discussions of systolic architectures can be found in other papers [6,7].) Under the same framework, systolic triangularization arrays are derived for the solution of linear systems with pivoting and for least squares computations. More detailed descriptions of the suggested systolic arrays will appear in the final version of the paper.
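As a sketch of the orthogonal case (M restricted to an orthogonal Q), the snippet below triangularizes X with a sequence of Givens rotations, the elementwise operation that systolic triangularization arrays are typically built around. The function name and the small test matrix are illustrative, not from the paper:

```python
import numpy as np

def givens_triangularize(X):
    """Reduce an n x p matrix X (p <= n) to upper-triangular form with
    Givens rotations, returning Q and R such that Q @ R == X and
    Q.T @ X = [R; 0]."""
    R = X.astype(float).copy()
    n, p = R.shape
    Q = np.eye(n)
    for j in range(p):                      # eliminate column j bottom-up
        for i in range(n - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue                    # entry already zero
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])  # 2x2 plane rotation
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]   # rotate two rows of R
            Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T  # accumulate Q = G1.T G2.T ...
    return Q, R

X = np.array([[3.0, 1.0], [4.0, 2.0], [0.0, 5.0]])
Q, R = givens_triangularize(X)
```

Each rotation touches only two rows, which is what makes the computation amenable to local, pipelined communication in a systolic array.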
A combination of systolic array processing techniques and VLSI fabrication promises to increase signal-processing capabilities by a factor of 100 or more. To achieve a timely marriage of algorithms and hardware, both must be developed concurrently. This article describes the hardware for a programmable, reconfigurable systolic array testbed, implemented with presently available integrated circuits and capable of 32-bit floating-point arithmetic. While this hardware presently requires a small printed circuit board for each processing element, in a few years one or two custom VLSI chips could be used instead, yielding a smaller, faster systolic array processor. This testbed will aid in the evaluation of the many parameters which will have to be optimized in order to design these custom chips.
The systolic array architecture is known to make highly efficient use of hardware in evaluating certain matrix products, provided that the matrices are strongly banded. However, this high efficiency can degrade significantly if the matrices to be processed lack this narrow-bandwidth structure and assume a more general form. This paper introduces and evaluates two techniques which in some instances can enhance systolic array efficiency. The approach effectively reduces to adapting the problem structure so that it more naturally fits the systolic array architecture. Potential benefits from this approach are quantified and presented in graphical form.
This paper considers the computation of the minimum eigenvalue of a symmetric Toeplitz matrix via the Levinson algorithm. By exploiting the relationship between the minimum eigenvalue and the residues obtained in the Levinson algorithm, a fast iterative procedure is established to successively estimate the minimum eigenvalue. Although the computational complexity analysis is as yet inconclusive, we have found that the approximation of the minimum eigenvalue has an important application in high resolution spectrum estimation problems. Based on simulation results for such an application, some improvements are observed in both computing speed and accuracy of the estimates.
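The paper's iterative estimator is not reproduced here; as background, the following is a minimal sketch of the Levinson-Durbin recursion and the residues (prediction-error powers) it produces at each order. The final residue equals the determinant ratio det T_{p+1} / det T_p and is known to upper-bound the minimum eigenvalue of the Toeplitz matrix, which is the relationship the abstract alludes to. The example autocorrelation lags are illustrative:

```python
def levinson_durbin(r):
    """Levinson-Durbin recursion on autocorrelation lags r[0..p].
    Returns the final predictor coefficients and the residue
    (prediction-error power) produced at each order."""
    p = len(r) - 1
    a = [1.0]
    err = r[0]
    residues = [err]
    for m in range(1, p + 1):
        acc = r[m] + sum(a[i] * r[m - i] for i in range(1, m))
        k = -acc / err                                    # reflection coefficient
        a = [1.0] + [a[i] + k * a[m - i] for i in range(1, m)] + [k]
        err *= (1.0 - k * k)                              # residue never increases
        residues.append(err)
    return a, residues

# Lags of the symmetric Toeplitz matrix [[2, 1, .5], [1, 2, 1], [.5, 1, 2]]
coeffs, residues = levinson_durbin([2.0, 1.0, 0.5])
```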
This paper presents a formalism for describing the behavior of computational networks at the algorithmic level. It establishes a direct correspondence between mathematical expressions defining a function and the networks which compute that function. By formally manipulating the symbolic expressions that define a function, it is possible to obtain different networks that compute the function. Certain important characteristics of computational networks, such as computational rate, performance, and communication requirements, can be determined directly from this mathematical description. The use of this formalism for design and verification is demonstrated on a few computational networks for functions typical in signal processing.
A signal processing application requiring 50 billion multiplies per second was analyzed. A solution is described which exploits both the significant performance potential of Josephson technology and the powerful elegance of systolic linear arrays.
We present the design for the NYU ultracomputer, a general-purpose MIMD parallel processor composed of thousands of autonomous processing elements. This machine uses an enhanced omega-network to approximate the ideal behavior of Schwartz's paracomputer model of computation and to efficiently implement the important replace-add synchronization primitive. The novelty of the design lies in the enhanced network, in particular in the constituent switches and interfaces. We also present the results of analytic and simulation studies of the network, as well as a sample of our efforts to implement parallel variants of important scientific codes.
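As a hedged illustration of the replace-add primitive's semantics (atomically add a value to a shared cell and return the updated contents, with concurrent calls serializing as if executed in some order), the following emulation uses a lock in place of the ultracomputer's combining network. The class and variable names are illustrative:

```python
import threading

class ReplaceAddCell:
    """Emulates replace-add: atomically add e to a shared cell and
    return the updated value."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def replace_add(self, e):
        with self._lock:
            self._value += e
            return self._value

# Example use: many threads claim distinct ticket numbers from one counter,
# the classic work-distribution idiom replace-add enables without hot spots.
counter = ReplaceAddCell(0)
tickets = []
tickets_lock = threading.Lock()

def worker():
    t = counter.replace_add(1)      # each call yields a distinct value
    with tickets_lock:
        tickets.append(t)

threads = [threading.Thread(target=worker) for _ in range(8)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```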
Several new mesh connected multiprocessor architectures are presented that are adapted to execute highly parallel algorithms for matrix algebra and signal processing, such as triangular- and eigen-decomposition, inversion and low-rank updating of general matrices, as well as Toeplitz and Hankel related matrices. These algorithms are based on scattering theory concepts and information preserving transformations, hence they exhibit local communication, and simple control and memory management, all properties that are ideal for VLSI implementation. The architectures are based on two-dimensional "scattering" arrays, that can be folded into linear arrays, either through time-sharing, or due to simple computation wavefronts, or due to special structures of the matrices involved, such as Toeplitz.
Cellular arrays are regular structures of computing and memory elements with fixed and simple modes of communication and control, well suited for implementation using VLSI technology. We describe two alternate architectures for such computational arrays, and compare their performance with sequential processing means.
The architecture of a programmable signal processor which directly supports high-level signal processing functions is described. The instruction set of this processor is the result of a prior study of the mathematical processes involved in a diverse range of signal processing applications. Primary features of this instruction set are instruction factoring and the separation of data parameters from program parameters. The instruction factoring technique suggests an underlying technology-independent architecture which allows many efficient, yet flexible implementations. Some example implementations are described, including implementations being developed for the Advanced Onboard Signal Processor (AOSP) program.
This paper presents an historical perspective on the development of signal processing devices including trade-offs between analog and digital approaches. Comparisons will be made between ultra high-speed analog technologies including electro-optics, surface acoustic waves, and charge-coupled devices. Also discussed will be some new architectures for real-time digital processors.
Cryogenic frequency domain optical memories based upon photochemical hole burning offer the possibility of storing data at densities of up to 10^11 bits/cm^2. The basic principles of photochemical hole burning are reviewed. Recent results on recording materials, data reading and writing, and configurations are presented.
A device is described which is capable of doing dynamic spatial filtering. The filter, which can be changed at TV frame rates, utilizes a liquid crystal light valve (LCLV) in a controlled-reflectivity mode. A filter pattern can be generated by a CRT or by other optical methods and imaged onto the LCLV. The LCLV is placed in the Fourier plane of an optical transform system. The dynamic spatial filter is described in detail, and current experimental results are given.
Real time coherent optical computation of the ambiguity function by a spatial integration architecture will be presented. The performance of acousto-optic Bragg cells and a space-variant linear phase shifter optical element will be examined. The results of real time processing of synthesized signals with noise will be shown.
The Micro-Vector Processor (MVP) is designed for applications ranging from expendable single-processor weapons and buoys to large multiprocessor federated systems. Its design is centered on multiple applications, easy reprogrammability, and low-power operation. These design goals were achieved with an architecture that provides high throughput at moderate clock rates and maximum use of large-scale integration (LSI) integrated circuits; four new LSI circuits implement 96 percent of the logic in the MVP's vector unit. The MVP software design includes support for both application programming in high-level language and implementation of signal-processing algorithms in a symbolic microprogramming language. These two programmability levels reduce software costs for new applications and for changing requirements. Two examples are used to illustrate MVP applications: cruise missile guidance and a multichannel acoustic beamformer. The MVP architecture, or some close derivative, is considered suitable for reimplementation in very large-scale integration (VLSI).
Neural Analog Information Processing (NAIP) is an effort to develop general purpose pattern classification architectures based upon biological information processing principles. This paper gives an overview of NAIP and its relationship to the previous work in neural modeling from which its fundamental principles are derived. It also presents a theorem concerning the stability of response of a slab (a two dimensional array of identical simple processing units) to time-invariant (spatial) patterns. An experiment (via computer emulation) demonstrating classification of a spatial pattern by a simple, but complete NAIP architecture is described. A concept for hardware implementation of NAIP architectures is briefly discussed.
A bandwidth compression system is described which uses charge coupled devices to achieve real time compression of standard (512 x 480) video signals. This system is composed of an encoder and a decoder. The encoder performs a two-dimensional (8 x 8) cosine transformation on the incoming video and nonlinearly quantizes the resulting coefficients to reduce the average data rate to 1 bit per pixel. Further reduction of the data rate is accomplished with a memory which inputs quantized data during one frame and outputs this data over several frames. The decoder accepts compressed data from the encoder and converts it back into video. A frame of data is first assembled in a memory and is then read out to the inverse quantizer and inverse cosine transform while a parallel memory is being loaded. The output of the transform is a video signal which, after filtering, is available for display on a standard 525-line monitor.
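As a hedged, floating-point sketch of the encoder's transform-and-quantize path: the snippet below applies an orthonormal 8 x 8 two-dimensional DCT-II to a block and quantizes the coefficients. A uniform quantizer with an assumed step size stands in for the paper's nonlinear quantizer, and the test block is illustrative:

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: C[k, n] = s(k) * cos(pi*(2n+1)*k / (2N))
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def dct2(block):
    """Separable 2-D DCT: transform rows, then columns."""
    return C @ block @ C.T

def idct2(coeffs):
    """Inverse 2-D DCT (C is orthonormal, so the inverse is C.T)."""
    return C.T @ coeffs @ C

block = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2(block)
step = 4.0                                   # assumed quantizer step size
quantized = np.round(coeffs / step) * step   # uniform quantizer stand-in
restored = idct2(quantized)
```

Because the transform is orthonormal, the reconstruction error energy equals the quantization error energy in the coefficient domain, which is what makes coarse coefficient quantization a controlled way to cut the data rate.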
In order to adapt to the steady growth of modern communication requirements in information processing, the overall computing power is being decentralised to special purpose intelligent terminals. In contrast to the well developed graphic displays, the currently available image displays have very little processing capability. The complexity and volume of imagery require a special approach to enable real-time processing. In this paper a new image computer, called UPIC, is proposed which features rather general image processing capabilities at television rates combined with powerful interactive computer graphics.
The problem of real time image processing for an Autonomous Acquisition system required the development of a special purpose high speed processor. Commercially available bit slice components were selected for the basic computational structure for speed and versatility. The processor's basic architecture is dynamically alterable into either a serial or pipelined configuration, achieving higher speed than either architecture alone could provide. The high speed afforded by this structure is further enhanced by the availability of eight parallel paths allowing a maximum throughput in excess of 40 million operations per second. The algorithms which were implemented for this application include: Sobel edge, shape/connectivity, Laplacian, histogram flattening and compression, a sophisticated peak detection scheme, and a "destreaking" function. Being microprogrammable, the processor will allow the implementation of additional algorithms for alternative applications. The ensuing discussion develops the overall architecture from a functional point of view, illustrating the parallelism in the architectural design which allowed the efficient implementation of this general class of algorithms.
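Of the algorithms listed, the Sobel edge operator is the simplest to sketch. The following reference implementation (naive loops; the paper's hardware would parallelize this across its eight paths) computes the gradient magnitude over interior pixels; the test image is illustrative:

```python
import numpy as np

def sobel_magnitude(img):
    """3x3 Sobel edge operator: horizontal and vertical gradient
    estimates combined into a gradient magnitude (interior pixels only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                                   # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx = np.sum(kx * patch)             # response to vertical edges
            gy = np.sum(ky * patch)             # response to horizontal edges
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge: the response peaks on the columns flanking the step.
img = np.zeros((5, 6))
img[:, 3:] = 10.0
edges = sobel_magnitude(img)
```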
A BCR adaptive process [1], based on the Conjugate Gradients (CG) method [2], is offered as an alternative to a Sample Matrix Inversion (SMI) [3] approach to solving minimum-mean-square (MMS) problems. In contrast to SMI, BCR does not require that a matrix inverse exist. This point is demonstrated via computer simulation for the case of an adaptive array processing example. Furthermore, BCR lends itself to a simple and efficient fixed-point architecture capable of a numerical accuracy commensurate with sample word lengths, a fact substantiated via a precise computer emulation of the BCR implementation.
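The BCR process itself is not reproduced here; the following is a plain conjugate-gradients sketch illustrating the underlying point, that the MMS system Aw = b can be solved through matrix-vector products alone, with no explicit inverse formed. The 2 x 2 example system is illustrative:

```python
import numpy as np

def conjugate_gradients(A, b, iters):
    """Plain CG for a symmetric positive (semi)definite A: iteratively
    minimizes the quadratic form, using only products A @ p."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    p = r.copy()                  # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        denom = p @ Ap
        if denom == 0.0:
            break                 # direction lies in A's null space
        alpha = rs / denom
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
w = conjugate_gradients(A, b, 2)  # CG converges in n steps in exact arithmetic
```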
An optical satellite detection system can provide a high scan coverage rate if the telescope is continuously scanning the sky, rather than stepping and staring. Detection of satellites with such a system requires that the detection processor have a high throughput rate to keep up with the telescope scan. The IMC Signal Processor, described in this paper, has been developed to do this. The high throughput rate has been achieved by dividing the focal plane imaging into five fields-of-view, processing these fields-of-view in a parallel signal processing architecture, and detecting satellites on a several line basis rather than waiting for frame-to-frame comparisons. Although this processor has been developed for use with a particular telescope, the concepts developed here can be applied to a more general detection problem. The narrow (18.3 arc minute) width of the scan will still result in a high scan coverage rate (300 square degrees per hour), and will hopefully detect geosynchronous satellites with magnitudes as dim as 16-18 Mv while maintaining a low (4.8 x 10^-3 per second) false alarm rate. The signal processing considerations and processor algorithms are discussed. The processor hardware is described, recent laboratory results given, and future plans described.
A hardware median filter is described which is designed to filter imagery at a real-time rate of 10 x 10^6 pixels/second. The data is windowed with line buffers and propagates through n pipelined stages, where n is the number of bits in a pixel. The algorithm described is a form of the radix method of Ataman [1], modified to reduce the decision-making at each stage. Each stage is nearly identical, making the filter very structured and modular. The filter can be implemented with available logic components and would be useful as a preprocessor in a pattern recognition system.
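A software sketch of the radix idea, one decision per bit plane from MSB to LSB, clarifies why the hardware needs exactly n stages. At each bit, the median's bit is 1 iff a majority of samples are at least as large as the prefix extended with a 1; samples ruled out at one stage stay ruled out downstream. The window values below are illustrative:

```python
def radix_median(samples, nbits):
    """Bit-serial median of an odd number of unsigned nbits-wide samples,
    deciding one bit of the median per stage (radix method)."""
    need = (len(samples) + 1) // 2      # majority count
    too_big = 0                         # samples known to exceed the median
    undecided = list(samples)
    median = 0
    for b in range(nbits - 1, -1, -1):
        ones = [x for x in undecided if (x >> b) & 1]
        zeros = [x for x in undecided if not (x >> b) & 1]
        if too_big + len(ones) >= need:
            median |= 1 << b            # median's bit b is 1
            undecided = ones            # zeros can no longer be the median
        else:
            too_big += len(ones)        # ones are now known too big
            undecided = zeros
    return median

window = [12, 3, 200, 45, 45, 7, 99, 3, 18]   # e.g. a 3x3 window of 8-bit pixels
m = radix_median(window, 8)
```

Note that each stage's work depends only on one bit of each sample plus a running count, which is what makes the stage simple enough to replicate n times in a pipeline.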
Although plasmas are very powerful tools for etching and other operations related to microelectronics, they can vary drastically and are not always predictable. It is in such a dynamically varying and sometimes adverse environment that some of the most critical and delicate circuits are processed. Sensing events in the plasma, real-time processing, and plasma control are critical. A real-time, low-light-level, continuously observing optical spectrometer is used to indicate the absence or presence of emission or absorption spectra. Electronically sensing these spectra and fast electronic processing provide the needed signals to control the plasma. The system works on light levels below 10^-12 watt and obtains processed spectral data in less than 70 microseconds.
The Texas Instruments VHSIC-1 Program is based on a small set of multi-use programmable system components implemented in commercially aligned semiconductor technologies, a multimode fire-and-forget missile subsystem demonstration brassboard, and a comprehensive set of software/hardware design tools to support subsystem design with the basic chip set. Eight chips have been defined and are currently under development: a high-performance NMOS memory and seven logic-oriented components that will be implemented in Schottky Transistor Logic (STL). NMOS was selected for the memory because of the high density, low power, and low cost intrinsic to the technology. STL was selected for the logic components because of its inherent reliability, tolerance for the military environment, and exceptional speed-power product characteristics. Many DoD-suggested candidate VHSIC brassboard systems share several basic IC-related requirements, such as:
- Memory: high performance, low cost.
- Data processing: data-dependent arithmetic, logic, and control operations on unstructured data streams.
- Array processing: repetitive, data-independent operations on fixed-size blocks of data.
- Limited special-purpose processing and interface: some application-specific requirements must be supported.
The Texas Instruments chip set and related design support tools have been architected and specified to accommodate these needs. The complete chip set is shown in Table 1. The STL logic chips are grouped according to their association with data and array processing requirements. The NMOS SRAM is shown separately. Key specifications for each component are given in the right-hand column. A brief functional description of the chips as they apply to the requirements follows. The design support tools will be discussed later with the Design Utility System.
Three Hughes VHSIC chips for use in digital anti-jam communications systems are described. These chips will be produced in CMOS/SOS using the Hughes 1.25 micron SOS III process. Common features of four digital communications system applications are reviewed. The advantage of spread spectrum signal processing in a jamming environment is identified. The paper then describes the architecture of each signal processing chip type, their performance design goals, and their use in a demonstration with the Army's PLRS/JTIDS Hybrid System as a prototype battlefield information distribution system.
Planned growth in the coming decade of Navy combat systems will generate signal processing performance requirements that far exceed the capability of the current Navy standard signal processor. There is a further need to improve the programming environment of Navy standard signal processors to increase programmer productivity. The Navy has initiated development of a second generation standard signal processor, the Enhanced Modular Signal Processor (EMSP), designated the AN/UYS-2. This paper describes the Navy program to develop the EMSP as a multi-processor signal processing system. The approach to specifying system performance and programming environment, along with an acquisition approach meant to encourage vigorous competition for the engineering development contract award, is discussed. The commodity management concept for EMSP's in-service lifetime involves interface management within the system and controlled technology infusion. This important plan to stay abreast of technology and to meet user community requirements for product stability is described.
The Advanced Onboard Signal Processor (AOSP) is a distributed signal processing computer under development for space applications in the post-1985 time frame. The processor architecture is based on an arbitrary-topology network of identical processing elements specialized to perform signal processing and controlled by a distributed operating system. Both the operating system and applications programs are written in a high order language which is efficiently supported by the processing elements. Examples of communication signal processing are presented which show the suitability of AOSP for this application. The design has been validated by extensive simulation and is presently in the breadboard hardware phase.
An experimental distributed microcomputer concept has been developed and implemented, and is currently operational at the Naval Air Development Center as a vehicle to investigate distributed processing concepts with respect to replacing larger computers with networks of microprocessors at the subsystem or node level. Major benefits being exploited include increased performance, flexibility, system availability, and survivability through use of multiple processing elements with reduced cost, size, weight, and power consumption. This paper concentrates on defining the distributed processing concept in terms of control primitives, variables, and structures and their use in performing a decomposed DFT (Discrete Fourier Transform) application function on a laboratory model. The DFT was chosen as an experimental application because of its highly regular structure, which decomposes naturally for concurrent execution. The design assumes interprocessor communications to be anonymous. In this scheme, all processors can access an entire common data base by employing control primitives. Access to selected areas within the common data base is random, enforced by a hardware lock, and determined by task and subtask pointers. This enables the number of processors in the configuration to be varied without any modifications to the control structure. Decompositional elements of the DFT application function, in terms of tasks and subtasks, are also described.
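The decomposition the abstract describes — independent subtasks claiming work against a common data base, so the worker count can change without altering the control structure — can be sketched as follows. This is a minimal illustration, not the Center's implementation; the function names and the interleaved assignment of output bins are illustrative assumptions.

```python
import cmath

def partial_dft(x, k_range, N):
    # One subtask: compute the DFT bins in k_range. Each subtask is
    # independent, so subtasks can run concurrently on separate processors.
    return {k: sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                   for n in range(N))
            for k in k_range}

def distributed_dft(x, num_workers):
    # Illustrative coordinator: bins are assigned to workers in an
    # interleaved pattern, so changing num_workers changes only the
    # work split, not the control structure.
    N = len(x)
    results = {}
    for w in range(num_workers):
        ks = range(w, N, num_workers)
        results.update(partial_dft(x, ks, N))
    return [results[k] for k in range(N)]
```

Because the subtasks share no intermediate state, the same sketch works unchanged for two workers or twenty, which mirrors the property the paper claims for its task/subtask pointer scheme.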
The HEP computer system is a large scale scientific parallel computer employing shared-resource MIMD architecture. The hardware and software facilities provided by the system are described, and techniques found to be useful in programming the system are also discussed.
The capabilities of Local Area Networks (LAN) to support real-time processing applications are explored. The ISO/ANSI Open Systems Interconnection reference model is summarized and its layers examined from the standpoint of implementing functions in VLSI devices. A Network Interface Module (NIM) to the LAN is defined and its characteristics are described. A discussion of the hardware, firmware, and software network protocol functions and their implementation in the NIM is presented. Using the NIM as the LAN interface, the message paths for different types of network operations are determined. Processing times for these message paths are estimated and used to postulate typical message transfer times. Suggested approaches for continued throughput analyses are presented.
This paper describes a Distributed Processing System (DPS) that is geographically separated, functionally organized, task-oriented, and connected by an efficient high-speed serial data bus. Military systems designers are given the working tools to configure a highly responsive and adaptable system capable of being phased into current applications with a minimum of disturbance by utilizing current software, yet providing a means for constantly incorporating new technology. A 20-Mbit serial data bus with a modified SDLC protocol provides efficient communication between nodes through the use of numerous addressing modes. A dual redundant bus with multiple bus capability provides a highly survivable system. The Litton DPS is a relatively new development and its first application, aboard a U.S. Navy ship, is described.
A facsimile interoperability data compression standard is being adopted by the U.S. Department of Defense and other North Atlantic Treaty Organization (NATO) countries. This algorithm has been shown to perform quite well in a noisy communication channel.
Radar coverage of a region for air defense or air traffic control often necessitates the use of multiple radars in order to compensate for terrain-caused coverage problems and/or to guarantee a uniformly good probability of detection over the region. Correlation and utilization of data from these sites by totally automatic means is an essentially unsolved problem. Moreover, computer simulation of this problem is ineffective unless a very detailed real-time simulation is performed. The multi-radar simulator has been built to provide the means wherein a real-time simulation of a system of netted radars, their radar processors, and collocated trackers can be performed. This simulator is an MIMD machine utilizing the building block processor as the processing element. This implementation has proven easy to program and has brought the necessary computational speed to the problem of effectively emulating a set of radars and their processors observing the same air picture from different vantage points.
The SIFT computer and its validation methodology represent a state-of-the-art approach to autonomous fault-tolerant computing for critical control systems. The design was strongly influenced by the intended application (flight control for advanced commercial air transports), but the emphasis on simplicity and provability has general value.
This paper investigates the problems of throughput and reliability encountered in designing multicomputer systems for processing real-time sensor data in the mid-1980s time period. The basic microcomputer and minicomputer building block characteristics are identified; characteristics of ring, crossbar, and banyan interconnection networks are quantified; and the form factors for the resulting multicomputer systems are estimated. Techniques for achieving ultra-reliable computing systems--triple-modular redundancy (TMR), dedicated switched-standby spares, pooled switched-standby spares, and hybrid redundancy--are reviewed and their resulting impact on system design is discussed. The hazard function and its impact on the reliability of systems that must remain dormant for considerable periods are discussed. A technique employing pooled standby with fault tolerance and reconfiguration is concluded to provide the most effective solution where size, weight, and power constraints are most severe.
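As a point of reference for the redundancy techniques the abstract surveys, the standard reliability expression for TMR with a perfect voter can be computed directly: the system survives if at least two of its three identical modules survive. This is the textbook formula, not a result from the paper itself.

```python
def r_tmr(r):
    """Reliability of triple-modular redundancy with a perfect voter.

    r is the reliability of a single module. The system works when
    at least 2 of 3 modules work:
        R = C(3,2) * r^2 * (1 - r) + r^3  =  3r^2 - 2r^3
    """
    return 3 * r**2 - 2 * r**3
```

A useful property this formula exposes: TMR only helps when each module is better than a coin flip (r > 0.5); below that crossover the voting triple is *less* reliable than a single module, which is one reason the paper weighs switched-standby and hybrid schemes against plain TMR.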
Data flow sequencing and the directed graph program representation provide two important tools for the development of computer architectures which can exploit problem parallelism. Classical (control flow) architectures deal efficiently with other problems such as serial sequences and data storage which are not handled as well by a data flow architecture. A hybrid architecture which incorporates features of a data flow architecture along with features of a control flow architecture has the potential of becoming an effective parallel architecture for a wide class of problems.
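The data-driven sequencing the abstract contrasts with control flow can be illustrated with a minimal firing rule: a graph node executes the moment all of its operand tokens have arrived, regardless of textual program order. This toy sketch is illustrative only; the class and attribute names are assumptions, not the paper's design.

```python
class Node:
    """A dataflow actor: fires once all of its input tokens are present."""

    def __init__(self, op, n_inputs):
        self.op = op
        self.n = n_inputs
        self.inputs = {}          # port -> token
        self.consumers = []       # (downstream node, downstream port)
        self.value = None

    def receive(self, port, token):
        self.inputs[port] = token
        if len(self.inputs) == self.n:        # data-driven firing rule
            self.value = self.op(*(self.inputs[p] for p in range(self.n)))
            for node, p in self.consumers:
                node.receive(p, self.value)   # result token flows onward

# Directed graph for (2 + 3) * 4; tokens may arrive in any order.
add = Node(lambda a, b: a + b, 2)
mul = Node(lambda a, b: a * b, 2)
add.consumers.append((mul, 0))

mul.receive(1, 4)   # an operand arrives before the add result exists
add.receive(0, 2)
add.receive(1, 3)   # add fires; its result token triggers mul
```

Note that `mul` received one of its operands first and simply waited, which is the parallelism-exposing behavior a data flow architecture provides; a hybrid machine of the kind the abstract proposes would fall back to conventional sequencing for the serial and storage-heavy code where this token bookkeeping is pure overhead.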
Recent developments in the fields of optical and focal plane technologies have resulted in a proliferation of new advanced sensor types. This is particularly true for sensors developed for ballistic missile defense (BMD). A methodology has been developed to analyze the real-time processing requirements of these sensors, and to define real-time processing architectures using a versatile and flexible set of architectural building blocks currently being developed.
CHAMP (Cooperative Highly Available Multi-Processor) is an expandable processor-independent software environment designed to support high availability and fault-tolerant computation. CHAMP uses a network of elements, possibly dissimilar, and achieves speed through concurrent processing. CHAMP has great potential for signal processing networks, especially with single-chip VHSIC elements.