This paper addresses how to support both real-time and non-real-time communication services in a wireless LAN. Unpredictable wireless channel errors may cause applications with real-time traffic to receive degraded quality of service due to packet losses. We propose scheduling algorithms that take advantage of the point coordination function (PCF) of a wireless LAN to support quality-of-service provisioning for real-time services. Specifically, we consider two types of service differentiation for real-time communication: (1) absolute delay differentiated services and (2) proportional differentiated fair bandwidth services. At the same time, our proposed schemes try to accommodate best-effort traffic so as to minimize the delay it experiences. One challenging issue is accounting for packet loss due to channel bit errors. We also establish conditions for admitting a new real-time connection. A preliminary performance evaluation is conducted to demonstrate how one of the proposed schemes works and to study its effectiveness.
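The abstract does not spell out the scheduling rule, but proportional delay differentiation is commonly realized with a waiting-time-priority (WTP) policy: serve the head-of-line packet whose waiting time, normalized by its class's delay parameter, is largest. The sketch below illustrates that policy; the class names, delay parameters, and the mapping onto PCF polling are illustrative assumptions, not the paper's algorithm.

```python
class WTPScheduler:
    """Waiting-time priority: one common way to realize proportional delay
    differentiation. Class c's packets age at rate 1/delay_params[c], so
    classes with smaller parameters see proportionally smaller delays."""

    def __init__(self, delay_params):
        # delay_params: {class_name: delay parameter}, e.g. {"voice": 1, "video": 2}
        self.delay_params = delay_params
        self.queues = {c: [] for c in delay_params}  # FIFO of (arrival_time, packet)

    def enqueue(self, cls, packet, arrival_time):
        self.queues[cls].append((arrival_time, packet))

    def dequeue(self, now):
        # Serve the head-of-line packet with the largest normalized waiting time.
        best_cls, best_prio = None, -1.0
        for cls, q in self.queues.items():
            if q:
                prio = (now - q[0][0]) / self.delay_params[cls]
                if prio > best_prio:
                    best_cls, best_prio = cls, prio
        return self.queues[best_cls].pop(0)[1] if best_cls is not None else None
```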
This work analyses and compares the performance of the recently proposed micro-mobility protocols HAWAII and Hierarchical MIP when subjected to several levels of traffic load and to scenarios that effectively degrade the low packet loss these protocols otherwise achieve. Furthermore, it is shown that the use of differentiated services within a given domain infrastructure yields a considerable performance increase for mobile nodes. The preferential treatment offered to such mobile nodes shields micro-mobility protocol traffic from the fluctuations of background traffic, with losses occurring only as a result of handoff events.
An ad hoc network consists of mobile nodes (hosts) equipped with wireless transmitters and receivers, which allow them to communicate without the help of wired base stations. With the increasing acceptance of ad hoc networks, the demand for QoS guarantees to support various real-time applications has become a major research area. Guaranteeing QoS in ad hoc networks is challenging because the network topology changes constantly, and with it the QoS state information associated with each node, making the available state information imprecise.
In our attempt to provide QoS routing in ad hoc networks, we first propose an efficient routing algorithm and then incorporate QoS routing into it. The main thrust of our routing algorithm is to reduce indeterminism by characterizing node movements as mobility patterns that can be expressed by mathematically bounded functions. To quantify the randomness of node motion, we approximate the movements of individual nodes, and of the network as a whole, to one of the defined mobility patterns. We then use this information to pre-compute routes to a destination before the current link breaks: since we can predict the possible future locations of the nodes, we can also predict the state of a link and thus attempt to guarantee QoS. Knowledge of the mobility pattern saves much of the overhead the routing algorithm would otherwise incur in initiating a new route after a route failure; in addition, packets no longer have to be buffered at an intermediate node while a new route is computed, nor must the status of every node be collected for QoS routing.
To sum up, our algorithm has four key features: (1) it establishes a low-cost path satisfying the required QoS; (2) the route discovery process yields multiple paths to be used if the primary path fails, increasing robustness; (3) the paths discovered are more robust because the algorithm distinguishes stationary and transient links; and (4) because the algorithm is not table-driven, it is highly scalable.
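As one concrete instance of predicting link state from a mobility pattern, the sketch below computes the classic link expiration time under a constant-velocity pattern and a fixed radio range. The closed form (from the mobility-prediction literature) and the re-route margin are illustrative stand-ins for the paper's pattern-specific functions.

```python
import math

def link_expiration_time(p1, v1, p2, v2, radio_range):
    """Time until two nodes drift out of range, assuming both keep their
    current velocity (the constant-velocity mobility pattern)."""
    a, c = v1[0] - v2[0], v1[1] - v2[1]   # relative velocity (x, y)
    b, d = p1[0] - p2[0], p1[1] - p2[1]   # relative position (x, y)
    if a == 0 and c == 0:
        return math.inf                   # no relative motion: link persists
    disc = (a * a + c * c) * radio_range ** 2 - (a * d - b * c) ** 2
    if disc < 0:
        return 0.0                        # nodes never come within range
    return (-(a * b + c * d) + math.sqrt(disc)) / (a * a + c * c)

# A route's residual lifetime is the minimum expiration time over its links;
# pre-computing an alternate path before that instant avoids route-failure stalls.
```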
Several recovery mechanisms are provided in Generalized Multi-Protocol Label Switching (GMPLS) networks to improve network survivability. Future wired backbone networks will definitely be GMPLS-based, and GMPLS networks must provide an efficient recovery scheme to provision mobility-aware capabilities in wireless IP networks. The purpose of this paper is to propose a GMPLS-based recovery scheme for fast handoff in wireless IP networks. The proposed scheme can rapidly establish a new label switched path (LSP) by utilizing backup resources when a mobile node (MN) hands off in a wireless network. Low handoff latency can therefore be achieved, and resource utilization in GMPLS networks is improved.
An efficient routing protocol is essential to guarantee application-level quality of service in wireless ad hoc networks. In this paper we propose a novel routing algorithm that computes a path between a source and a destination by considering several important constraints, such as path-life and the availability of sufficient energy and buffer space in each node on the path between the source and destination. The algorithm chooses the best path from among the multiple paths it computes between two endpoints. We consider the use of control packets that run at a higher priority than data packets in determining the multiple paths. The paper also examines the impact of different schedulers, such as weighted fair queuing and weighted random early detection, among others, in preserving the QoS-level guarantees. Our extensive simulation results indicate that the algorithm improves the overall lifetime of a network, reduces the number of dropped packets, and decreases the end-to-end delay for real-time voice applications.
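A minimal sketch of the selection step the abstract describes: filter candidate paths by energy, buffer, and path-life constraints, then keep the best survivor. The field names and thresholds are hypothetical; the paper's cost function may weigh the constraints differently.

```python
E_MIN, B_MIN, LIFE_MIN = 5.0, 8, 2.0   # illustrative thresholds (J, packets, s)

def feasible(path, now):
    """Constraint check over per-node state; field names are illustrative."""
    nodes_ok = all(n["energy"] >= E_MIN and n["buffer_free"] >= B_MIN
                   for n in path["nodes"])
    return nodes_ok and (path["expires_at"] - now) >= LIFE_MIN

def best_path(candidates, now):
    ok = [p for p in candidates if feasible(p, now)]
    # Prefer the longest-lived feasible path; break ties with fewer hops.
    return max(ok, key=lambda p: (p["expires_at"] - now, -len(p["nodes"])),
               default=None)
```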
The basis of a QoS-based routing algorithm is a dynamic, network-dependent cost function that is used to find an optimal, or at least a feasible, route across the network. However, all QoS-based routing algorithms suffer from a major drawback: the cost function at their core identifies segments of the network where resources are ample and exploits them for the benefit of connections that would otherwise cross a congested portion of the network. The algorithms thus consume more resources than Minimum Hop routing would when network traffic is non-stationary and heavy; QoS-based routing therefore wastes resources and performs poorly compared with Minimum Hop routing in the event of congestion. The crux of the matter is that whatever is gained at low or medium network loads is offset at high loads. What is required is a resilient algorithm that either lets a QoS-based routing algorithm migrate to a Minimum Hop algorithm at high loads or merges Minimum Hop and QoS characteristics. This study opts for the latter approach and proposes a hop-constrained QoS routing algorithm that outperforms traditional QoS routing algorithms in simulation. The routing technique is based on an approximation algorithm that solves the hop-constrained routing problem. The algorithm is derived from a dynamic programming FPAS (fully polynomial approximation scheme) and finds the shortest walk for a single source-destination pair in a graph with a restricted number of hops when all edge costs are non-negative. Simulation results demonstrate that the routing technique based on this algorithm is robust to changes in the traffic pattern and consistently outperforms other QoS-based routing techniques under heavy load.
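The exact dynamic program underlying such hop-restricted routing is a truncated Bellman-Ford: d[h][v] is the cheapest walk to v using at most h edges. The sketch below shows that exact DP; the paper's FPAS additionally scales and rounds edge costs to bound the running time, a step omitted here.

```python
import math

def hop_constrained_shortest(n, edges, src, dst, max_hops):
    """d[h][v] = cheapest cost of a walk src -> v using at most h edges.
    Requires non-negative edge costs, as in the paper's setting."""
    d = [[math.inf] * n for _ in range(max_hops + 1)]
    d[0][src] = 0.0
    for h in range(1, max_hops + 1):
        d[h] = d[h - 1][:]                      # walks may use fewer than h hops
        for u, v, cost in edges:
            if d[h - 1][u] + cost < d[h][v]:
                d[h][v] = d[h - 1][u] + cost
    return d[max_hops][dst]

# Example: with at most 2 hops, 0-1-3 (cost 5.0) beats 0-2-3 (cost 6.0).
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0), (1, 3, 4.0), (2, 3, 1.0)]
print(hop_constrained_shortest(4, edges, src=0, dst=3, max_hops=2))  # 5.0
```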
QoS guarantees need to hold along the entire path between source and destination, i.e., end-to-end. However, traffic often crosses several domains with different administrations and technical characteristics, and guaranteeing QoS across these domains has become a challenge for end-to-end QoS. In particular, for DiffServ flows traversing a transit WDM network in the Optical Internet, there exists a QoS gap between the DiffServ-aware MPLS sub-network and the MPλS sub-network. In this article, we study the convergence of the QoS schemes in these two sub-networks to achieve end-to-end QoS in the Optical Internet. We first review QoS models in IP networks and WDM optical networks and compare their differences in supporting QoS. We then study the end-to-end QoS mechanism in the Optical Internet and propose a QoS mapping method between the IP/MPLS sub-network and the MPλS sub-network. Based on this mapping method, an end-to-end QoS-guaranteed LSP provisioning scheme for the Optical Internet is presented at the end of the paper.
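The core of a QoS mapping method is a table from IP-side classes to optical-side LSP attributes. The sketch below shows the shape of such a mapping; the PHB names are standard DiffServ ones, but the lambda-LSP attributes and the pairing itself are illustrative assumptions, not the paper's table.

```python
# Hypothetical mapping from DiffServ per-hop behaviors on the IP/MPLS side
# to lightpath (lambda-LSP) service classes on the MPλS side.
PHB_TO_LAMBDA_CLASS = {
    "EF":  {"protection": "1+1",         "setup_priority": 0},  # expedited
    "AF1": {"protection": "shared",      "setup_priority": 1},
    "AF2": {"protection": "shared",      "setup_priority": 2},
    "BE":  {"protection": "unprotected", "setup_priority": 3},  # best effort
}

def map_lsp_request(phb):
    """Translate an IP-side class into lambda-LSP setup attributes,
    defaulting unknown PHBs to best effort."""
    cls = PHB_TO_LAMBDA_CLASS.get(phb, PHB_TO_LAMBDA_CLASS["BE"])
    return {"phb": phb, **cls}
```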
Congestion in the Internet wastes bandwidth and stands in the way of guaranteeing QoS. The effect of congestion is multiplied manyfold in satellite networks, where resources are very expensive, so congestion control has special significance for the performance of satellite networks. In today's Internet, congestion control is implemented mostly using some form of the de facto standard, RED, but tuning RED's parameters has been a persistent problem. The main goal of parameter setting is achieving high throughput with correspondingly low delays; it is also desirable to keep queue oscillations small to reduce jitter, so that QoS guarantees can be improved. In this paper, we use a previously linearized fluid-flow model of TCP-RED to study the performance and stability of the queue in the router. We apply classical control tools, such as tracking-error minimization and delay margin, to study the performance and stability of the system, and use them to provide guidelines for setting RED's parameters so that the throughput, delay, and jitter of the system are optimized. We apply our results specifically to optimizing the performance of satellite IP networks, where the effects of congestion are more pronounced and the need for optimization more pressing. We use the ns simulator to validate our analysis.
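For reference, here are the RED mechanics whose parameters (w_q, min_th, max_th, max_p) the analysis tunes: an EWMA of the queue length gates a linearly ramped drop probability. This is the textbook algorithm, with the count-based spacing correction omitted for brevity; the default values are common choices, not the paper's recommendations.

```python
import random

class RED:
    """Textbook RED: drop decision per arriving packet based on an
    exponentially weighted moving average (EWMA) of the queue length."""
    def __init__(self, w_q=0.002, min_th=30, max_th=90, max_p=0.1):
        self.w_q, self.min_th, self.max_th, self.max_p = w_q, min_th, max_th, max_p
        self.avg = 0.0

    def on_arrival(self, queue_len):
        """Return True to drop the arriving packet."""
        self.avg = (1 - self.w_q) * self.avg + self.w_q * queue_len
        if self.avg < self.min_th:
            return False                              # always enqueue
        if self.avg >= self.max_th:
            return True                               # always drop
        # Drop probability ramps linearly from 0 to max_p between thresholds.
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return random.random() < p
```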
As the cost of Internet access rises and the amount of deployed bandwidth increases, a way to make efficient use of oft-unused bandwidth is desirable. Simply giving traffic a priority lower than best effort allows this bandwidth to be used without noticeable interference with regular traffic. Because bursts of normal traffic are given priority over this background, or filler, traffic, a more aggressive congestion control protocol is called for in the filler traffic. In this paper, we compare numerous versions of TCP-like congestion control of our own design for carrying low-priority traffic over the bandwidth that is unused at any given time. These protocols are divided into six "classes," which differ in their core congestion control algorithm and in the constants they use. Using the ns-2 network simulator, we collected network traces for each of our protocols in different network configurations, with multiple parameters per configuration, simulating high- and low-bandwidth and high- and low-latency networks. We compared the resulting throughput and sharing (the cumulative variation of throughput over each stream, normalized by the total throughput over the link) against our chosen baseline, TCP Sack. Most of the basic algorithms performed as well as or better than Sack in a background-traffic environment, especially in terms of throughput. Using features from multiple classes, we also designed a more complex protocol that performed better than Sack in almost every environment and better than the other algorithms in general.
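The six protocol classes are the authors' own designs and are not reproduced in the abstract; the sketch below shows one plausible member of the family, an AIMD controller tilted toward filler traffic. The constants a and b are made up for illustration.

```python
class FillerAIMD:
    """A plausible filler-traffic controller: grow the window faster than
    standard TCP (to soak up idle capacity quickly) but collapse it far
    harder on loss (to get out of the way of foreground bursts)."""
    def __init__(self, a=2.0, b=0.25, cwnd=1.0):
        self.a, self.b, self.cwnd = a, b, cwnd  # a, b are illustrative

    def on_ack(self):
        self.cwnd += self.a / self.cwnd          # additive increase, a per RTT

    def on_loss(self):
        self.cwnd = max(1.0, self.b * self.cwnd) # deep multiplicative decrease
```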
An effective QoS-constrained IP network must incorporate the time-priority scheduling paradigm. Priority-based policies can be applied at different layers of a packet-switching network; for example, at the admission control layer, real-time applications should have higher priority than non-real-time ones in obtaining the required connection. Preemption is associated with priority: a scheduling discipline is non-preemptive if, once a stream has been granted a server such as a transmitter, the server cannot be taken away until the job is complete. It is well known that, at layers other than the connection layer, traffic usually exhibits self-similar (SSM) behavior, and one primary attribute of SSM traffic is its heavy-tailed (HT) distributions. In this paper, we propose several capacity allocation models that take the following features into account: (1) packet inter-arrival times follow the exponential distribution; (2) packet lengths follow the Pareto distribution; (3) there are multiple priority classes; and (4) a low-priority class can be preempted by a high-priority class. The new models are mainly used at the connection level, owing to feature (1), but they distinguish themselves from conventional models through features (2), (3), and (4).
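Feature (2) is easy to see numerically: Pareto-distributed packet lengths with shape alpha at or below 2 have infinite variance, so a few huge packets dominate any sample. A minimal sketch, with illustrative shape and minimum-length values:

```python
import random

random.seed(1)
ALPHA, X_MIN = 1.5, 40.0   # shape <= 2 implies infinite variance (heavy tail)

# random.paretovariate(a) samples P[X > x] = x**(-a) on [1, inf); scale by X_MIN.
lengths = [X_MIN * random.paretovariate(ALPHA) for _ in range(100_000)]
print(f"mean={sum(lengths) / len(lengths):.1f}  max={max(lengths):.0f}")
# The sample maximum dwarfs the mean: the heavy-tailed (HT) attribute that
# distinguishes these models from classical exponential-length queues.
```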
Due to the increasing demand for mobility among Internet users, there is an urgent need to identify and solve the deficiencies of the wireless domain. One such problem is the poor performance of TCP over wireless links. TCP is still the dominant protocol used in the Internet for reliable transfers, yet the assumptions TCP makes in the wired domain are not valid in the wireless domain. To enhance the performance of TCP in the wireless domain, we need to differentiate 'congestion loss' from 'wireless loss'. We find that previous attempts in this direction either make unjustified demands on the network or are insufficiently accurate. We hold that reliable transport is an end-to-end semantic and that other network components should not be burdened with this work. In this paper we propose a scheme called 'Source-Centric Congestion Filtering', based on the MECN protocol, which tries to differentiate the losses based on network feedback.
Our simulations using the NS-2 simulator show that our protocol has a very low error percentage and performs better than most other end-to-end TCP variants.
SLAs (Service Level Agreements) and their management have become increasingly important for both service providers and customers, and measuring the service level is fundamental to the other management processes. This paper discusses the differences between service measurement and traditional network measurement from the perspective of sampling, and then derives a theoretical lower limit on the sampling frequency. To reduce the number of samples while keeping the service-level data accurate, the paper presents an adaptive algorithm based on estimation. A random function determines the actual sampling time within each sampling period, which is divided into a number of small, even intervals. An auto-regressive model is then used to estimate the current sampling value from the previous sampling results; if the estimate falls within the accepted probability range, the sample at that interval is skipped. The algorithm thus reduces the measurement's influence on the measured service by decreasing the number of samples. Experiments verify the effectiveness of this algorithm.
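A minimal sketch of the estimate-then-skip step, fitting an AR(p) model to the measurement history by least squares. The order p and the tolerance are illustrative, and the paper's acceptance test is probabilistic rather than this simple residual check.

```python
import numpy as np

def should_skip(history, tol, p=3):
    """Fit an AR(p) model to the history and predict the next sample.
    If the model's mean residual on past data is within tol, report that
    the next measurement can be skipped and used from the estimate."""
    h = np.asarray(history, dtype=float)
    if len(h) <= p + 1:
        return False, None                       # not enough data to fit
    # Regression: h[t] ~ h[t-1], ..., h[t-p]; column k holds lag k+1.
    X = np.column_stack([h[p - k - 1:len(h) - k - 1] for k in range(p)])
    y = h[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    pred = float(coef @ h[-1:-p - 1:-1])         # newest value first
    resid = float(np.mean(np.abs(X @ coef - y)))
    return resid <= tol, pred

# Usage: skip, estimate = should_skip(history, tol=0.5)
```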
In this paper, we propose a new active queue management mechanism called RIO-SD (RED IN and OUT with Selective Dropping) to control ill-behaved flows in DiffServ networks. Under this scheme, core routers are not required to maintain per-flow state, and ill-behaved flows can be identified from the drop history of the "OUT-profile" virtual queue. Control is effected by placing two pre-filters in front of the "IN-profile" and "OUT-profile" virtual queues, respectively. Simulation results indicate that our approach also improves the performance of other, well-behaved flows, and that the algorithm is robust and simple to use.
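A sketch of the pre-filter idea: keep a small, bounded history of flows recently dropped from the OUT-profile virtual queue (not full per-flow state), and subject flows that dominate that history to an extra drop probability before they re-enter the virtual queues. The window size, share threshold, and penalty are illustrative, not the paper's values.

```python
from collections import deque

class OutDropPrefilter:
    """Bounded drop-history cache standing in for a RIO-SD-style pre-filter."""
    def __init__(self, history=256, share_threshold=0.2, penalty=0.5):
        self.recent = deque(maxlen=history)  # flow ids of recent OUT drops
        self.share_threshold, self.penalty = share_threshold, penalty

    def record_out_drop(self, flow_id):
        self.recent.append(flow_id)

    def extra_drop_prob(self, flow_id):
        """Flows dominating the recent OUT drop history face extra drops."""
        if not self.recent:
            return 0.0
        share = self.recent.count(flow_id) / len(self.recent)
        return self.penalty if share > self.share_threshold else 0.0
```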
To provide QoS control for real-time traffic in core routers, this paper designs and evaluates a 320 Gb/s switch system that supports 16 line cards, each operating at the OC192c line rate (10 Gb/s). The switch system contains a high-performance switch fabric and supports a variable-length IP packet interface; these two characteristics give it advantages over traditional switch fabrics with a cell interface. The system supports eight priorities for both unicast and multicast traffic: the highest priority, with strict QoS guarantees, is for real-time traffic, while the seven lower priorities, served under a weighted round-robin (WRR) discipline, are for ordinary data traffic. Through simulation under a multi-priority burst traffic model, we demonstrate that this switch system not only provides excellent performance for real-time traffic but also allocates bandwidth efficiently among all kinds of traffic. As a result, the switch system can serve as a key node in high-speed networks and can meet the challenge that multimedia traffic poses to the next-generation Internet.
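The service discipline the abstract names can be sketched directly: one strict-priority queue for real-time traffic on top of a weighted round-robin over the seven remaining classes. The weights below are illustrative; the abstract does not give the paper's values.

```python
from collections import deque

class PrioritySwitchScheduler:
    """Queue 0: strict priority (real-time). Queues 1..7: weighted
    round-robin over leftover transmission slots."""
    def __init__(self, weights=(8, 6, 5, 4, 3, 2, 1)):   # illustrative weights
        self.queues = [deque() for _ in range(8)]
        self.weights = weights
        self.credits = list(weights)
        self.ptr = 0

    def enqueue(self, prio, pkt):
        self.queues[prio].append(pkt)

    def dequeue(self):
        if self.queues[0]:                       # real time always goes first
            return self.queues[0].popleft()
        for _ in range(2 * len(self.weights)):   # at most one full WRR sweep
            i = self.ptr
            if self.credits[i] > 0 and self.queues[i + 1]:
                self.credits[i] -= 1
                return self.queues[i + 1].popleft()
            self.ptr = (self.ptr + 1) % len(self.weights)
            if self.ptr == 0:
                self.credits = list(self.weights)  # start a new WRR round
        return None                              # all queues empty
```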
Time-selectivity in fast fading channels has been one of the major obstacles to attaining high efficiency and diversity gain with space-time codes. Orthogonal frequency-division multiplexing (OFDM) combined with space-time trellis codes was introduced to combat the effects of frequency-selectivity. Subsequently, space-time block codes, space-frequency codes, and concatenations with various error-correcting codes were investigated. However, for multiple-input multiple-output (MIMO) OFDM systems, the design of high-efficiency, robust space-time-frequency codes remains a major open problem. In this paper, we present a pairwise error probability analysis of space-frequency-time block codes and derive a general design criterion for space-time-frequency block codes by exploiting the frequency-correlation features of sub-channels in mobile OFDM systems. We then investigate the performance of space-frequency block codes for MIMO-OFDM systems with four transmit antennas, and compare space-frequency block codes against space-time block codes for different numbers of transmit antennas. Link-level simulations on COSSAP® verify the analytical results. The results show that SFBC is superior to STBC in fast fading channels, and that the power-delay profile of the fading channel, i.e., the channel order and delay spread, must be taken into account in the design of space-time-frequency codes.
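For orientation, the simplest SFBC instance is the two-antenna Alamouti code mapped across adjacent subcarriers rather than across time slots; it relies on the two subcarriers' channels being nearly equal, which is exactly the frequency-correlation assumption the design criterion exploits. The paper's four-antenna codes are more elaborate; this sketch only illustrates the space-over-frequency mapping.

```python
def alamouti_sfbc_map(symbols):
    """Map complex symbols pairwise onto two transmit antennas over two
    ADJACENT subcarriers (Alamouti over frequency instead of time)."""
    grid = {0: [], 1: []}                  # antenna index -> per-subcarrier symbols
    for i in range(0, len(symbols) - 1, 2):
        s1, s2 = symbols[i], symbols[i + 1]
        grid[0] += [s1, -s2.conjugate()]   # antenna 0 on subcarriers k, k+1
        grid[1] += [s2,  s1.conjugate()]   # antenna 1 on subcarriers k, k+1
    return grid
```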
A new scheme to control transmission power, allocate subcarriers, and choose modulation schemes for each mobile terminal in a multiuser OFDM system is investigated in this work. The proposed scheme attempts to make OFDM systems more flexible and robust to channel variations over time. The problem is decomposed into two stages. In the first stage, we maximize the minimum signal-to-noise ratio (SNR) of the subchannels subject to constraints, so that total system performance is maintained at an acceptable level. Based on the result of the first stage, the second stage chooses modulation schemes for the subchannels of each mobile terminal with the objective of maximizing the total transmission rate while satisfying each terminal's performance requirement. Simulations are conducted with a varying number of mobile terminals over a range of symbol error rates (SERs). Simulation results show that the proposed scheme works well under the operating environments considered.
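A minimal sketch of the two stages, with a greedy stand-in for the first-stage max-min optimization and illustrative SNR thresholds for the second-stage modulation choice; the paper's exact optimization and SER targets are not given in the abstract.

```python
# SNR thresholds (dB) per modulation at a target SER; values are illustrative.
QAM_THRESHOLDS = [(24.0, "64-QAM", 6), (18.0, "16-QAM", 4),
                  (11.0, "QPSK", 2), (5.0, "BPSK", 1)]

def allocate(snr, n_users):
    """Stage 1 (greedy max-min proxy): always serve the currently worst-off
    user, handing it its best remaining subcarrier. snr[u][k] is user u's
    SNR (dB) on subcarrier k."""
    n_sub = len(snr[0])
    owner, total = [None] * n_sub, [0.0] * n_users
    for _ in range(n_sub):
        u = min(range(n_users), key=lambda x: total[x])         # worst-off user
        k = max((k for k in range(n_sub) if owner[k] is None),
                key=lambda k: snr[u][k])                        # its best free subcarrier
        owner[k] = u
        total[u] += snr[u][k]
    return owner

def choose_modulation(snr_db):
    """Stage 2: densest constellation whose SNR requirement is met."""
    for th, name, bits in QAM_THRESHOLDS:
        if snr_db >= th:
            return name, bits
    return None, 0   # subcarrier unusable at this SER target
```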
The third-generation (3G) wireless network is a convergence of several types of telecommunication networks to support various wireless data services. Wireless LANs also support mobility via Mobile IP. As a result, this convergence and mobility introduce potential security vulnerabilities. In this paper, a Denial-of-Service (DoS) attack that can waste wireless resources by sending a large number of nuisance packets to the spoofed destination address of IP packets is introduced. To effectively prevent the attack, fast detection, reliability, and efficiency with small overhead are suggested as the requirements of a detection system. We propose a detector based on a Hidden Markov Model (HMM) to satisfy these requirements and to reduce the influence of the attack as quickly as possible. The generation of the HMM for the detector is discussed and the operation of the detector is described. Weighting factors and second-order Markov models are employed to improve the reliability of the detector. The proposed system is compared with an existing sequential detection approach in terms of the false alarm rate and the optimum detection time interval. Our simulation results using the ns-2 simulator show that the proposed HMM detector is reliable and, owing to its dynamic nature, fast to detect the attack.
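The standard machinery behind such a detector is the scaled forward algorithm, which scores an observation sequence against a trained HMM; the detector would flag an attack when the score under the normal-traffic model falls below a threshold. The two-state model values below are toy numbers, not the paper's trained model.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for discrete observations.
    pi: initial state probs; A: state transitions; B: emission probs."""
    n = len(pi)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    c = sum(alpha)
    ll, alpha = math.log(c), [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [B[s][o] * sum(alpha[r] * A[r][s] for r in range(n))
                 for s in range(n)]
        c = sum(alpha)
        ll, alpha = ll + math.log(c), [a / c for a in alpha]
    return ll

# Toy 2-state model: state 0 ~ normal rate, state 1 ~ flood. Observations are
# quantized per-interval packet counts (0 = low, 1 = high).
pi = [0.9, 0.1]
A  = [[0.95, 0.05], [0.10, 0.90]]
B  = [[0.8, 0.2], [0.1, 0.9]]
print(forward_log_likelihood([0, 0, 1, 1, 1, 1], pi, A, B))
```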
Wireless environments present many challenges for secure multimedia access, especially for streaming media. Varying network bandwidths and the diverse processing power and storage of receiver devices demand scalable, flexible approaches that can adapt to changing network conditions as well as device capabilities. To meet these requirements, scalable and fine-granularity-scalable (FGS) compression algorithms were proposed and widely adopted to provide scalable access to multimedia, with interoperability between different services and flexible support for receivers with different capabilities. Encryption is one of the most important security tools for protecting content from unauthorized use. If a media stream is encrypted using non-scalable cryptographic algorithms, decryption at an arbitrary bit rate to provide scalable services can hardly be accomplished; conversely, if media compressed with scalable coding must be protected and non-scalable cryptographic algorithms are used, the advantages of scalable coding may be lost. Scalable encryption techniques are therefore needed to provide scalability, or to preserve the FGS adaptation capability if the media stream is FGS-coded, and to enable intermediate processing of encrypted data without unnecessary decryption. In this paper, we give an overview of scalable encryption schemes and present a fine-grained scalable encryption algorithm. One desirable feature is its simplicity and flexibility in supporting scalable multimedia communication and multimedia content access control in wireless environments.
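The structural idea, independent of any particular cipher, is to encrypt each scalability layer under its own key so that an intermediate node can truncate enhancement layers for rate adaptation without decrypting anything. The sketch below shows that structure only; the SHA-256 counter keystream is a deliberately toy stand-in for a real cipher (e.g. AES-CTR) and must not be used for actual security, and the key-derivation labels are made up.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream (NOT secure); illustrates structure only."""
    out, ctr = bytearray(), 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def encrypt_layers(layers, master_key: bytes):
    """Encrypt each layer independently under a derived per-layer key, so
    truncating enhancement layers never breaks decryption of the rest."""
    enc = []
    for i, layer in enumerate(layers):
        k = hashlib.sha256(master_key + b"layer" + bytes([i])).digest()
        ks = keystream(k, len(layer))
        enc.append(bytes(a ^ b for a, b in zip(layer, ks)))
    return enc

# Dropping enc[2:] for rate adaptation leaves enc[0], enc[1] fully decryptable.
```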
Wireless channels are error-prone communication channels, and it is well known that multimedia data are vulnerable, to varying degrees, to transmission errors. Error control, such as forward error correction (FEC) and automatic repeat request (ARQ), and error concealment techniques have been developed to combat transmission errors for robust multimedia communications. The efficiency and effectiveness of an error recovery technique rely on the system's error detection capabilities. In this paper we review techniques proposed in the area of error detection via data hiding. Typical errors can be classified into several categories: we discuss conventional error types, review algorithms that use data hiding for transmission error detection developed over the last several years, and propose directions for future work, especially for robust wireless multimedia communication.
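A minimal sketch of the family of techniques surveyed: hide a fragile check relation inside the media itself so the decoder can localize corrupted blocks without side information. Here the parity of every pixel's upper seven bits is embedded in one LSB; real schemes embed richer signatures, and this parity rule is purely illustrative.

```python
def embed_parity(block):
    """Hide the parity of all pixels' upper 7 bits in the last pixel's LSB.
    A channel error that flips an upper bit breaks the relation, so the
    decoder can flag (and then conceal) the damaged block."""
    parity = sum(p >> 1 for p in block) & 1
    out = list(block)
    out[-1] = (out[-1] & ~1) | parity   # only one LSB is perturbed
    return out

def block_is_intact(block):
    """Recompute the hidden relation at the receiver."""
    return (sum(p >> 1 for p in block) & 1) == (block[-1] & 1)
```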
Multiuser detection constitutes a class of advanced interference mitigation techniques for increasing the capacity of CDMA communication systems. Thus far, most work has been carried out under a Gaussian noise assumption for analytical convenience, yet the physical noise encountered in real-life channels is impulsive and decidedly non-Gaussian. Since Gaussian signal processing schemes can perform poorly in impulsive noise, the applicability and performance of such multiuser detectors in realistic channels become strongly questionable.
This paper addresses the development of non-Gaussian techniques for CDMA communications, first by examining the performance degradation of linear Gaussian-based multiuser detectors in impulsive noise, and then by presenting a series of nonlinear techniques that yield more robust performance. A common approach to linear adaptive interference suppression in direct-sequence CDMA is based on the Least Mean Square (LMS) or Recursive Least Squares (RLS) algorithms, which adaptively capture the cyclo-stationarity of multiple access interference (MAI), mostly under the minimum mean squared error (MMSE) criterion. Under impulsive noise, however, the performance of the conventional RLS algorithm deteriorates substantially; a robust algorithm based on a nonlinear RLS is therefore suggested, yielding a modified CDMA receiver structure.
Simulation results are presented to demonstrate that the proposed modified nonlinear RLS algorithm significantly outperforms the conventional RLS algorithm whilst it maintains comparable performance in Gaussian channels.
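One plausible instance of the nonlinear-RLS family is ordinary RLS with the innovation passed through a bounded (Huber-style) nonlinearity, so a single noise impulse cannot swing the filter weights. This is a generic robust-RLS sketch, not necessarily the paper's exact receiver; the forgetting factor and clipping threshold are illustrative.

```python
import numpy as np

def huber(e, delta=1.0):
    """Clipped error: linear for small errors, bounded for impulses."""
    return np.clip(e, -delta, delta)

class RobustRLS:
    """RLS with a Huber-clipped innovation for impulsive-noise channels."""
    def __init__(self, n_taps, lam=0.99, delta=1.0):
        self.w = np.zeros(n_taps)
        self.P = np.eye(n_taps) * 100.0   # large initial inverse correlation
        self.lam, self.delta = lam, delta

    def update(self, x, d):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # gain vector
        e = d - self.w @ x                     # innovation
        self.w += k * huber(e, self.delta)     # clipped update resists impulses
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```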
This work studies the problem of node clustering in wireless sensor networks. In this context, clustering is the process of electing leader nodes and partitioning the remaining nodes among the leaders. Clustering is needed in tasks and applications for sensor networks that require some form of locally centralized processing. This work proposes a graphical model that represents node clustering as a graph cutting problem, and the model is considered in different application contexts. In order to deal with a reduced number of vertices and edges, a semi-dynamic clustering strategy is proposed: new clusters are formed over a priori defined sensor partitions, and existing clusters are split into subclusters. A cluster, or a portion of one, is then viewed as a macronode. Finally, a heuristic implementation of these ideas is applied to a multiple-target tracking scenario, in particular to the tasks of multiple-target counting and localization.
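To make the elect-then-partition structure concrete, here is a minimal greedy stand-in: pick the highest-degree nodes as leaders, then partition the rest by hop distance via multi-source BFS. The paper casts this as graph cutting; this sketch only illustrates the two phases, not the paper's heuristic.

```python
from collections import deque

def cluster(adjacency, n_leaders):
    """adjacency: {node: [neighbors]}. Returns {node: its cluster leader}."""
    # Phase 1: elect leaders (here, by highest degree; purely illustrative).
    leaders = sorted(adjacency, key=lambda v: -len(adjacency[v]))[:n_leaders]
    owner = {l: l for l in leaders}
    # Phase 2: partition remaining nodes by multi-source BFS from all leaders.
    frontier = deque(leaders)
    while frontier:
        v = frontier.popleft()
        for u in adjacency[v]:
            if u not in owner:
                owner[u] = owner[v]   # inherit the first-reaching leader
                frontier.append(u)
    return owner
```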
Edge sensor detection is often used to identify regions affected by various factors in wireless sensor networks. In this research, a statistical methodology based on distributed detection theory and the Neyman-Pearson criterion is developed for edge sensor detection. The input sensor statistics are assumed to be independent and identically distributed in our framework. Edge regions and sensors are determined using a hypothesis test, and the observation model for each hypothesis is derived. A sub-optimal distributed detection scheme, which is optimal among detectors using the same test at all local sensors, is described, together with the way the optimal operating point is chosen. The condition under which the proposed scheme outperforms the optimum detector based on a single sensor is presented. Furthermore, the effect of noisy channels is considered, and a method to overcome it is addressed. The performance of the proposed distributed edge sensor detection scheme is studied via computer simulation, where ROC curves demonstrate the tradeoff between cost (in terms of sensor density) and detection accuracy.
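When all local sensors run the same binary test, a natural fusion rule is counting: declare an edge when at least t of n sensors vote 1, with t chosen so the global false-alarm rate meets the Neyman-Pearson constraint. The sketch below computes that threshold from the binomial tail; the numeric values are illustrative, and the paper's operating-point choice may additionally randomize at the boundary.

```python
from math import comb

def binom_tail(n, k, p):
    """P[X >= k] for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def counting_rule_threshold(n_sensors, pf_local, alpha):
    """Smallest vote threshold t whose global false-alarm rate is <= alpha,
    given each sensor's local false-alarm probability pf_local."""
    for t in range(n_sensors + 1):
        if binom_tail(n_sensors, t, pf_local) <= alpha:
            return t
    return n_sensors + 1   # no feasible threshold at this alpha

print(counting_rule_threshold(n_sensors=20, pf_local=0.1, alpha=0.01))
```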
We consider an AWGN multiple access channel in which two correlated senders transmit information to a common receiver. To reduce the required transmission power, each of the senders is encoded independently using a standard turbo code. The correlation model is not assumed to be known at the encoder. The decoder consists of two turbo decoders, each associated with a different sender, which exchange extrinsic information to exploit the correlation between the senders. The resulting performance is close to the theoretical limits obtained when separation between source and channel coding is assumed. This holds even when the correlation between senders and the noise variance are unknown at the decoder, since they can be estimated jointly with the decoding process with little performance degradation.