Unmanned Aerial Vehicle (UAV)-powered Wireless Sensor Networks (WSNs) are considered a promising solution to the limited battery power of Sensor Nodes (SNs). In this paper, we introduce a UAV-powered WSN system in which multiple UAVs act as remote charging stations. To optimize overall power efficiency, we design collaborative trajectories for the UAVs. To solve this trajectory-planning problem, we first model the service process as a Markov Decision Process (MDP), and then propose a Multi-Agent Deep Reinforcement Learning (MADRL) algorithm named Modified Multi-Agent Deep Deterministic Policy Gradient (M2DDPG), which is trained centrally and executed in a decentralized manner. Simulation results demonstrate the validity, efficiency, and superior performance of the proposed M2DDPG algorithm compared with the baseline algorithm.
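The centralized-training, decentralized-execution pattern mentioned above can be sketched minimally as follows. This is an illustrative skeleton, not the paper's implementation: the dimensions, the linear actor/critic, and all variable names (`Actor`, `CentralCritic`, `OBS_DIM`, etc.) are assumptions chosen for brevity, and the learning updates themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: per-UAV observation and action dimensions, number of UAVs.
OBS_DIM, ACT_DIM, N_AGENTS = 4, 2, 3

class Actor:
    """Decentralized actor: maps one UAV's local observation to its flight action."""
    def __init__(self):
        self.W = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))
    def act(self, obs):
        return np.tanh(self.W @ obs)  # bounded continuous action

class CentralCritic:
    """Centralized critic: scores the JOINT observations and actions of all UAVs,
    which is only needed during training, not at execution time."""
    def __init__(self):
        dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.w = rng.normal(scale=0.1, size=dim)
    def q(self, joint_obs, joint_act):
        x = np.concatenate([*joint_obs, *joint_act])
        return float(self.w @ x)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Execution is decentralized: each UAV acts on its own observation only.
obs = [rng.normal(size=OBS_DIM) for _ in range(N_AGENTS)]
acts = [a.act(o) for a, o in zip(actors, obs)]

# Training is centralized: the critic evaluates every agent's obs/action jointly.
q_value = critic.q(obs, acts)
print(len(acts), acts[0].shape, type(q_value))
```

In the full MADDPG-style algorithm, each actor would be updated by the deterministic policy gradient through this joint critic, and the critic by temporal-difference learning on stored transitions.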
Edge computing and network slicing are two key technologies for reducing communication latency and improving network flexibility in the Fog Radio Access Network (F-RAN). Because of the massive number of potential offloading decisions, in this paper we develop a joint computation offloading and resource allocation strategy to minimize the total energy consumption of the cloud-edge system. To meet the Quality of Service (QoS) requirements of different devices, two different Radio Access Network (RAN) slices are designed. In addition, to address the curse of dimensionality caused by the explosive growth in the number of UEs, we propose a Deep Q-Network (DQN) algorithm that uses value-function approximation to compress the state dimension. Moreover, to reduce the algorithm's complexity, the problem is decomposed into two subproblems: the joint radio resource allocation and Fog Access Point (FAP) selection problem, and the cloud-side task forwarding problem, which are solved by the DQN and a greedy algorithm, respectively. Simulations demonstrate that the proposed method effectively reduces total system energy consumption and shortens convergence time.
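The cloud-side greedy step described above can be illustrated with a minimal sketch. All numbers and names here (`task_cycles`, `power_per_cycle`, the load-penalty weight) are hypothetical placeholders, not values from the paper; the sketch only shows the greedy pattern of assigning each forwarded task to the server with the lowest incremental energy cost.

```python
import numpy as np

rng = np.random.default_rng(1)

N_TASKS, N_SERVERS = 6, 3
task_cycles = rng.uniform(1e8, 5e8, size=N_TASKS)    # CPU cycles per task (illustrative)
power_per_cycle = np.array([1.2e-9, 0.9e-9, 1.5e-9])  # J/cycle per cloud server (illustrative)
load = np.zeros(N_SERVERS)                            # energy already committed per server

assignment = []
for cycles in task_cycles:
    # Greedy rule: pick the server whose incremental energy for this task is
    # lowest, lightly penalized by its current load to spread work out.
    cost = cycles * power_per_cycle + 0.1 * load
    best = int(np.argmin(cost))
    load[best] += cycles * power_per_cycle[best]
    assignment.append(best)

total_energy = float(load.sum())
print(assignment, total_energy)
```

A greedy pass like this runs in O(tasks x servers) time, which is why it pairs well with the DQN handling the harder radio-side subproblem.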