
Attention Mechanism-Aided Deep Reinforcement Learning for Dynamic Edge Caching

journal contribution
posted on 2024-03-15, 04:47 authored by Z Teng, J Fang, H Yang, L Yu, H Chen, Wei Xiang
The dynamic mechanism of joint proactive caching and cache replacement, which places content items close to cache-enabled edge devices ahead of time until they are requested, is a promising technique for enhancing traffic offloading and relieving heavy network loads. However, given limited edge cache capacity and wireless transmission resources, accurately predicting users’ future requests and performing dynamic caching is crucial to using these limited resources effectively. This paper investigates joint proactive caching and cache replacement strategies in a general mobile edge computing (MEC) network with multiple users under a cloud-edge-device collaboration architecture. The joint optimization problem is formulated as a Markov decision process (MDP) with an infinite-horizon average network load cost, aiming to reduce network load traffic while efficiently utilizing the limited available transmission resources. To address this problem, we design an Attention Weighted Deep Deterministic Policy Gradient (AWD2PG) model, which uses attention weights to allocate the number of channels from server to user, and applies deep deterministic policies on both the user and server sides for caching decisions, thereby reducing the network traffic load and improving network and cache resource utilization. We verify the convergence of the corresponding algorithms and demonstrate the effectiveness of the proposed AWD2PG strategy against benchmarks in reducing network load and improving the cache hit rate.
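The paper's actual attention network and DDPG training loop are detailed in the full text; purely as a loose illustration of the channel-allocation idea the abstract describes (attention weights splitting a fixed channel budget among users), here is a minimal sketch in which all names and the score-to-weight mapping are hypothetical assumptions, not the authors' method:

```python
import math

def attention_channel_allocation(user_scores, total_channels):
    """Split a fixed channel budget among users via softmax attention weights.

    `user_scores` is a hypothetical list of per-user relevance scores
    (e.g. the output of an attention layer over user request features).
    Returns the normalized weights and an integer channel allocation.
    """
    # Numerically stable softmax over the scores.
    m = max(user_scores)
    exps = [math.exp(s - m) for s in user_scores]
    norm = sum(exps)
    weights = [e / norm for e in exps]

    # Proportional allocation, rounded down; leftover channels are given
    # to the users with the largest fractional remainders so the budget
    # is used exactly.
    raw = [w * total_channels for w in weights]
    alloc = [int(r) for r in raw]
    leftover = total_channels - sum(alloc)
    by_remainder = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i],
                          reverse=True)
    for i in by_remainder[:leftover]:
        alloc[i] += 1
    return weights, alloc
```

In an actor-critic setting such as DDPG, scores like these would come from a learned network and the resulting allocation would shape the caching agent's action space; the rounding scheme here is only one simple way to keep the allocation integral.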

History

Publication Date

2024-03-15

Journal

IEEE Internet of Things Journal

Volume

11

Issue

6

Pagination

10197-10213

Publisher

Institute of Electrical and Electronics Engineers

ISSN

2327-4662

Rights Statement

© 2023 The Authors. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/