Distributed Deep Reinforcement Learning-Based Energy Efficiency Maximization in 3D Cellular Networks
Original title (Korean): 3차원 셀룰러 네트워크기법에서 분산 심층강화학습 기반 에너지 효율 최대화

Article Information

Type
Academic journal
Authors
Seungmin Lee (Hankyong National University), Tae-Won Ban (Gyeongsang National University), Howon Lee (Hankyong National University)
Journal
The Journal of Korean Institute of Communications and Information Sciences, Korea Institute of Communication Sciences, Vol. 48, No. 8 (KCI-accredited, SCOPUS-indexed)
Published
August 2023
Pages
942–949 (8 pages)
DOI
10.7840/kics.2023.48.8.942



Abstract

In this paper, we consider multiple unmanned aerial vehicle base station (UBS)-based 3D cellular networks that provide air-to-ground (A2G) communication coverage to moving ground users. In particular, to alleviate the short network lifetime of UBS networks, we aim to control the movement and transmission power of each UBS so as to maximize network-wide energy efficiency. However, in a dynamic environment where ground users move, deriving the optimal solution to this problem with existing iterative or optimization methods is extremely difficult. Therefore, we propose a distributed deep Q-network (DQN)-based UBS control method. In addition, to demonstrate the advantages of distributed learning, we consider two centralized learning methods, multi-agent distributed Q-learning (MD-QL) and greedy action (GA), as benchmarks. Finally, we verify that the proposed method outperforms these conventional methods across varying ground-user movement speeds and numbers of UBSs.
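To make the distributed-DQN idea concrete, the sketch below shows one UBS running its own epsilon-greedy agent with a tiny Q-network. This is a minimal illustration only: the state dimension, the discrete action set (movement direction × power level), the network architecture, and all hyperparameters are assumptions for the example, not the formulation or settings used in the paper, and only the output layer is trained here for brevity (a real DQN backpropagates through all layers and uses replay and target networks).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-UBS setup: 6 movement directions x 2 power levels = 12 actions,
# and a 6-dimensional local state (e.g., UBS 3D position plus user statistics).
N_ACTIONS = 12
STATE_DIM = 6

class TinyDQN:
    """Single-hidden-layer Q-network trained on the TD error of the chosen action.
    A sketch of the general DQN mechanism, not the paper's architecture."""

    def __init__(self, state_dim, n_actions, hidden=32, lr=1e-2):
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def q(self, s):
        h = np.maximum(0.0, s @ self.W1 + self.b1)  # ReLU hidden layer
        return h, h @ self.W2 + self.b2             # hidden activations, Q-values

    def update(self, s, a, target):
        h, qvals = self.q(s)
        td = qvals[a] - target
        # Gradient step on 0.5 * td^2 for the output layer only (hidden layer
        # frozen to keep the sketch short).
        self.W2[:, a] -= self.lr * td * h
        self.b2[a] -= self.lr * td
        return td

def act(net, state, eps=0.1):
    """Epsilon-greedy action selection: explore with probability eps,
    otherwise take the action with the highest estimated Q-value."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(net.q(state)[1]))
```

In the distributed setting, each UBS would hold its own `TinyDQN` instance and update it from locally observed rewards (e.g., a shared network-wide energy-efficiency signal), which is what distinguishes the paper's approach from the centralized MD-QL and GA benchmarks.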

Contents

Abstract (Korean)
ABSTRACT
Ⅰ. Introduction
Ⅱ. System and Channel Model
Ⅲ. Distributed DQN-Based UBS Control Scheme
Ⅳ. Experimental Results and Analysis
Ⅴ. Conclusion
References

References (13)


UCI(KEPA) : I410-ECN-0102-2023-567-001963064