Enhancement of Spectral Efficiency and Interference Reduction in D2D Communication

Authors

  • Hemavathi
  • Kalpana R
  • T. N. Vishalakshi
  • S. Thenmozhi Rayan
  • Anita Patil

DOI:

https://doi.org/10.22399/ijcesen.3288

Keywords:

D2D, Spectral Efficiency, Machine Learning, Interference, Deep Neural Network

Abstract

Device-to-Device (D2D) communication is one of the most promising techniques for next-generation wireless networks, including 5G and beyond. This work aims to minimize resource waste in 5G D2D communication by maximizing spectral efficiency while limiting interference to the underlying cellular network. Direct D2D transmission improves network performance by reducing latency; however, allocating resources efficiently while keeping interference between D2D links and cellular users low is challenging. To address this, a machine learning approach centered on a Random Forest Regressor is adopted: the model is trained on simulated data to estimate the best resource block allocation. The main parameters considered are data rate, bandwidth, interference level, and transmission power. Additional computations of spectral efficiency and interference cost drive the optimization, allowing resource allocation to be adjusted to maximize throughput. Graphical representations illustrate the relationships between spectral efficiency, bandwidth, and interference cost. Overall, the proposed algorithm effectively enhances resource utilization in 5G D2D communication, and the trade-off between spectral efficiency and interference helps optimize network performance.
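The approach described above can be illustrated with a minimal sketch. This is not the authors' exact pipeline; the feature ranges, noise floor, and scoring rule are illustrative assumptions. It trains a Random Forest Regressor on simulated link features (data rate demand, bandwidth, interference power, transmit power) with the Shannon spectral efficiency log2(1 + SINR) as the target, then scores two candidate resource blocks for a new D2D link and selects the one with the highest predicted efficiency.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500

# Simulated features: data rate demand (Mbps), bandwidth (MHz),
# interference power (mW), transmit power (mW) -- ranges are assumptions.
X = np.column_stack([
    rng.uniform(1, 100, n),    # data rate demand
    rng.uniform(1, 20, n),     # bandwidth
    rng.uniform(0.01, 1, n),   # interference power at the receiver
    rng.uniform(1, 100, n),    # transmit power
])

noise_mw = 0.1  # assumed noise floor
sinr = X[:, 3] / (X[:, 2] + noise_mw)
# Target: achievable spectral efficiency (bit/s/Hz), Shannon bound.
y = np.log2(1.0 + sinr)

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

# Score two candidate resource blocks for a new D2D link and pick
# the one with the highest predicted spectral efficiency.
candidates = np.array([
    [50.0, 10.0, 0.5, 20.0],   # RB 0: heavier co-channel interference
    [50.0, 10.0, 0.05, 20.0],  # RB 1: lighter co-channel interference
])
pred = model.predict(candidates)
best_rb = int(np.argmax(pred))
print(f"predicted SE per RB: {pred}, chosen RB: {best_rb}")
```

In this toy setting the low-interference block wins, mirroring the paper's trade-off: the allocator trades interference cost against spectral efficiency per resource block.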

References

[1] Q. N. Tran, N. S. Vo, Q. A. Nguyen, M. P. Bui, T. M. Phan, V. V. Lam and A. Masaracchia, “D2D Multi-hop Multi-path Communications in B5G Networks: A Survey on Models, Techniques, and Applications,” EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, vol. 7, no. 25, pp. 1-14, 2021. https://doi.org/10.4108/eai.7-1-2021.167839

[2] P. K. Malik, D. S. Wadhwa and J. S. Khinda, “A Survey of Device to Device and Cooperative Communication for the Future Cellular Networks,” International Journal of Wireless Information Networks, vol. 27, pp. 411-432, 2020. https://doi.org/10.1007/s10776-020-00482-8

[3] D. Prerna, R. Tekchandani and N. Kumar, “Device-to-device content caching techniques in 5G: A taxonomy, solutions, and challenges,” Computer Communications, vol. 153, pp. 48-84, 2020. https://doi.org/10.1016/j.comcom.2020.01.057

[4] M. N. Tehrani, M. Uysal and H. Yanikomeroglu, “Device-to-device communication in 5G cellular networks: challenges, solutions, and future directions,” IEEE Communications Magazine, vol. 52, no. 5, pp. 86-92, 2014. doi: 10.1109/MCOM.2014.6815897.

[5] R.Y. Chang, “D2D with Energy Harvesting Capabilities,” Wiley 5G Ref: The Essential 5G Reference Online, pp. 1-20, 2019. https://doi.org/10.1002/9781119471509.w5GRef19.

[6] P. Chandrakar, R. Bagga, Y. Kumar, S. K. Dwivedi and R. Amin, “Blockchain based security protocol for device-to-device secure communication in internet of things networks,” Security and Privacy, vol. 6, no. 1, pp. e267, 2023. doi:10.1002/spy2.267.

[7] J. M. H. Chow, Y. Zhou, and R. K. Gupta, "Machine Learning Approaches for D2D Communication in 5G Networks," IEEE Wireless Communications Letters, vol. 10, no. 5, pp. 1125-1129, 2021.

[8] D. Cotton and Z. Chaczko, “GymD2D: A device-to-device underlay cellular offload evaluation platform,” in IEEE Wireless Communications and Networking Conference (WCNC), 2021, pp. 1-7.

[9] L. Liang, G. Y. Li and W. Xu, “Resource allocation for D2D-enabled vehicular communications,” IEEE Transactions on Communications, vol. 65, pp. 3186-3197, 2017.

[10] Z. Li and C. Guo, “Multi-agent deep reinforcement learning based spectrum allocation for D2D underlay communications,” IEEE Transactions on Vehicular Technology, vol. 69, no. 2, pp. 1828-1840, 2019.

[11] M. Najla, Z. Becvar and P. Mach, “Reuse of multiple channels by multiple D2D pairs in dedicated mode: A game theoretic approach,” IEEE Transactions on Wireless Communications, vol. 20, pp. 4313-4327, 2021.

[12] C. Kai, Y. Wu, M. Peng and W. Huang, “Joint uplink and downlink resource allocation for NOMA-enabled D2D communications,” IEEE Wireless Communications Letters, vol. 10, pp. 1247-1251, 2021.

[13] H. Van Hasselt, A. Guez and D. Silver, “Deep reinforcement learning with double Q-learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, pp. 2094-2100, 2016.

[14] D. Wang, H. Qin, B. Song, X. Du and M. Guizani, “Resource allocation in information-centric wireless networking with D2D-enabled MEC: A deep reinforcement learning approach,” IEEE Access, vol. 7, pp. 114935-114944, 2019.

[15] H. V. Vu, Z. Liu, D. H. N. Nguyen, R. Morawski and T. Le-Ngoc, “Multi-agent reinforcement learning for joint channel assignment and power allocation in platoon-based C-V2X systems,” arXiv:2011.04555, 2020.

Published

2025-07-06

How to Cite

Hemavathi, Kalpana R, T. N. Vishalakshi, S. Thenmozhi Rayan, & Anita Patil. (2025). Enhancement of Spectral Efficiency and Interference Reduction in D2D Communication. International Journal of Computational and Experimental Science and Engineering, 11(3). https://doi.org/10.22399/ijcesen.3288

Section

Research Article