AI-Driven Data Placement in Distributed Storage: A Reinforcement Learning Framework for Workload Orchestration and QoS Optimization
DOI: https://doi.org/10.22399/ijcesen.4316

Keywords: Reinforcement learning, Distributed storage optimization, Workload characterization, Quality of service orchestration, Adaptive resource management

Abstract
Modern distributed storage systems face the increasingly complex task of placing data across heterogeneous resources without compromising performance or quality of service (QoS). Traditional data placement strategies rely on static policies that cannot adapt to dynamic workload patterns and continuously changing system conditions. This article presents a holistic reinforcement learning framework for intelligent data placement in distributed storage. The framework combines a deep reinforcement learning-based placement engine with state-of-the-art policy optimization, a workload characterization system built on time-series analysis and anomaly detection, and a QoS-aware orchestration layer with real-time feedback mechanisms. Initial simulations and theoretical analysis indicate substantial improvements in access latency, resource utilization, and QoS compliance over traditional placement strategies, together with markedly lower operational costs. These preliminary findings suggest that reinforcement learning-based decision making can proactively manage dynamic workloads across heterogeneous environments, bridging critical gaps in existing storage management approaches and pointing the way toward more efficient and adaptive storage systems.
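As a rough illustration of the kind of decision loop such a placement engine implements (not the article's actual engine), the sketch below substitutes tabular Q-learning for deep RL: the state is a coarse view of per-node utilization, the action selects a storage node, and the reward penalizes load-inflated access latency with a QoS bonus. The node count, latency figures, capacity, and QoS threshold are invented for the example.

```python
# Illustrative toy only: a tabular Q-learning stand-in for a DRL placement
# engine. Node count, latency model, and reward weights are assumptions made
# for this sketch and are not taken from the article.
import random
from collections import defaultdict

NUM_NODES = 3                      # hypothetical storage nodes
LATENCY = [1.0, 2.0, 4.0]          # assumed base access latency per node (ms)
CAPACITY = 10                      # assumed comfortable object count per node

def discretize(utilization):
    """Map per-node utilization (0..1) into coarse buckets to form a state key."""
    return tuple(min(int(u * 4), 3) for u in utilization)

def reward(node, load):
    """Assumed QoS-aware reward: penalize latency inflated by the node's load."""
    effective_latency = LATENCY[node] * (1.0 + load[node] / CAPACITY)
    qos_bonus = 1.0 if effective_latency <= 3.0 else -1.0   # toy QoS target
    return -effective_latency + qos_bonus

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(lambda: [0.0] * NUM_NODES)
    for _ in range(episodes):
        load = [0] * NUM_NODES
        for _ in range(20):                     # place 20 objects per episode
            state = discretize([l / CAPACITY for l in load])
            if random.random() < epsilon:
                action = random.randrange(NUM_NODES)          # explore
            else:
                action = max(range(NUM_NODES), key=lambda a: q[state][a])
            r = reward(action, load)
            load[action] += 1
            next_state = discretize([l / CAPACITY for l in load])
            # Standard Q-learning update toward the bootstrapped target.
            target = r + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
    return q

if __name__ == "__main__":
    q_table = train()
    empty = discretize([0.0] * NUM_NODES)
    best = max(range(NUM_NODES), key=lambda a: q_table[empty][a])
    print("Preferred node for an empty cluster:", best)
```

In this toy setting the learned policy spreads objects away from slow or loaded nodes, which is the basic behavior the framework's DRL engine is described as achieving at scale with richer state, workload forecasts, and QoS feedback.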
License
Copyright (c) 2025 International Journal of Computational and Experimental Science and Engineering

This work is licensed under a Creative Commons Attribution 4.0 International License.