Distributed Machine Learning Algorithms: A Comprehensive Review
DOI: https://doi.org/10.54809/galla.2025.003

Abstract
Artificial intelligence has expanded significantly over the last decade, driven by growing user demand, and has made significant advances in handling complex tasks. Processing and analyzing the resulting volumes of data is time-consuming and requires substantial computational resources. To address these limitations, distributed machine learning (DML) has emerged as an effective solution, enabling the parallelization of tasks by distributing data, models, or both across multiple servers. This review paper broadly examines the strategies and methodologies used in DML, with special emphasis on data parallelism and model parallelism. These methods significantly improve scalability and computational efficiency, accelerating AI advancements across sectors such as autonomous driving, healthcare, and recommendation systems. Additionally, the paper offers an extensive overview of the crucial DML algorithms and frameworks, exploring their advantages, practical uses, and limitations. Finally, the paper identifies important open issues, such as security and communication overhead, and recommends future research directions toward DML systems that are more reliable, scalable, and effective.
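To make the data-parallelism idea concrete, the sketch below (not taken from the paper; a minimal illustration, with hypothetical names such as NUM_WORKERS and local_gradient) simulates synchronous data-parallel training of a linear least-squares model: each worker computes a gradient on its own shard of the data, and the shard gradients are averaged before a single synchronized parameter update.

```python
import numpy as np

# Minimal sketch of synchronous data parallelism (illustrative, not from
# the paper). Each "worker" holds one shard of the dataset, computes a
# local gradient, and the coordinator averages the gradients before
# applying one shared update. Workers run sequentially here; a real DML
# system would execute them in parallel and all-reduce the gradients.

NUM_WORKERS = 4       # hypothetical cluster size
LEARNING_RATE = 0.1

def local_gradient(w, X_shard, y_shard):
    """Gradient of 0.5 * ||X w - y||^2 / n on one worker's data shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                 # full dataset
true_w = rng.normal(size=5)
y = X @ true_w + 0.01 * rng.normal(size=400)  # targets with small noise

# Data parallelism: partition the dataset across workers.
X_shards = np.array_split(X, NUM_WORKERS)
y_shards = np.array_split(y, NUM_WORKERS)

w = np.zeros(5)
for step in range(200):
    # Each worker computes a gradient on its shard.
    grads = [local_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    # All-reduce step: average the shard gradients, then one global update.
    w -= LEARNING_RATE * np.mean(grads, axis=0)

print("recovered weights close to truth:", np.allclose(w, true_w, atol=1e-2))
```

With equal-sized shards, the average of the shard gradients equals the full-batch gradient, which is why synchronous data parallelism reproduces the sequential algorithm's trajectory while spreading the computation across workers; model parallelism instead partitions the parameters themselves when a model is too large for one machine.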