Distributed Machine Learning Algorithms: A Comprehensive Review

Authors

  • Nashma Taha Muhammed
  • Sarkar Hasan Ahmed

DOI:

https://doi.org/10.54809/galla.2025.003

Abstract

Artificial intelligence has expanded significantly over the last decade, driven by user demand, and has made major advances in managing complex tasks, generating enormous volumes of data in the process. Processing and analyzing such huge amounts of data is time-consuming and requires substantial computational resources. To address these limitations, distributed machine learning (DML) has emerged as an effective solution, enabling the parallelization of tasks by distributing data, models, or both across multiple servers. This review paper broadly examines the strategies and methodologies used in DML, with special emphasis on data parallelism and model parallelism. These methods significantly improve scalability and computational efficiency, accelerating AI advancements across sectors such as autonomous driving, healthcare, and recommendation systems. Additionally, the paper offers an extensive overview of the crucial DML algorithms and frameworks, exploring their advantages, practical uses, and limitations. Furthermore, the paper identifies and discusses important issues, such as security concerns and communication overhead, and makes recommendations for future research directions to create DML systems that are more reliable, scalable, and effective.
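To make the data-parallelism idea concrete, here is a minimal, hypothetical sketch (not from the paper): each simulated "worker" computes the gradient on its own data shard, the gradients are averaged in an all-reduce style, and a shared model parameter is updated. The model, dataset, and function names are illustrative assumptions.

```python
# Hypothetical sketch of data parallelism for 1-D linear regression (y = w * x).
# Each worker holds a data shard; gradients are averaged as in an all-reduce.

def shard_gradient(w, shard):
    # Gradient of the mean-squared-error loss w.r.t. w on one worker's shard.
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def data_parallel_step(w, shards, lr=0.01):
    # Simulated all-reduce: average per-worker gradients, then update the model.
    grads = [shard_gradient(w, s) for s in shards]
    avg_grad = sum(grads) / len(grads)
    return w - lr * avg_grad

# Toy dataset for the true relation y = 3x, split across two workers.
data = [(x, 3 * x) for x in range(1, 9)]
shards = [data[:4], data[4:]]

w = 0.0
for _ in range(200):
    w = data_parallel_step(w, shards)

print(round(w, 2))  # converges toward 3.0
```

In a real DML framework the averaging step would be a network collective (e.g. all-reduce) across machines rather than a local loop, which is precisely where the communication overhead discussed in the paper arises.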

Published

2026-01-25

Issue

Section

Articles