Yandex Launches YaFSDP

Yandex has unveiled YaFSDP, a groundbreaking open-source tool designed to enhance the training efficiency of large language models (LLMs).

According to Yandex, the method is the most effective publicly available tool for improving GPU communication and reducing memory usage during LLM training.

With the potential to speed up training by up to 26%, YaFSDP promises significant cost savings and performance enhancements for developers and companies.

Key Benefits of YaFSDP

  • Improved Efficiency: YaFSDP reduces GPU communication inefficiencies and optimizes memory usage, keeping GPUs busy computing rather than idling while waiting on data exchange. This results in a speedup of up to 26% compared to FSDP, depending on the architecture and number of parameters.
  • Cost Savings: By decreasing the training time and GPU resources needed, YaFSDP can save developers and companies hundreds of thousands of dollars monthly. For instance, in scenarios involving models with 70 billion parameters, it can save the resources of approximately 150 GPUs, translating to potential savings of $0.5 to $1.5 million per month.
  • Open-Source Accessibility: YaFSDP is freely available on GitHub, enabling machine learning engineers and companies worldwide to benefit from its efficiency improvements.

Contributions to the Global AI Community

Yandex has made YaFSDP accessible to the global AI community as part of its ongoing commitment to support advancements in machine learning.

The tool’s release follows a series of successful open-source projects by Yandex, including CatBoost, YTsaurus, AQLM, and Petals, which have all gained popularity among ML professionals.

Mikhail Khruschev, a senior developer at Yandex, emphasized the team’s dedication to improving LLM training methods.

“Currently, we’re actively experimenting with various model architectures and parameter sizes to expand YaFSDP’s versatility. We are thrilled to share our developments in LLM training with the global ML community, contributing to increased accessibility and efficiency for researchers and developers worldwide.”

YaFSDP’s Technical Advantages

YaFSDP builds upon the foundation of FSDP, excelling in communication-heavy stages of LLM training such as pre-training, alignment, and fine-tuning. Its effectiveness has been demonstrated on models ranging from 13 to 70 billion parameters, particularly in the 30 to 70 billion range. This makes YaFSDP especially suitable for widely-used open-source models based on the LLaMA architecture.

A Closer Look at LLM Training Challenges

Training LLMs is a resource-intensive process, demanding substantial computing power, GPU memory, and efficient inter-GPU communication. YaFSDP addresses these challenges by improving GPU communication and optimizing memory usage, which leads to faster training times and reduced costs.
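To see why memory is such a constraint, a rough back-of-the-envelope estimate helps. The figures below assume a standard mixed-precision Adam setup (bf16 weights and gradients, fp32 master weights and optimizer states); this is an illustrative sketch, not a description of YaFSDP's internals.

```python
# Rough memory estimate for training a large model with Adam in
# mixed precision. The byte counts per parameter are the commonly
# cited ones for this setup; real frameworks vary in the details.
def training_memory_gb(n_params: float) -> dict:
    bytes_per_param = {
        "weights (bf16)": 2,
        "gradients (bf16)": 2,
        "fp32 master weights": 4,
        "Adam momentum (fp32)": 4,
        "Adam variance (fp32)": 4,
    }
    gb = {name: n_params * b / 1e9 for name, b in bytes_per_param.items()}
    gb["total"] = sum(gb.values())
    return gb

est = training_memory_gb(70e9)
# ~1,120 GB of training state for a 70B-parameter model -- far beyond
# a single 80 GB accelerator, which is why parameters, gradients, and
# optimizer states must be sharded across many GPUs.
```

This is the pressure that sharded approaches like FSDP (and YaFSDP) relieve: instead of every GPU holding a full copy of this state, each holds only a fraction of it.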

During LLM training, computations are distributed among numerous GPUs organized into clusters. Inefficient communication between these processors can become a bottleneck, slowing down the training process. YaFSDP mitigates this issue by eliminating GPU communication inefficiencies, optimizing network usage, and reducing memory load.
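The sharding pattern that FSDP-family methods use can be sketched on a single machine with plain NumPy. The worker count, array sizes, and function names below are illustrative assumptions, not the YaFSDP API: each worker persistently stores only its slice of a layer's parameters, briefly reconstructs the full layer via an all-gather before using it, and keeps only its own slice of the summed gradients via a reduce-scatter.

```python
import numpy as np

# Toy single-process simulation of FSDP-style parameter sharding.
# Names and structure are illustrative, not the YaFSDP API.
N_WORKERS = 4
full_params = np.arange(8, dtype=np.float64)  # one "layer" of 8 weights

# Each worker persistently stores only its 1/N shard of the layer.
shards = np.split(full_params, N_WORKERS)

def all_gather(shards):
    # Before a layer's forward/backward pass, workers exchange shards
    # so every worker temporarily holds the full parameter tensor.
    return np.concatenate(shards)

def reduce_scatter(per_worker_grads):
    # After backward, gradients are summed across workers, and each
    # worker keeps only the slice matching its own parameter shard.
    summed = np.sum(per_worker_grads, axis=0)
    return np.split(summed, N_WORKERS)

gathered = all_gather(shards)            # full params, held only briefly
grads = [np.ones_like(gathered) for _ in range(N_WORKERS)]
grad_shards = reduce_scatter(grads)      # each worker's gradient slice
```

In a real cluster these two collectives run over the GPU interconnect, so how they are scheduled and overlapped with computation largely determines whether communication becomes the bottleneck the article describes.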

Future Prospects

The introduction of YaFSDP marks a significant step forward in LLM training efficiency. As Yandex continues to experiment with different model architectures and parameter sizes, the tool’s versatility and applicability are expected to grow, benefiting the broader machine learning community.

For more details and to access YaFSDP, visit the GitHub repository.


Dan Taylor
Dan Taylor is an experienced SEO consultant who has worked with brands and companies on optimizing for Russia (and Yandex) for a number of years. He is the winner of the inaugural 2018 TechSEO Boost competition and webmaster at Sloth.Cloud.