Gradient releases Echo-2 RL framework to improve AI research efficiency

Odaily Planet Daily reports that the distributed AI laboratory Gradient today released Echo-2, a distributed reinforcement learning framework designed to remove the efficiency bottlenecks in AI research and post-training. By fully decoupling the Learner and Actor roles at the architecture level, Echo-2 cuts the post-training cost of a 30B model from $4,500 to $425, and under the same budget delivers over ten times the research throughput.
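To make the decoupling concrete, here is a minimal, hypothetical Python sketch of the pattern described: actors only sample rollouts against a (possibly stale) policy snapshot, the learner only consumes them and publishes new weights, and a queue between the two keeps either side from blocking the other. None of these names come from Echo-2; they are illustrative stand-ins.

```python
import queue
import random
import threading

# Minimal, hypothetical sketch of a fully decoupled Actor/Learner loop.
# All names here are illustrative stand-ins, not Echo-2's actual API.

class Policy:
    """Toy policy: one scalar weight plus a version counter."""
    def __init__(self):
        self.weight = 0.0
        self.version = 0
        self._lock = threading.Lock()

    def snapshot(self):
        with self._lock:
            return self.version, self.weight

    def update(self, reward):
        with self._lock:
            self.weight += 0.01 * reward  # stand-in for a gradient step
            self.version += 1             # lets actors tell how stale they are

rollouts = queue.Queue(maxsize=256)  # buffer that decouples sampling from learning
stop = threading.Event()

def actor_loop(policy):
    """Actors only sample: pull a snapshot, roll out, push the result.
    They never compute gradients and never block on the learner."""
    while not stop.is_set():
        version, weight = policy.snapshot()
        reward = weight + random.gauss(0, 1)  # stand-in for an env rollout
        try:
            rollouts.put((version, reward), timeout=0.1)
        except queue.Full:
            pass  # learner is behind; drop the sample and try again

def learner_loop(policy, steps=5000):
    """The learner only updates: it consumes rollouts at its own pace."""
    for _ in range(steps):
        _version, reward = rollouts.get()
        policy.update(reward)
    stop.set()

if __name__ == "__main__":
    policy = Policy()
    actors = [threading.Thread(target=actor_loop, args=(policy,)) for _ in range(4)]
    for t in actors:
        t.start()
    learner_loop(policy)
    for t in actors:
        t.join()
    print(f"final weight={policy.weight:.3f}, versions={policy.version}")
```

Because sampling and gradient updates run independently, cheap or unreliable GPUs can host the actors while the learner stays on stable hardware, which is the cost lever the announcement highlights.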

The framework applies compute-storage separation to asynchronous RL training (Async RL), offloading the heavy sampling workload onto unreliable GPU instances and heterogeneous GPUs via Parallax. Combined with techniques such as bounded staleness, fault-tolerant scheduling, and the in-house Lattica communication protocol, it substantially improves training efficiency while preserving model accuracy. Alongside the framework release, Gradient will soon launch Logits, an RL-as-a-Service (RLaaS) platform, with the aim of shifting AI research from "capital accumulation" to "efficiency iteration." Logits is now open for reservations by students and researchers worldwide at logits.dev.
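"Bounded staleness" is the mechanism that makes this asynchrony safe for accuracy: each rollout carries the version of the policy that produced it, and the learner rejects samples that lag the current policy by more than a fixed bound. A minimal illustration follows; the bound's value and the function name are assumptions, not Echo-2's actual interface.

```python
MAX_STALENESS = 4  # assumed bound; Echo-2's real threshold is not public

def admit_rollout(learner_version: int, rollout_version: int) -> bool:
    """Bounded-staleness filter: accept a rollout only if the policy that
    generated it is at most MAX_STALENESS versions behind the learner."""
    return learner_version - rollout_version <= MAX_STALENESS

# With the learner at version 10:
assert admit_rollout(10, 7)        # 3 versions stale -> train on it
assert not admit_rollout(10, 5)    # 5 versions stale -> discard
```

The bound is a throughput-versus-fidelity dial: a larger window keeps unreliable, slow, or failure-prone actors productively busy, while a smaller one keeps gradient updates closer to on-policy data.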

According to the announcement, Gradient is an AI laboratory dedicated to building distributed infrastructure, focused on the distributed training, serving, and deployment of frontier large models.
