Alibaba has released RynnBrain, a new open-source AI foundation model dedicated to embodied intelligence and robotics, as Chinese tech leaders accelerate efforts in “physical AI”.
Developed by Alibaba’s DAMO Academy, RynnBrain equips robots and smart devices to interact with the real world. It combines spatial understanding with temporal awareness, allowing machines to map objects, predict trajectories, navigate cluttered spaces such as kitchens or factory floors, and plan step-by-step actions toward task completion.
Built on Alibaba’s Qwen3-VL vision-language model, RynnBrain achieves state-of-the-art results across 16 open-source embodied-AI evaluation leaderboards. Alibaba reports that it surpasses competitors including Google’s Gemini Robotics-ER 1.5 and Nvidia’s Cosmos-Reason2.
The company has open-sourced the full series of seven models, ranging in size from 2 billion parameters up to a 30-billion-parameter mixture-of-experts architecture. Developers can access them on platforms such as Hugging Face and GitHub.
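For developers, pulling a checkpoint of this kind from Hugging Face would typically follow the standard transformers workflow. The sketch below is illustrative only: the repository ID, model class, and prompt format are assumptions, not documented usage, so the actual names should be checked against Alibaba’s Hugging Face organization.

```python
# Illustrative sketch: loading a hypothetical RynnBrain checkpoint from Hugging Face.
# The repository ID below is an assumption, not a confirmed model name.
from transformers import AutoProcessor, AutoModelForVision2Seq
from PIL import Image

model_id = "Alibaba-DAMO-Academy/RynnBrain-2B"  # hypothetical repo ID
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(model_id, trust_remote_code=True)

# Ask the model to reason about a scene, e.g. a photo of a cluttered kitchen.
image = Image.open("kitchen.jpg")
prompt = "List the steps needed to clear the countertop."
inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```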
This launch marks Alibaba’s entry into the robotics market and builds on the success of its Qwen family, one of China’s most advanced lines of AI models. It positions the firm in the growing global competition over physical AI, where software drives machines in manufacturing, logistics, hospitality, and beyond.
Imagine a warehouse robot that not only sees a box but anticipates its movement and plans a safe path around obstacles: RynnBrain enables such spatiotemporal memory and physical reasoning, capabilities that could transform automated operations.
The move highlights China’s heavy investment in robotics as a strategic priority. By open-sourcing RynnBrain, Alibaba invites broader developer contributions, potentially accelerating innovation in embodied AI worldwide.
Author: Oje. Ese
