The heart of the system is the newly released Gemini Robotics 1.5 model and its companion, Gemini Robotics-ER 1.5. Together they operate like a split brain: one model surveys the scene and digs up fresh information, while the other uses that knowledge to carry out order after order. Finding out how to compost or recycle whatever is in front of the robot is just a web search away, and the robot describes its plan in natural language before springing into action.
Robots Learning from Each Other
What truly changes the game is that robots can now transfer what they learn from one machine to another, regardless of shape or size. The ALOHA 2 arms might discover a new skill sorting socks, and within moments that expertise can carry over to a very different machine, such as the Franka robot or even Apptronik’s Apollo, which looks and moves much more like a person.
“That means we can control very different robots with a single model, including a humanoid,” Kanishka Rao, a software engineer on the team, said. He added, “Skills that are learned on one robot can now be transferred to another robot.”
Developers eager to try out these upgrades will get access to Gemini Robotics-ER 1.5 through the standard Gemini API in Google AI Studio. The more advanced Gemini Robotics 1.5 model remains limited to select partners, promising further leaps once it opens up more widely. This new ability to reason, share skills, and check the web for custom instructions signals a major step forward, making robots not just useful, but genuinely adaptable.
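For developers curious what that access might look like, here is a minimal sketch of querying the reasoning model through the Gemini API with the `google-genai` Python SDK. The model identifier `"gemini-robotics-er-1.5"` and the example prompt are assumptions for illustration; check Google AI Studio for the exact name and availability.

```python
# Hypothetical sketch: asking Gemini Robotics-ER 1.5 a disposal question,
# the kind of query the article describes the "scanning" model handling.
import os

MODEL_ID = "gemini-robotics-er-1.5"  # assumed identifier; verify in AI Studio

def build_request(instruction: str) -> dict:
    """Assemble a minimal generate-content request for the robotics model."""
    return {
        "model": MODEL_ID,
        "contents": instruction,
    }

request = build_request(
    "Which bin should this banana peel go in: compost, recycling, or trash?"
)

# Only attempt the network call when an API key is configured.
if os.environ.get("GEMINI_API_KEY"):
    from google import genai  # pip install google-genai
    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(**request)
    print(response.text)
```

In a real deployment the response would feed the second model, which translates the plan into motor commands, but that half of the loop is only available to the select partners mentioned above.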