The Cognitive Brain.
Mount the world's most powerful Vision-Language-Action (VLA) models. Plug-and-play intelligence optimized for robotic inference.
Physical Intelligence Zero (π0)
A general-purpose robotic foundation model trained on diverse physical tasks.
$ rosclaw mount model π0

Robotic Transformer 2
Google DeepMind's vision-language-action model for general robotic control.
$ rosclaw mount model rt-2

Action Chunking with Transformers
Efficient imitation learning with action chunking for smooth robot motions.
$ rosclaw mount model act

Diffusion Policy for Visuomotor Learning
State-of-the-art visuomotor policy using diffusion models for multi-modal action distributions.
$ rosclaw mount model diffusion-policy

More models coming soon. Submit yours via GitHub.