V-JEPA 2
Meta's world model for physical world understanding
# Research Tool
Featured on: Jun 13, 2025
What is V-JEPA 2?
V-JEPA 2 is Meta's new world model, trained on video to understand and predict the physical world. It enables zero-shot robot planning and achieves state-of-the-art results on visual understanding benchmarks. The model, code, and new benchmarks are now openly available.
Problem
Traditional AI models lack comprehensive video-based training, so they understand physical interactions and environments poorly, depend heavily on labeled data, and generalize weakly to new situations.
Solution
V-JEPA 2 is an open-source AI model that learns from video data to predict and plan physical-world interactions, enabling zero-shot robotic planning without task-specific training. Example: Robots can autonomously navigate dynamic environments using pre-trained visual understanding.
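One common way a learned world model supports zero-shot planning is model-predictive control in the model's latent space: encode the current observation and a goal image, then search for an action sequence whose predicted outcome lands closest to the goal. The sketch below illustrates that idea only; the `encode` and `predict` functions, the dimensions, and the cross-entropy-method search are toy stand-ins assumed for illustration, not the released V-JEPA 2 API.

```python
import numpy as np

# Placeholder dimensions and random projections so the sketch runs end to end.
# In a real system, encode() and predict() would be the pretrained world-model
# networks; these toy stand-ins only illustrate the planning loop itself.
OBS_DIM, LATENT_DIM, ACTION_DIM = 128, 64, 7
_rng = np.random.default_rng(0)
_ENC_PROJ = _rng.standard_normal((OBS_DIM, LATENT_DIM)) * 0.1
_ACT_PROJ = _rng.standard_normal((ACTION_DIM, LATENT_DIM)) * 0.1


def encode(frame):
    """Stand-in for a pretrained video encoder: observation -> latent embedding."""
    return np.tanh(frame @ _ENC_PROJ)


def predict(latent, action):
    """Stand-in for an action-conditioned predictor: rolls the latent state forward."""
    return latent + np.tanh(action @ _ACT_PROJ)


def plan_next_action(current_frame, goal_frame, horizon=5, samples=256,
                     elites=32, iterations=4):
    """Cross-entropy-method planning in latent space: sample action sequences,
    roll them out with the predictor, and keep the sequences whose predicted
    final embedding lands closest to the goal embedding."""
    z0, z_goal = encode(current_frame), encode(goal_frame)
    mean = np.zeros((horizon, ACTION_DIM))
    std = np.ones((horizon, ACTION_DIM))
    for _ in range(iterations):
        candidates = _rng.normal(mean, std, size=(samples, horizon, ACTION_DIM))
        costs = []
        for actions in candidates:
            z = z0
            for a in actions:
                z = predict(z, a)
            costs.append(np.linalg.norm(z - z_goal))  # distance to goal embedding
        elite = candidates[np.argsort(costs)[:elites]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean[0]  # execute the first action, then re-plan (MPC style)


if __name__ == "__main__":
    current = _rng.standard_normal(OBS_DIM)
    goal = _rng.standard_normal(OBS_DIM)
    print("next action:", plan_next_action(current, goal))
```

Because the cost is computed entirely in the learned embedding space, no task-specific reward function or labeled demonstrations are needed, which is what makes this style of planning "zero-shot."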
Customers
AI researchers, robotic engineers, and developers focused on autonomous systems, particularly those requiring scalable visual understanding and physical task automation.
Unique Features
Open-source, video-trained world model with zero-shot planning capabilities, state-of-the-art results on visual understanding benchmarks, and publicly released code and datasets for community-driven development.
User Comments
Sets new benchmarks in visual understanding for robotics.
Enables zero-shot adaptation in unseen environments.
Reduces reliance on labeled training data for AI systems.
Potential to transform industrial automation and autonomous vehicles.
Open-source approach accelerates AI research and deployment.
Traction
Launched as part of Meta's foundational AI research, V-JEPA 2 earned 1,600+ Product Hunt upvotes. It belongs to Meta FAIR's open-source AI portfolio and has seen adoption in academia and industry for robotics applications.
Market Size
The global AI in robotics market is projected to reach $70 billion by 2030, driven by demand for autonomous systems in manufacturing, logistics, and healthcare.