SAM 3D is helping advance the future of rehabilitation. See how researchers at Carnegie Mellon University are using SAM 3D to capture and analyze human movement in clinical settings, opening the doors to personalized, data-driven insights in the recovery process. 🔗 Learn more about SAM 3D: https://go.meta.me/40d7ab
AI at Meta
Research Services
Menlo Park, California · 1,032,726 followers
Together with the AI community, we’re pushing boundaries through open science to create a more connected world.
About us
Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction, and many other areas, and to enable the community to build safe and responsible solutions that address some of the world’s greatest challenges.
- Website
- https://ai.meta.com/
- Industry
- Research Services
- Company size
- 10,001+ employees
- Headquarters
- Menlo Park, California
- Specialties
- research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing
Updates
SAM 3’s ability to precisely detect and track objects is helping Conservation X Labs measure the survival of animal species around the world and prevent their extinction. 🔗 Learn more about the work: https://lnkd.in/g-mN5NSB
We partnered with Conservation X Labs to build the SA-FARI dataset, with 10,000+ annotated videos including over 100 species of animals. We’re sharing this dataset to help with conservation efforts around the globe. 🔗 Find it here: https://lnkd.in/eXCjmScP
The Segment Anything Playground is a new way to interact with media. Experiment with Meta’s most advanced segmentation models, including SAM 3 + SAM 3D, and discover how these capabilities can transform your creative projects and technical workflows. 🔗 Try it now: https://lnkd.in/gBfFvz3R
We’re advancing on-device AI with ExecuTorch, now deployed across devices including Meta Quest 3, Ray-Ban Meta, Oakley Meta Vanguard and Meta Ray-Ban Display. By eliminating conversion steps and supporting pre-deployment validation in PyTorch, ExecuTorch accelerates the path from research to production, ensuring consistent, efficient AI across a diverse hardware ecosystem. Read the full technical deep dive: https://lnkd.in/gjCzabnE
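For developers curious what that path looks like in practice, here is a minimal sketch of the standard ExecuTorch export flow; a toy module stands in for a real model, and the snippet follows the getting-started pattern rather than any device-specific setup:

```python
import torch
from executorch.exir import to_edge

# Toy stand-in model; any torch.nn.Module supported by torch.export works here.
model = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 16),)

# Capture the program with torch.export -- the same artifact can be
# validated in PyTorch before deployment, with no separate conversion step.
exported_program = torch.export.export(model, example_inputs)

# Lower to an ExecuTorch program and serialize it for the on-device runtime.
executorch_program = to_edge(exported_program).to_executorch()
with open("model.pte", "wb") as f:
    f.write(executorch_program.buffer)
```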
SAM 3 tackles a challenging problem in vision: unifying detection and tracking in a single model architecture. Christoph, a researcher on SAM 3, shares how the team made it possible. 🔗 Read the SAM 3 research paper: https://go.meta.me/6411f7
Introducing SAM 3D, the newest addition to the SAM collection, bringing common sense 3D understanding to everyday images. SAM 3D includes two models:
🛋️ SAM 3D Objects for object and scene reconstruction
🧑‍🤝‍🧑 SAM 3D Body for human pose and shape estimation
Both models achieve state-of-the-art performance, transforming static 2D images into vivid, accurate reconstructions. 🔗 Learn more: https://go.meta.me/40d7ab
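As a rough sketch of that single-image-to-3D workflow (every name below, including the sam3d module, load_model, and reconstruct, is a hypothetical placeholder, not the released API):

```python
from PIL import Image

# Hypothetical interface for illustration only; the actual SAM 3D package,
# loader, and method names may differ -- check the official release.
import sam3d  # hypothetical module

model = sam3d.load_model("sam-3d-objects")  # hypothetical checkpoint name
image = Image.open("living_room.jpg")

# One 2D image in, a textured 3D reconstruction out.
mesh = model.reconstruct(image)  # hypothetical method
mesh.export("living_room.glb")
```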
Meet SAM 3, a unified model that enables detection, segmentation, and tracking of objects across images and videos. SAM 3 introduces some of our most highly requested features, like text and exemplar prompts to segment all objects of a target category. Learnings from SAM 3 will help power new features in Instagram Edits and Vibes, bringing advanced segmentation capabilities directly to creators. We’re sharing SAM 3 under the SAM License so others can use it to build their own experiences. 🔗 Learn more: https://go.meta.me/699549
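To make the text-prompt idea concrete, here is a conceptual sketch of segmenting every instance of a category from a short phrase; the sam3 module and its methods are hypothetical placeholders, not the published interface:

```python
import numpy as np
from PIL import Image

# Hypothetical interface for illustration only; see the official SAM 3
# release for the real package and method names.
import sam3  # hypothetical module

model = sam3.load_model("sam3-image")  # hypothetical loader
image = np.array(Image.open("street.jpg"))

# A short noun phrase prompts segmentation of *all* matching objects,
# not just one instance per click.
masks, scores = model.segment(image, text="yellow school bus")  # hypothetical
print(f"Found {len(masks)} instances")
```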
Today we’re excited to unveil a new generation of Segment Anything Models:
1️⃣ SAM 3 enables detecting, segmenting and tracking of objects across images and videos, now with short text phrases and exemplar prompts. 🔗 Learn more about SAM 3: https://go.meta.me/699549
2️⃣ SAM 3D brings the model collection into the 3rd dimension to enable precise reconstruction of 3D objects and people from a single 2D image. 🔗 Learn more about SAM 3D: https://go.meta.me/40d7ab
These models offer innovative capabilities and unique tools for developers and researchers to create, experiment and uplevel media workflows.
Introducing Meta Omnilingual Automatic Speech Recognition (ASR), a suite of models providing ASR capabilities for over 1,600 languages, including 500 low-coverage languages never before served by any ASR system. While most ASR systems focus on a limited set of languages that are well represented on the internet, this release marks a major step toward building a truly universal transcription system. 🔗 Learn more: https://go.meta.me/ff13fa
Highlights include:
- State-of-the-art performance, with character error rates below 10% for 78% of supported languages.
- The first large-scale ASR framework with in-context learning, enabling extension to new languages with just a few audio samples.
- A full suite of open source models and a dataset, including Omnilingual w2v 2.0, a 7B-parameter multilingual speech representation model, and the Omnilingual ASR Corpus, a unique dataset spanning 350 underserved languages.
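A conceptual sketch of the in-context extension described in the second highlight, conditioning the model on a few paired examples from an unseen language; all names here (omnilingual_asr, load_model, transcribe) are hypothetical placeholders:

```python
# Hypothetical interface for illustration only; consult the official release
# for the real package and API.
from omnilingual_asr import load_model  # hypothetical

model = load_model("omnilingual-asr")  # hypothetical checkpoint name

# In-context learning: a handful of (audio, transcript) pairs condition the
# model on a language it was never explicitly trained to serve.
few_shot_examples = [
    ("sample_01.wav", "transcript of the first sample"),
    ("sample_02.wav", "transcript of the second sample"),
]
print(model.transcribe("new_language_utterance.wav", context=few_shot_examples))  # hypothetical
```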
New from Meta FAIR: Code World Model (CWM), a 32B-parameter research model designed to explore how world models can transform code generation and reasoning about code. We believe in advancing research in world modeling and are sharing CWM under a research license to empower the community to build upon our work.
➡️ Read the technical report: https://lnkd.in/gJwwqiZB
➡️ Download the open weights: https://lnkd.in/gT9UvANm
➡️ Download the code: https://lnkd.in/g7RXZbwC
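Since the weights are open, loading them should look like any other large causal LM in Hugging Face transformers. A minimal sketch, assuming a facebook/cwm repo id (the actual id may differ; verify against the links above):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/cwm"  # assumed repo id for illustration; check the release
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 32B model generally needs reduced precision
    device_map="auto",           # shard across available GPUs
)

prompt = "def fibonacci(n: int) -> int:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```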