TwelveLabs

Software Development

San Francisco, California · 16,055 followers

Building the world's most powerful video understanding platform.

About us

The world's most powerful video intelligence platform for enterprises.

Website
http://www.twelvelabs.io
Industry
Software Development
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2021


Updates

  • We are excited to announce a brand-new integration that transforms how media and entertainment organizations manage video content at scale. TwelveLabs has partnered with Frame.io to bring multimodal video intelligence directly into the collaborative review platform that creative teams already trust and rely on daily. This integration addresses critical pain points across the entire content lifecycle—from production and post-production through compliance and distribution—by embedding advanced video intelligence capabilities seamlessly into existing workflows without disrupting established practices or requiring teams to learn new tools. 🖼️ At the heart of this integration are four powerful capabilities that deliver immediate value across M&E workflows ♻️ 1️⃣ Natural language search enables editors to instantly locate specific shots by describing what they need and jump directly to exact timestamps without manually scrubbing through hours of footage. 2️⃣ Automated metadata generation populates Frame.io's custom fields with rich descriptions, thematic tags, and narrative summaries the moment assets are uploaded, ensuring consistent tagging practices across thousands of videos without manual effort. 3️⃣ Image-to-video discovery allows teams to upload reference images and find visually similar footage across their entire Frame.io library, unlocking archived content that would otherwise remain undiscovered and accelerating B-roll selection and thematic compilations. 4️⃣ Finally, AI-powered compliance checking analyzes videos against detailed regulatory criteria and flags violations at exact timestamps, writing them directly into Frame.io's comment system alongside creative feedback so compliance reviews no longer create workflow bottlenecks. This isn't about replacing human creativity or judgment—it's about eliminating the tedious manual work that prevents creative teams from focusing on what they do best. Multimodal video understanding transforms content management from a bottleneck into a strategic advantage, enabling organizations to search vast libraries instantly, maintain metadata consistency at scale, ensure compliance automatically, and discover connections between assets that would otherwise remain hidden. 💸 If your organization manages significant video content through Frame.io and wants to explore how this integration can accelerate your workflows, we would love to discuss implementation, configuration, and training tailored to your specific requirements. 😊

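For a sense of what the natural-language search described above looks like against the TwelveLabs API, here is a minimal sketch in Python. It assumes a pre-built index containing the video library; the API key and index ID are placeholders, the method and parameter names (client.search.query, query_text, options) are assumptions based on the public Python SDK and may differ by version, and the Frame.io-side wiring is not shown.

```python
# Minimal sketch: describe the shot you need in plain language and get back
# matching clips with exact timestamps, instead of scrubbing footage manually.
# NOTE: method and parameter names are assumptions based on the public
# TwelveLabs Python SDK and may differ by SDK version; verify against the docs.
from twelvelabs import TwelveLabs

client = TwelveLabs(api_key="YOUR_API_KEY")    # placeholder credential

results = client.search.query(
    index_id="YOUR_INDEX_ID",                  # hypothetical index holding the video library
    query_text="sunset drone shot over a coastline",
    options=["visual", "audio"],               # search across visual and audio modalities
)

# Each hit carries the source video ID and the start/end timestamps of the match.
for clip in results.data:
    print(f"video={clip.video_id}  {clip.start:.1f}s-{clip.end:.1f}s  score={clip.score}")
```
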
  • At AWS re:Invent 2025, TwelveLabs President and CSO Yoon Kim joins leaders from Anthropic, NVIDIA, and Amazon Web Services (AWS) to discuss AI in National Security and Defense. The conversation will explore how agentic AI can support mission-critical decision-making, tackling key questions around trust, autonomy, and human oversight in high-stakes environments. 📍 Wednesday, Dec 3 | 9:00 AM PST | Encore Conference Center #AWSreinvent #VideoAI #TwelveLabs

  • In the 102nd session of #MultimodalWeekly, we will feature three exciting projects built with the TwelveLabs API from the recent Generative AI in Advertising hackathon! ✅ Daren Hua, Baiwen Zheng, Peter Lu, Xilun Hu, and Muhammed S. will present Towa - which predicts how different audience segments will respond to video ads before they’re published: https://lnkd.in/gpDg8ijm ✅ Rob Kleiman, Leonardo Piñeyro, and YU Xirui will present VibePoint - which analyzes viewer reactions and emotional beats in videos to dynamically select and place contextually relevant ads: https://lnkd.in/gJYAmFMD ✅ Krutika Soni and Harshil Patel will present Adonomics - which analyzes video ads to reveal what emotional and creative elements drive performance: https://lnkd.in/g2QjJ-fx Register for the webinar here: https://lnkd.in/gJGtscSH ⬅️

  • We’re going live at AWS re:Invent 2025! 🎥 Catch our co-founder and Head of GTM, Soyoung Lee, on AWS onAir live on Twitch as she dives into how TwelveLabs is transforming the way developers and enterprises understand video using multimodal AI, powered by Amazon Web Services (AWS). In this live conversation, Soyoung will share: 🔍 The real-world challenges of making video searchable and understandable at scale ⚡ How teams are cutting content workflows from 16 hours to just 9 minutes ☁️ How AWS enables rapid iteration, massive scale, and easier enterprise adoption 🚀 Where video AI is headed next: from sports & entertainment to healthcare, education, and beyond. Tune in for an unscripted, practical look at how video AI is being built today: https://lnkd.in/gSiJVbTr 📍 Thursday, Dec 4 | 2:00 PM PST #AWSreinvent #VideoAI #TwelveLabs


  • We’re excited to share that Jae Lee, co-founder and CEO of TwelveLabs, will be taking the stage at AWS re:Invent 2025 as part of Peter DeSantis' Infrastructure keynote on S3 Vectors, to share our journey from early days building in Seoul to enabling organizations worldwide to build with video-native AI. Jae will discuss key inflection points in TwelveLabs’ growth, our collaboration with Amazon Web Services (AWS), and what’s ahead as we continue to make video searchable, understandable, and actionable. 📍 Thursday, Dec 4 | 9:30 AM PST | Venetian, Hall D, Main Stage. Join Jae’s keynote and meet us at booth 1922. #AWSreinvent #VideoAI #TwelveLabs

  • The hype surrounding Multimodal Large Language Models (MLLMs) often obscures a critical reality: when applied to sophisticated video analysis, general-purpose models demonstrate significant limitations. To address this lack of transparency, TwelveLabs developed the open-source Video-to-Text Arena for side-by-side performance comparison. Our recent analysis, detailed in a new guide, reveals fundamental shortcomings in popular MLLMs and highlights the need for purpose-built video language models to achieve enterprise-grade accuracy. ⬇️ Our findings reveal four consistent challenges facing current general-purpose models (such as GPT-4o, Gemini 2.5 Pro, and AWS Nova). 👀 These models frequently exhibit a Temporal Reasoning Deficit, struggling to maintain coherent context across long videos due to aggressive frame sampling. 👀 They also demonstrate significant issues with Spatial Fidelity and Tracking Failure, particularly in accurately reading on-screen text and maintaining object identity. 👀 Compounding this is the Hallucination Problem, where models generate fabricated or over-generalized details to compensate for insufficient visual data compression, and Computational Bottlenecks that force a trade-off between video duration and accuracy. Rigorous qualitative testing within the Arena starkly contrasts the performance of these general models against our specialized Pegasus 1.2 model, particularly in tasks demanding true spatio-temporal comprehension. 🏇 For instance, in complex analyses like the "Five Forces" breakdown and the "EU AI Act" summarization, Pegasus 1.2 consistently delivered conceptually structured, causally coherent explanations with accurate timestamps, while competitors offered fragmented or superficial scene descriptions. 🏇 Furthermore, Pegasus proved superior in maintaining Object and Entity Fidelity and demonstrated specialized capacity for multimodal interpretation, including accurate, segment-aligned transcription of varied dialogue. Pegasus 1.2 was specifically engineered to overcome the limitations exposed in the Arena. By processing videos up to an hour in length with low latency and high accuracy, the model enables a new generation of enterprise applications. These capabilities include automated, high-precision content summarization, timestamp-accurate event identification, and highly reliable content moderation, all crucial for professional video workflows. 💼 We encourage the AI and MLLM community to review our comprehensive guide and explore the Video-to-Text Arena. The data clearly demonstrates that specialization is the definitive path to unlocking the full potential of video understanding AI for real-world business applications. ➕

  • In the 101st session of #MultimodalWeekly, we feature three exciting projects built with the TwelveLabs API from the recent Generative AI in Advertising hackathon! Daniel Gildenbrand will present MomentMatch AI - an intelligent video ad placement platform that analyzes video content to identify optimal moments for advertisement insertion. Using advanced models from TwelveLabs, it provides context-aware, emotion-driven ad recommendations that maximize engagement and ROI: https://momentmatcher.vercel.app/ Danil Meresenschi and Dylan Robinson will present PulsePoint - which helps marketing teams feel the cultural and emotional pulse of their audiences and act on it: https://devpost.com/software/pulsepoint-wz39uo David “DC” Collier and Milo Chao will present Story Insights - which is a testing methodology that finally answers why creative underperforms, with the specificity required to improve it: https://sentivid.dc-ed5.workers.dev/ Register for the webinar here: https://mailchi.mp/twelvelabs/multimodalweekly

  • TwelveLabs reposted this

    • Corey Petrulich

    ✦ Trusted Partner ✦ Customer Obsessed Cloud Expert ✦ Helping Media & Entertainment businesses transform, accelerate, and revolutionize the industry 👈

    🚨 There's still time to register, but only a few spots left! ✅ Work in M&E? ✅ Need to do more with less? ✅ Want to see Agentic AI in action? My team at Wizeline is hosting a session this Thursday, Nov. 13th (that's tomorrow!) with #AWS and TwelveLabs. If you're challenged to "do more with less" in #sports media, join us. We're showing how #Agentic AI automates content from ingest to publish to maximize profit. It's a key strategy session. 👉 Register here: https://lnkd.in/ePqPD6Y8 #AI #Wizeline #LiveSports #MediaEntertainment

  • We're proud to announce our strategic partnership with LIG Nex1 to bring multimodal video understanding AI to public sector and aerospace applications in Korea. LIG Nex1 leads Korea's aerospace and technology sector, with world-class capabilities in mission-critical systems, C4I, surveillance, and aerospace electronics. Through this MOU, we will be collaborating on applying our video understanding technology to enhance intelligent systems, video analysis, and decision-support solutions for the public sector. We'll be exploring joint research and development opportunities that leverage large-scale video intelligence for next-generation public sector and aerospace innovation together. Stay tuned for more exciting updates ahead! #AI #VideoUnderstanding #MultimodalAI #PublicSector #Aerospace #TwelveLabs #VideoAI

  • The creator economy is booming, but the process of finding authentic, impactful brand-creator partnerships remains riddled with manual, subjective inefficiencies. At TwelveLabs, advanced video understanding is the key to solving this critical challenge. We are releasing a tutorial that demonstrates how to build a powerful, full-stack Creator Discovery Platform. This application is a masterclass in leveraging the TwelveLabs Analyze, Embed, and Search APIs to move beyond keyword matching and vanity metrics, delivering measurable, intelligent connections.👇 This tutorial showcases three major features that redefine creator-brand alignment: ✨1 - Creator Brand Match: Intelligent Partnership Discovery✨ Stop relying on follower counts and basic demographics. Our platform uses hybrid embedding search—combining text and visual semantic understanding—to match brands with creators whose content, tone, and visual aesthetic genuinely align. This is powered by the Embed API and allows for similarity search based on conceptual meaning, ensuring you find the hidden gems and most authentic partnerships for optimal ROI.🙅♀️ ✨2 - Brand Mention Detection: Measuring True Brand Impact✨ Visibility is key, but context is everything. Our application uses the Analyze API with sophisticated prompting to automatically detect, track, and analyze every brand mention, logo appearance, and product placement with frame-accurate timing. The result is an interactive visual heatmap that quantifies exposure quality and duration, offering unparalleled transparency into partnership performance—far beyond simple views or likes.💪 ✨3 - Semantic Search: Beyond Keywords to True Understanding✨ Our system leverages the Search API to unlock powerful multi-modal search capabilities. Users can find precise moments using natural language queries (e.g., "skincare routine for sensitive skin") or by uploading an image to find visually similar content. This level of semantic understanding transforms massive video libraries into instantly searchable databases, dramatically improving efficiency for marketing, media, and creative teams.🚀 This platform is more than just a proof-of-concept for influencer marketing; it’s a blueprint for any organization looking to extract measurable, actionable insights from video. Whether you're building content recommendation systems, refining media archives, or optimizing e-commerce product discovery, the core video intelligence capabilities demonstrated here are universally applicable. 🔗 Dive into the full tutorial today (link in comments) to see the technical architecture, code walkthroughs, and end-to-end workflow of how these features integrate with a vector database (like Pinecone) and a modern web framework.

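For readers curious how the "Creator Brand Match" idea above might be wired together, here is a minimal sketch under stated assumptions: it embeds a brand brief as text with the TwelveLabs Embed API and runs a nearest-neighbour query against a Pinecone index that is assumed to already hold clip embeddings for creator videos. The SDK method names and response shape, the embedding model name, and the index and metadata fields are illustrative assumptions rather than the tutorial's actual code; the full tutorial linked in the post covers the real end-to-end workflow.

```python
# Minimal sketch of brand-to-creator matching: embed a text brief into the same
# vector space as the creator clips, then ask Pinecone for the nearest clips.
# NOTE: TwelveLabs method names, the embedding model name, and the response
# shape are assumptions based on public docs; the Pinecone index name and
# metadata fields are hypothetical.
from twelvelabs import TwelveLabs
from pinecone import Pinecone

tl = TwelveLabs(api_key="TWELVELABS_API_KEY")   # placeholder credential
pc = Pinecone(api_key="PINECONE_API_KEY")       # placeholder credential
index = pc.Index("creator-clips")               # hypothetical index of clip embeddings

# Turn the brand brief into a query vector in the shared embedding space.
brief = "minimalist skincare brand, calm tone, natural lighting, routine demos"
res = tl.embed.create(model_name="Marengo-retrieval-2.7", text=brief)  # assumed call and model name
query_vector = res.text_embedding.segments[0].embeddings_float         # assumed response shape

# Nearest-neighbour search; metadata is assumed to map each clip back to its creator.
matches = index.query(vector=query_vector, top_k=5, include_metadata=True)
for m in matches.matches:
    print(m.metadata.get("creator"), round(m.score, 3))
```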
