📌 2024 Year in Review at Twelve Labs

What a year it’s been at Twelve Labs! Our journey through 2024 has been filled with meaningful moments. Here are some highlights we're grateful to share:

🌏 Global Collaboration
Our teams from San Francisco and Seoul came together at the Seoul Summit, sharing insights and building stronger connections. The synergy and exchange of ideas set a fantastic tone for the year, fueling innovation and strengthening our global operations.

🌟 Team Growth
We've been fortunate to welcome 35 new talented individuals to the Twelve Labs family this year! Each person brings unique expertise and fresh perspectives, driving us closer to our goal of transforming how the world interacts with video content.

🏆 Achievements and Innovations
2024 has been a groundbreaking year for Twelve Labs, culminating in the launch of Marengo 2.7. This latest release is a testament to our team's hard work and dedication, pushing the boundaries of video AI to set new industry standards. We're thrilled by the enthusiastic reception from our partners and customers, which validates the impact and value of our advanced models. We also launched API version 1.3 and the Embedding API, broadening the scope and applicability of our technologies, and introduced Indexing 2.0 and Pegasus 1.1, further refining our capabilities to meet diverse enterprise needs. We’re SOC 2 compliant, too!

🔗 Community and Industry Engagement
From participating in major conferences to hosting webinars and workshops, we've made meaningful impacts and contributed to critical dialogues around AI and technology in the media and entertainment sectors. NAB Show in May was a major turning point, followed by amazing moments at #IBC, SVG summits, Amazon Web Services (AWS) re:Invent, NeurIPS, and many others!

As we gear up for another great year, we are immensely grateful for our investors’ continued robust support. And yes, we’re growing our team!
Explore opportunities to be part of a team that’s setting the pace in AI advancements. https://lnkd.in/ggc-mYa8 #VideoAI #2024Recap #YearInReview
Twelve Labs
Software Development
San Francisco, California · 8,644 followers
Help developers build programs that can see, listen, and understand the world as we do.
About us
Helping developers build programs that can see, hear, and understand the world as we do by giving them the world's most powerful video-understanding infrastructure.
- Website
- http://www.twelvelabs.io
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- San Francisco, California
- Type
- Privately Held
- Founded
- 2021
Locations
-
Primary
55 Green St
San Francisco, California 94111, US
Updates
-
As #NeurIPS2024 wraps up, we at Twelve Labs are energized by an incredible week at one of the world's leading AI conferences in beautiful Vancouver. From December 10-12, our booth buzzed with conversations as attendees explored our latest breakthroughs in AI and machine learning. We were inspired by every discussion, question, and collaborative idea shared!

A standout moment was hosting the first-ever workshop on video language models on December 14th. Together with leading researchers and industry innovators, we explored the frontiers of video AI and its transformative potential. The packed session and dynamic discussions exceeded our expectations—thank you to everyone who joined and contributed!

Swipe through to relive some of our favorite moments. From insightful discussions at our booth to the engaging interactions during our workshop and the lively atmosphere at the happy hour, every moment was a step forward in our journey to lead in AI innovation. #VideoAI
-
🎥 ~ NEW WEBINAR ~ Just dropped! Dive into the Twelve Labs Playground with Maninder Saini and Sue Kim as they uncover the secrets of effective VFM prompting.

What's inside:
🔍 Video Search & Generate live demo
✨ Expert prompting techniques
🚀 Hands-on Playground walkthrough

Watch now: https://lnkd.in/gUETvQ2E 📺 #TwelveLabs #VideoAI #AITutorial
Intro to Playground and Prompting | Multimodal Weekly 64
https://www.youtube.com/
-
Twelve Labs reposted this
We are thrilled to announce a significant milestone in our journey: a strategic investment of $30M from global leaders like Databricks, SK Telecom, Snowflake, HubSpot Ventures, and IQT (In-Q-Tel). This funding underscores the transformative impact and value of our advanced video understanding technology in the AI ecosystem, especially for end customers across the media and entertainment space.

🌟 With this investment, we're excited to welcome Yoon Kim as our new President and Chief Strategy Officer. Yoon brings a wealth of experience from SK Telecom and Apple, where he was integral to the development of Siri. His expertise will be invaluable as we continue to innovate and expand our technological capabilities and market reach.

🤝 “Companies like OpenAI and Google are investing heavily in general-purpose multimodal models. But these models aren’t optimized for video. Our differentiation lies in being video-first from the beginning … We believe video is deserving of our sole focus — it’s not an add-on,” said our very own Jae Lee.

🔗 This partnership with Databricks and Snowflake will enhance our offerings, enabling seamless integration with their robust vector databases and opening up new possibilities for enterprise video applications.

🌐 As we embark on this next phase of growth, we're keen to keep pushing the boundaries of what's possible in AI and video understanding. Join us on this exciting journey and see where innovation takes us next! https://lnkd.in/gU7Xvi36
-
~ New Webinar ~ The webinar recording with Siyuan Li, Son Jaewon, and Jinwoo Ahn is up! Watch here: https://lnkd.in/g4QQqz4h 📺 They discussed: - Matching Anything By Segmenting Anything - CNN-based Spatiotemporal Attention for Video Summarization - Compositional Video Understanding Enjoy!
Video summarization, Compositional video understanding, & Tracking everything | Multimodal Weekly 63
https://www.youtube.com/
-
🚀 Exciting News! Twelve Labs is heading to #NeurIPS2024 in Vancouver! 🌟 We’re happy to announce our participation as a Gold Sponsor at NeurIPS, the premier global conference for groundbreaking AI and machine learning research. Don’t miss this opportunity to connect with our team at our booth and explore these innovations firsthand.

📍 Visit Us at Our Booth:
December 10: 12 PM - 8 PM
December 11: 9 AM - 5 PM
December 12: 9 AM - 4 PM

🎉 First-Ever Video Language Models Workshop 🎥
But that’s not all! We’re proud to host the first-ever workshop on video language models at NeurIPS on Dec 14, exploring cutting-edge advancements in the field. Join us in East Meeting Room 13, where we'll dive into the latest innovations in video AI.

Register for the workshop here: https://lnkd.in/gZ5MF235 and learn more about our participation: https://lnkd.in/gZDm4wv7 #VideoAI #VideoLanguageModels #TwelveLabs
-
🔬 We are excited to announce Marengo 2.7: a breakthrough in video understanding powered by our innovative multi-vector embedding architecture!

🎯 What makes Marengo 2.7 special? We've developed a novel approach that represents videos using multiple specialized vectors, each capturing distinct aspects of visual, audio, and semantic content. This multi-vector representation enables unprecedented precision in video search and understanding.

📊 The quantitative evaluation results speak for themselves:
- 76.0% average recall in general text-to-visual search (13% advantage over external SOTA)
- 78.1% average recall in motion search (22.5% improvement over the previous version)
- 77.0% average recall in OCR search (13.4% higher than current SOTA)
- 90.6% average recall in image object search (32.6% improvement over the previous version)
- 56% mean average precision in image logo search (19.2% above current SOTA)
- 57.7% average recall in general audio search across benchmark datasets

🧪 What sets us apart? Our commitment to rigorous evaluation. We've tested Marengo 2.7 against 60+ multimodal retrieval datasets: the most comprehensive evaluation framework in the industry. This extensive testing ensures our model performs consistently across diverse real-world scenarios.

💪 Real-world performance highlights:
- Sophisticated understanding of complex events, like detailed sports plays and sequential visual elements in urban settings
- Impressive logo and object detection, even with low-resolution images and challenging viewing angles
- Advanced audio comprehension spanning speech content and instrument recognition

🔍 Areas we're actively improving: While Marengo 2.7 excels in many areas, we're transparent about the current challenges we're addressing, including complex scene understanding with multiple simultaneous events, highly compositional queries, and performance with heavily accented speech or text in unusual orientations. These represent exciting opportunities for our next innovations in video understanding technology.

👓 Read the complete technical blog post here: https://lnkd.in/gByqiCPY
🎮 Experience the future of video understanding via our Playground: https://lnkd.in/g3cpYr77
👩‍💻 Integrate our Search API into your applications: https://lnkd.in/gSaZUxhz
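To make the multi-vector idea concrete, here is a toy sketch, not Twelve Labs' actual implementation: it assumes each video is stored as a few aspect-specific embedding vectors (visual, audio, semantic) and that a text query is embedded into the same space, then ranks videos by a weighted combination of per-aspect cosine similarities. All vector values and weights below are made up for illustration.

```python
# Toy multi-vector retrieval sketch (illustrative only, not Marengo's design).
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def score(query_vec, video_vecs, weights):
    # Combine per-aspect similarities with a weighted sum; a real system
    # might instead take the max or learn the combination.
    return sum(weights[aspect] * cosine(query_vec, vec)
               for aspect, vec in video_vecs.items())

# Toy 3-d embeddings for two hypothetical clips.
videos = {
    "clip_a": {"visual": [0.9, 0.1, 0.0], "audio": [0.2, 0.8, 0.1], "semantic": [0.7, 0.3, 0.1]},
    "clip_b": {"visual": [0.1, 0.9, 0.2], "audio": [0.1, 0.2, 0.9], "semantic": [0.2, 0.1, 0.9]},
}
query = [1.0, 0.0, 0.1]  # pretend embedding of a text query
weights = {"visual": 0.5, "audio": 0.2, "semantic": 0.3}

ranked = sorted(videos, key=lambda v: score(query, videos[v], weights), reverse=True)
print(ranked)  # clip_a ranks first for this query
```

A production system would of course use learned, high-dimensional embeddings and an approximate nearest-neighbor index rather than brute-force scoring, but the core idea of keeping several specialized vectors per video and fusing their similarities is the same.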
-
🏆 Exciting news! Our co-founder and CTO, Aiden L., has been named to the Forbes 30 Under 30 list in AI for North America, 2025! 🚀

This honor highlights not just individual brilliance, but also the groundbreaking work and vision Aiden and our entire team are committed to in the AI domain. We're pioneering video understanding. Imagine having a "Ctrl+F" for video content: searching through archives as easily as you'd search a document. That's the powerful technology Aiden and our team are building, transforming how developers and enterprises interact with video at scale.

This recognition is more than an award: it's a testament to our commitment to pushing the boundaries of AI and video technology. And trust us, we're just getting started. 💪

A huge congratulations to Aiden and a heartfelt thank you to the entire Twelve Labs team! #ForbesUnder30 #VideoAI https://lnkd.in/edJd9Gvu
-
Excited to announce that Twelve Labs will be at Amazon Web Services (AWS) re:Invent in Las Vegas, from December 2-6! Join us for a week of innovation as we dive into the latest in generative AI, witness new product launches, and learn from industry leaders. Don’t miss the chance to meet our team, see live demos, and discuss how our technology is shaping the future of AI in media. 🚀 📅 Book a meeting with us: https://lnkd.in/gjcu9n_q #AWSreInvent #VideoAI