Twelve Labs makes searching inside videos simple and powerful, propelled by $5M seed round

Illustration of a magnifying glass over several video windows.
Image Credits: Bryce Durbin / TechCrunch

With video making up more and more of the media we interact with and create daily, there’s also a growing need to track and index that content. What meeting or seminar was it where I asked that question? Which lecture had the part about tax policies? Twelve Labs has a machine learning solution for summarizing and searching video that could make quicker and easier work for both consumers and creators.

The capability the startup provides is being able to put in a complex yet vague query like “the office party where Courtney sang the national anthem” and instantly get not just the video but the moment in the video where it happens. “Ctrl-F for video” is how they put it. (That’s command-F for our friends on Macs.)

You might think “but wait, I can search for videos right now!” And yes, on YouTube or in a university archive you can often find the video you want. But what happens then? You scrub through the video hunting for the moment you need, or scroll through the transcript trying to recall exactly how something was phrased.

This is because when you search video, you’re really searching for tags, descriptions and other basic elements that can be easily added at scale. There’s some algorithmic magic to surfacing the video you want, but the system doesn’t really understand the video itself.

“The industry has over-simplified the problem, thinking tags can solve search,” said Twelve Labs founder and CEO Jae Lee. And many solutions now do rely on, for example, recognizing that some frames of the video contain cats, so the system adds the tag #cats. “But video isn’t just a series of images — it’s complex data. We knew we needed to build a new neural network that can take in both visuals and audio and formulate context around that; it’s called multimodal understanding.”

That’s a hot phrase in AI right now, because we seem to be reaching limits in how well an AI system can understand the world when it’s narrowly focused on one “sense,” like audio or a still image. For example, Facebook recently found that it needed an AI that paid attention to both the imagery and text in a post simultaneously to detect misinformation and hate speech.

With video, your understanding will be limited if you’re looking at individual frames and trying to draw associations with a timestamped transcript. When people watch a video, they naturally fuse the video and audio information into personas, actions, intentions, cause and effect, interactions and other more sophisticated concepts.

Twelve Labs claims to have built something along these lines with its video understanding system. Lee explained that the AI was trained to approach video from a multimodal perspective, associating audio and video from the start and creating what they say is a much richer understanding of it.
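To make the idea concrete, here is a toy sketch of what multimodal fusion looks like in principle: embed the visual frames and the audio track separately, pool each over time, then fuse them into one joint vector that text queries can be matched against. The shapes, the fusion-by-projection step and the random data are all simplified illustrations, not Twelve Labs’ actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-frame visual embeddings and per-window audio embeddings.
visual = rng.normal(size=(120, 512))   # 120 frames, 512-dim each
audio = rng.normal(size=(40, 256))     # 40 audio windows, 256-dim each

# Pool each modality over time, then fuse by concatenation + linear projection.
pooled = np.concatenate([visual.mean(axis=0), audio.mean(axis=0)])  # (768,)
projection = rng.normal(size=(768, 384))
video_embedding = pooled @ projection   # one joint vector for the whole video

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A text query embedded into the same space is then ranked by similarity.
query_embedding = rng.normal(size=(384,))
score = cosine(video_embedding, query_embedding)
```

The key point is that search happens against a representation built from both modalities at once, not against tags attached after the fact.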

Animation showing a sample query of a video database. Image Credits: Twelve Labs

“We include more complex information, like relationships between items in the frame, connecting the past and present, and this makes it possible to do complex queries,” he said. “Just for example, if there’s a YouTuber, and they search ‘Mr Beast challenges Joey Chestnut to eat a burger,’ it will understand the concept of challenging someone, and of talking about a challenge.”

Sure, Mr Beast — a professional — may have put that particular datum in the title or tags, but what if it’s just part of a regular vlog or a series of challenges? What if Mr Beast was tired that day and didn’t fill in all the metadata correctly? What if there are a dozen burger challenges, or a thousand, and the video search can’t tell the difference between Joey Chestnut and Josie Acorn? As long as you’re leaning on a superficial understanding of the content, there are plenty of ways that it can fail you. If you’re a corporation looking to make 10,000 videos searchable, you want something better — and way less labor intensive — than what’s out there.

Twelve Labs built its tool into a simple API that can be called to index a video (or a thousand) and generate a rich summary and connect it to a chosen graph. So if you record all-hands meetings or skill-share seminars or weekly brainstorming sessions, those become searchable not just by time or attendees, but by who talks, when, about what, and including other actions like drawing a diagram or showing slides.
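A developer-side integration along these lines might look like the sketch below: one call to index a video, another to query it in natural language. The endpoint, field names and auth scheme here are hypothetical placeholders for illustration, not Twelve Labs’ actual API surface.

```python
import json
import urllib.request

# Hypothetical base URL and credential; stand-ins for a real video-search API.
API_BASE = "https://api.example.com/v1"
API_KEY = "demo-key"

def _post(path: str, body: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request (constructed, not yet sent)."""
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def index_video(video_url: str) -> urllib.request.Request:
    """Submit a video for multimodal indexing."""
    return _post("/index", {"video_url": video_url})

def search(index_id: str, query: str) -> urllib.request.Request:
    """Run a natural-language query against an indexed corpus."""
    return _post("/search", {"index_id": index_id, "query": query})

# e.g. index a recorded all-hands, then search it by what actually happens.
req = search("all-hands", "when the CEO presents the quarterly roadmap")
```

The response in such a design would carry not just matching videos but timestamps for the matching moments, which is the “Ctrl-F for video” behavior described above.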

“We’ve seen companies with lots of organizational data interested in finding out when the CEO is talking about or presenting a certain concept,” Lee said. “We’ve been working very deliberately with folks to gather data points and interesting use cases — we’re seeing lots of them.”

Simulation of a Twelve Labs search within videos.
Image Credits: Twelve Labs

A side effect of processing a video for search, and thereby understanding what happens in it, is the ability to generate summaries and captions. This is another area ripe for improvement: auto-generated captions vary widely in quality, as does the ability to search them, attach them to people and situations in the video, and support other more complex capabilities. And summarization is a field taking off everywhere, not just because no one has enough time to watch everything, but because a high-level summary is valuable for everything from accessibility to archival purposes.

Importantly, the API can be fine-tuned to better work with the corpus it’s being unleashed on. For instance, if there’s a lot of jargon or a few unfamiliar situations, it can be trained up to work just as well with those as it would with more commonplace situations like boardrooms and standard business talk (whatever that is). And that’s before you start getting into things like college lectures, security footage, cooking…

Mockup of API for fine tuning the model to work better with salad-related content. Image Credits: Twelve Labs
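Fine-tuning on a domain corpus, as described above, amounts to handing the model labeled examples of the jargon-heavy footage it will be searching. The payload below is purely illustrative: the field names, file names and labels are invented for this sketch and do not reflect the real Twelve Labs API.

```python
import json

# Hypothetical fine-tuning job: supply domain clips with descriptive labels so
# the model adapts to specialized vocabulary (lectures, security footage, cooking).
fine_tune_job = {
    "index_id": "campus-lectures",
    "examples": [
        {"clip": "lecture_031.mp4", "label": "derivation of the heat equation"},
        {"clip": "lecture_032.mp4", "label": "discussion of capital gains tax policy"},
    ],
    "epochs": 3,
}

# Serialized for submission to a (hypothetical) fine-tuning endpoint.
payload = json.dumps(fine_tune_job)
```

The labels serve the same role as the “salad-related content” in the mockup above: grounding unfamiliar terms in concrete clips.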

On that note, the company is very much a proponent of the “big network” style of machine learning. Making an AI model that can understand such complex data and produce such a variety of results means the model is large and computationally expensive to train and deploy. But that’s what’s needed for this problem, Lee said.

“We’re a big believer in large neural networks, but we don’t just increase parameter size,” he said. “It still has multi-billion parameters, but we’ve done a lot of technical kung fu to make it efficient. We do things like not look at every frame — a light algorithm identifies important frames, things like that. There’s still a lot of science yet to happen in language understanding and the multimodal space. But the purpose of a large network is to learn the statistical representation of the data that’s been fed into it, and that concept we’re a huge believer in.”
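Lee’s mention of a light algorithm that picks important frames rather than processing every one can be illustrated with a common lightweight heuristic: keep a frame only when it differs enough from the last kept frame. This is a generic keyframe-selection sketch under that assumption, not Twelve Labs’ actual method.

```python
import numpy as np

def select_keyframes(frames, threshold=0.1):
    """Return indices of frames that differ enough from the last kept frame.

    `frames` is an iterable of equally shaped arrays (e.g. grayscale images
    scaled to [0, 1]); near-duplicate frames are skipped.
    """
    kept = []
    last = None
    for i, frame in enumerate(frames):
        if last is None or np.mean(np.abs(frame - last)) > threshold:
            kept.append(i)
            last = frame
    return kept

# Synthetic clip: ten near-identical frames, then an abrupt scene change.
rng = np.random.default_rng(1)
still = rng.random((8, 8))
clip = [still + rng.normal(scale=0.001, size=(8, 8)) for _ in range(10)]
clip.append(1.0 - still)  # scene change
print(select_keyframes(clip))  # keeps the first frame and the cut: [0, 10]
```

A filter like this lets the expensive multimodal network run on a handful of representative frames instead of every frame of the video.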

Though Twelve Labs hopes to help index much of the video out there, you as a user probably won’t be aware of it; aside from a developer playground, there’s no Twelve Labs web platform that lets you search stuff. The API is meant to be integrated into existing tech stacks so that wherever you normally would search through videos, you still will — but the results will be way better. (They’ve shown this in benchmarks where the API smokes other models.)

Although it’s fairly certain that companies like Google, Netflix and Amazon are working on exactly this sort of video understanding model, Lee didn’t seem bothered. “If history is any indicator, at large companies like YouTube and TikTok the search is very specific to their platform and very core to their business,” he said. “We’re not worried about them ripping out their core tech and serving it to potential customers. Most of our beta partners have tried these big companies’ so-called solutions and then came to us.”

The company has raised a $5 million seed round to take it from beta to market; Index Ventures led the round, with Radical Ventures, Expa and Techstars Seattle participating, plus angels including Stanford’s AI leader Fei-Fei Li, Scale AI CEO Alex Wang, Patreon CEO Jack Conte and Oren Etzioni of AI2.

The plan from here is to build out the features that have proven most useful to beta partners, then debut as an open service in the near future.
