Runway’s Gen-2 shows the limitations of today’s text-to-video tech

Runway Gen-2 (Image Credits: Runway)

In a recent panel interview with Collider, Joe Russo, the director of tentpole Marvel films like “Avengers: Endgame,” predicted that within two years, AI will be able to create a fully fledged movie. I’d say that’s a rather optimistic timeline. But we’re getting closer.

This week, Runway, the Google-backed AI startup that helped develop the AI image generator Stable Diffusion, released Gen-2, a model that generates videos from text prompts or an existing image. (Gen-2 was previously available only through a limited waitlist.) The follow-up to Runway’s Gen-1 model, which launched in February, Gen-2 is one of the first commercially available text-to-video models.

“Commercially available” is an important distinction. Text-to-video, the logical next frontier in generative AI after images and text, is becoming a bigger area of focus, particularly among tech giants, several of which have demoed text-to-video models over the past year. But those models remain firmly in the research stage, inaccessible to all but a select few data scientists and engineers.

Of course, first isn’t necessarily better.

Out of personal curiosity and service to you, dear readers, I ran a few prompts through Gen-2 to get a sense of what the model can — and can’t — accomplish. (Runway’s currently providing around 100 seconds of free video generation.) There wasn’t much of a method to my madness, but I tried to capture a range of angles, genres and styles that a director, professional or armchair, might like to see on the silver screen — or a laptop, as the case may be.

One limitation of Gen-2 that became immediately apparent is the frame rate of the four-second videos the model generates. It’s noticeably low, to the point where the clips are nearly slideshow-like in places.

Runway Gen-2 (Image Credits: Runway)

What’s unclear is whether that’s a problem with the tech or an attempt by Runway to save on compute costs. In any case, it makes Gen-2 a rather unattractive proposition off the bat for editors hoping to avoid post-production work.

Beyond the frame rate issue, I’ve found that Gen-2-generated clips tend to share a certain graininess or fuzziness, as if they’ve had some sort of old-timey Instagram filter applied. Other artifacts appear in places as well, like pixelation around objects when the “camera” (for lack of a better word) circles them or quickly zooms toward them.

As with many generative models, Gen-2 isn’t particularly consistent with respect to physics or anatomy, either. Like something conjured up by a surrealist, people’s arms and legs in Gen-2-produced videos meld together and come apart again, while objects melt into the floor and disappear, their reflections warped and distorted. And — depending on the prompt — faces can appear doll-like, with glossy, emotionless eyes and pasty skin that evokes cheap plastic.

Runway Gen-2 (Image Credits: Runway)

To pile it on, there’s the content issue. Gen-2 seems to have a tough time understanding nuance, clinging to particular descriptors in prompts while ignoring others, seemingly at random.

Runway Gen-2 (Image Credits: Runway)

One of the prompts I tried — “A video of an underwater utopia, shot on an old camera, in the style of a ‘found footage’ film” — brought about no such utopia, only what looked like a first-person scuba dive through an anonymous coral reef. Gen-2 struggled with my other prompts, too, failing to generate a zoom-in shot for a prompt specifically calling for a “slow zoom” and not quite nailing the look of your average astronaut.

Could the issues lie with Gen-2’s training data set? Perhaps.

Gen-2, like Stable Diffusion, is a diffusion model, meaning it learns how to gradually subtract noise from a starting image made entirely of noise to move it closer, step by step, to the prompt. Diffusion models learn through training on millions to billions of examples; in an academic paper detailing Gen-2’s architecture, Runway says the model was trained on an internal data set of 240 million images and 6.4 million video clips.
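To make that description concrete, here’s a minimal, purely illustrative sketch of a diffusion-style sampling loop. The `predict_noise` stand-in, the toy update rule and the dimensions are my own assumptions for demonstration, not Runway’s architecture or training setup; the point is the step-by-step noise subtraction described above.

```python
# Illustrative sketch of a diffusion model's sampling loop (NOT Runway's code).
# A trained network would predict the noise in an image; here a stand-in
# returns zeros so the loop runs end to end.
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(noisy_image, step, prompt_embedding):
    """Hypothetical stand-in for a trained denoising network, conditioned on
    the text prompt. A real model would be a large neural network."""
    return np.zeros_like(noisy_image)

def sample(prompt_embedding, shape=(64, 64, 3), steps=50):
    # Start from an image made entirely of Gaussian noise...
    x = rng.standard_normal(shape)
    # ...then repeatedly subtract the predicted noise, stepping the image
    # closer, iteration by iteration, to something matching the prompt.
    for t in reversed(range(steps)):
        predicted = predict_noise(x, t, prompt_embedding)
        x = x - predicted / steps                      # simplified update rule
        if t > 0:
            x = x + 0.01 * rng.standard_normal(shape)  # small re-noising term
    return x

frame = sample(prompt_embedding=np.zeros(128))
print(frame.shape)  # (64, 64, 3)
```

A production model swaps the stand-in for a large network trained on those millions of images and clips, and uses a carefully derived noise schedule rather than the crude constants above.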

Diversity in the examples is key. If the data set doesn’t contain much footage of, say, animation, the model — lacking points of reference — won’t be able to generate reasonable-quality animations. (Of course, animation being a broad field, even if the data set did have clips of anime or hand-drawn animation, the model wouldn’t necessarily generalize well to all types of animation.)

Runway Gen-2 (Image Credits: Runway)

On the plus side, Gen-2 passes a surface-level bias test. While generative AI models like DALL-E 2 have been found to reinforce societal biases, generating images of positions of authority — like “CEO” or “director” — that depict mostly white men, Gen-2 was the tiniest bit more diverse in the content it generated — at least in my testing.

Runway Gen-2 (Image Credits: Runway)

Fed the prompt “A video of a CEO walking into a conference room,” Gen-2 generated a video of men and women (albeit more men than women) seated around something like a conference table. The output for the prompt “A video of a doctor working in an office,” meanwhile, depicts a woman doctor, vaguely Asian in appearance, seated behind a desk.

Results for any prompt containing the word “nurse” were less promising, though, consistently showing young white women. Ditto for the phrase “a person waiting tables.” Evidently, there’s work to be done.

The takeaway from all this, for me, is that Gen-2 is more a novelty or toy than a genuinely useful tool in any video workflow. Could the outputs be edited into something more coherent? Perhaps. But depending on the video, that editing could take more work than shooting the footage in the first place.

That’s not to be too dismissive of the tech. It’s impressive what Runway’s done here, effectively beating the tech giants to the text-to-video punch. And I’m sure some users will find uses for Gen-2 that don’t require photorealism — or a lot of customizability. (Runway CEO Cristóbal Valenzuela recently told Bloomberg that he sees Gen-2 as a way to offer artists and designers a tool that can help them with their creative processes.)

Runway Gen-2 (Image Credits: Runway)

I found a few myself. Gen-2 does understand a range of styles, like anime and claymation, which lend themselves to the lower frame rate. With a little fiddling and editing work, it wouldn’t be impossible to string together a few clips into a narrative piece.

Lest the potential for deepfakes concern you, Runway says it’s using a combination of AI and human moderation to prevent users from generating videos that include pornography or violent content or that violate copyrights. I can confirm there’s a content filter — an overzealous one, in point of fact. But those methods aren’t foolproof, of course, so we’ll have to see how well they work in practice.

Runway Gen-2 (Image Credits: Runway)

But at least for now, filmmakers, animators, CGI artists and ethicists can rest easy. It’ll be at least a couple of iterations down the line before Runway’s tech comes close to generating film-quality footage — assuming it ever gets there.
