
What two years of AI development can tell us about Sora

Remember when AI art generators became widely available in 2022 and suddenly the internet was full of uncanny pictures that were very cool but didn’t look quite right on close inspection? Get ready for that to happen again — but this time for video.

Last week, OpenAI released Sora, a generative AI model that produces videos based on a simple prompt. It’s not available to the public yet, but CEO Sam Altman showed off its capabilities by taking requests on X, formerly known as Twitter. Users replied with short prompts: “a monkey playing chess in a park,” or “a bicycle race on ocean with different animals as athletes.” It’s uncanny, mesmerizing, weird, beautiful — and prompting the usual cycle of commentary.

Some people are making strong claims about Sora’s negative effects, expecting a “wave of disinformation.” But while I (and experts) think future powerful AI systems pose really serious risks, predictions that any one specific model will bring that disinformation wave upon us have not held up so far.

Others are pointing to Sora’s many flaws as evidence of fundamental limitations in the technology — a mistake people made with image generation models, and one I suspect they will make again. As my colleague A.W. Ohlheiser pointed out, “just as DALL-E and ChatGPT improved over time, so could Sora.”

The predictions, both bullish and bearish, may yet pan out — but the conversation around Sora and generative AI would be more productive if people on all sides took greater account of all the ways we’ve been proven wrong over the last couple of years.

What DALL-E 2 and Midjourney can teach us about Sora

Two years ago, OpenAI announced DALL-E 2, a model that could produce still images from a text prompt. The high-resolution fantastical images it produced were quickly all over social media, as were the takes on what to think of it: Real art? Fake art? A threat to artists? A tool for artists? A disinformation machine? Two years later, it’s worth a bit of a retrospective if we want our takes on Sora to age better.

DALL-E 2’s release came only a few months ahead of Midjourney and Stable Diffusion, two popular competitors. Each had its strengths and weaknesses: DALL-E 2 produced more photorealistic pictures and adhered a little better to prompts, while Midjourney was “artsier.” Collectively, they made AI art available to millions at the click of a button.

Much of generative AI’s societal impact in that period didn’t come directly from DALL-E 2, but from the wave of image models it kicked off. Likewise, we might expect that the important question about Sora isn’t just what Sora can do, but what its imitators and competitors will be able to do.

Many people thought that DALL-E and its competitors heralded a flood of deepfake propaganda and scams that would threaten our democracy. While we may well see an effect like that someday, those predictions now seem to have been premature. The effect of deepfakes on our democracy “always seems just around the corner,” analyst Peter Carlyon wrote in December, noting that most propaganda continues to be of a more boring kind — remarks taken out of context, for example, or images from one conflict shared and mislabeled as being from another.

Presumably this will change at some point, but we should have some humility about claims that Sora will be that change. It doesn’t take deepfakes to lie to people, and they remain an expensive way to do it. (AI generations are relatively cheap, but if you’re going for something specific and convincing, that gets much pricier. A tsunami of deepfakes implies a scale that spammers mostly can’t afford at the moment.)

But the point where it seems most crucial to me to remember the last two years of AI history is when I read criticisms of Sora’s videos for being clumsy, stilted, inhuman, or obviously flawed. It’s true, they are. Sora “does not accurately model the physics of many basic interactions,” OpenAI’s research release acknowledges, adding that it has trouble with cause and effect, mixing up left and right, and following a trajectory.

Nearly identical criticisms were, of course, made of DALL-E 2 and Midjourney — at least at first. Early coverage of DALL-E 2 highlighted its incompetencies, from creating horrifying monstrosities whenever you asked for multiple characters in a scene to giving people claws instead of hands. AI experts argued that the inability of AI to handle “compositionality” — or instructions about how to compose the elements of a scene — reflected a shortcoming fundamental to the technology.

In practice, though, models got better at fulfilling highly specific prompts and users got better at prompting, and as a result it’s possible today to create images with complex and detailed scenes. Nearly all of the entertaining deficiencies were corrected in DALL-E 3, released last year, and in the latest updates to Midjourney. Today’s image generators can do hands and crowd scenes fine.

In the time between DALL-E 2 and Sora, AI image generation has gone from a party trick to a massive industry. Many of the things DALL-E 2 couldn’t do, DALL-E 3 could. And if DALL-E 3 couldn’t, a competitor often could. That’s a perspective that’s crucial to keep in mind when you read prognostications about Sora — you’re likely looking at early steps into a major new capability, one that could be used for good or malicious purposes, and while it’s possible to oversell it, it’s also very easy to sell it short.

Instead of overcommitting to any particular perspective on what Sora and its successors will or won’t be able to do, it’s worth admitting some uncertainty about where this is headed. It’s much easier to say, “This technology will keep improving by leaps and bounds” than to guess the specifics of how that will play out.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
