An AI-generated image from the Amazon series House of David.
Art: Courtesy of Prime
One recent evening on the Eastside of Los Angeles, a couple hundred people gathered in a cavernous old soundstage to celebrate the arrival of a new AI studio — one of the nearly 100 now operating in Hollywood. Called Asteria Film Co., it was founded by Bryn Mooser, a serial entrepreneur, and his girlfriend, the actress and writer Natasha Lyonne. Mooser, 45, is tall with a sculpted jaw and salt-and-pepper beard; he fits the role of a rumpled yet elegant pitchman so well that Cartier shot an advertorial of him titled “The Entrepreneur.” He led me through the studio: a 25,000-square-foot collection of soundstages and a workshop built in 1916 by the producer Mack Sennett, who pioneered new uses of scenic backdrops in early filmmaking. In the lobby, he paused at a glass-encased architectural model of the studio as it looked when it was first built. “There was no roof because they were just using sunlight,” he told me. “It was before electricity.” As Mooser saw it, Asteria fit into a lineage of creatives who had ushered in new eras of filmmaking. He reminded me that Walt Disney was a technologist. So was George Lucas. “The story of Hollywood is the story of technology,” he said.
We met Lyonne in a corner of the studio where uncanny clips generated by Asteria’s AI model were playing on a dozen old-school televisions. The footage was unnerving. Robots with smooth, blank faces typed blindly in an old-fashioned office. Disembodied mannequin heads drifted in space. Lyonne was drinking a sugar-free Red Bull and wearing a structured black velvet jacket with a plunging neckline. She was fresh off a plane from Seattle, where she had done a talk with the science-fiction author Ted Chiang, who has written at length about why AI will never create great art. Over the past few years in Hollywood, it had become clear to Lyonne that many people were not being forthright about how often they were using the technology. “If I’m directing an episode, I like to get really into line items and specifics,” she said. “And you find out that there’s a lot of situations where they’re calling it machine learning or something but, really, it’s AI.” She had begun to do her own research. She read the Oxford scholar Brian Christian and the philosopher Nick Bostrom, who argues that AI presents a significant threat to humanity’s long-term existence. Still, she had come to feel it was too late to “put the genie back in that bottle.” “It’s better to get your hands dirty than pretend it’s not happening,” she said.
This seemed to be the general opinion of those milling around at the party. Many of Lyonne’s frequent collaborators and close friends were in attendance: the director Janicza Bravo, the actresses Clea DuVall and Tessa Thompson. As I wandered past clusters of studio executives in suits and filmmakers in baseball caps and jeans, I heard the word inevitable five times. A creative director in a cowboy hat said, “It’s happening whether we like it or not.” Waiting on line by the bar, I asked the comedian Tim Heidecker for his thoughts. “My thoughts are too complicated for a conversation at a party,” he replied. I found a CAA agent sitting in a mostly empty row of vintage wooden theater seats. At CAA, she said, the subject of AI comes up every day. Back in 2023, the agency had launched a project, called the CAA Vault, to capture the likeness of all its clients so it could own and control the rights to their images. “Everyone’s using it,” the agent said. “They just don’t talk about it.” Nearby, a former Netflix exec and current independent producer sipped red wine. She hadn’t told some of her colleagues she was attending the party that night. A director she works with recently told her if anyone at her production company used AI for any part of the process, he would leave. “If they hear me being curious,” she said, “they’re going to be so mad.”
Hollywood has lately been in what media observers have described as an “existential crisis,” an implosion, and a “death spiral.” Studios are making fewer films, and fewer people are watching the ones that get made. Layoffs are mounting. Depending on whom you ask, AI is either hastening the end or offering a lifeline. It has brought a proliferation of tools that are capable, with varying degrees of success, of creating every component of a film: the script, the footage, the soundtrack, the actors. Among the many concerns rattling nerves around town, one significant issue is that nearly every AI model capable of generating video has been trained on vast troves of copyrighted material scraped from the internet without permission or compensation. When the writers and actors unions ended their strikes in 2023, the new contracts included guardrails on AI — ensuring, for the moment anyway, that their members retained some measure of control over how the studios could use the technology. The new contracts barred studios from using scripts written by AI and from digitally cloning actors without explicit consent and compensation. But they left the door open for certain uses, particularly with generative video: Studios can use models to create synthetic performers and other sorts of footage — including visual effects and entire scenes — so long as a human is nominally in charge and the unions are given a chance to bargain over terms. Even with that latitude, the industry is hemmed in by a growing thicket of legal uncertainty, especially around how these systems were trained in the first place. Over 35 copyright-related lawsuits have been filed against AI companies so far, the outcomes of which could determine whether generative AI has a future in Hollywood at all. As one producer put it to me, “The biggest fear in all of Hollywood is that you’re going to make a blockbuster, and guess what? You’re going to sit in litigation for the next 30 years.”
Despite this fear, every major studio is forging ahead (though most are not issuing a press release describing their efforts). Along with the generative models developed by tech giants — Google, OpenAI — a number of artist-run studios created specifically with filmmakers in mind are making headway in the industry. That includes Asteria, which launched its model this year and came with an attractive selling point. Lyonne and Mooser say its model was trained only on licensed material; they were touting it as the first “ethical” studio. Runway, a technology and media start-up, was earlier to the scene, and its models are now among the most widely used in Hollywood; the company was also the first to strike a public partnership with a movie studio. Many of these studios are developing sophisticated methods of working with generative video — the kind that, when given a prompt, can spit out an image or a video and has the potential to fundamentally change how movies are made.
This spring, Darren Aronofsky announced a partnership with Google’s DeepMind. James Cameron teamed up with Stability AI, one of the tech companies making inroads in Hollywood. “In the New Year, many people in the studios woke up and said, ‘Okay, in 2025 we need to make a difference,’” said Prem Akkaraju, the CEO of Stability AI. “Because production is down, profitability is down, attendance at theaters is down. It’s harder to make a movie today than it ever has been.” Cameron put it more bluntly on a tech podcast recently: If audiences want more blockbusters, he said, “we’ve got to figure out how to cut the cost of that in half.” Erik Weaver, a producer and technologist who regularly advises studios on how to use AI, had observed a “radical shift” over the past few months. A few weeks ago, in a meeting with a major studio, an executive told him that “almost every single one of our productions is coming to us and saying, ‘How can I use AI to get back on budget?’” Weaver added, “These filmmakers need $30 million to make their movie and they have about $15 million. They only have so much money, and they’re getting desperate.”
Asteria co-founders Bryn Mooser and Natasha Lyonne.
Photo: Myles Hendrik
Generative AI was still a fringe idea in 2018, little discussed outside academic and tech circles, when Cristóbal Valenzuela co-founded Runway. The 28-year-old self-taught programmer and artist had just graduated from an experimental art-and-tech program at NYU. For his thesis, he built a tool set that allowed artists to use AI to generate images, text, and video. By writing a sentence into an app, he could produce a rudimentary movie lasting a few seconds. The outputs weren’t any good and had no obvious commercial application, but the concept resonated with the artists and designers he knew. It wasn’t long before the company began to gain a foothold in Hollywood — not with studio heads but with people working below the line. Visual-effects artists were among the first to start experimenting with its tools. Rendering images using traditional VFX techniques is notoriously slow, so the appeal of fast, generative shortcuts was immediate.
By mid-2023, Runway’s model had improved significantly just as generative AI exploded into the public consciousness. Among other things, Runway could simulate realistic explosions and generate fantastical backdrops in minutes. That fall, Valenzuela flew to Los Angeles for a meeting with Michael Burns, the vice-chairman of Lionsgate, the company behind The Hunger Games, The Twilight Saga, and John Wick. Burns had taken meetings with some of the big tech companies trying to pitch their products to Hollywood, but they didn’t understand filmmaking. “This is like a foreign land for them,” Burns said. Valenzuela, though, was creating products specifically designed for filmmakers. “The consumer wants to see magnificent landscapes and special effects,” Burns recalled thinking. With Runway, he believed filmmakers could “make movies and television shows we’d otherwise never make. We can’t make it for $100 million, but we’d make it for $50 million because of AI.” He told Valenzuela, “Maybe we should buy you.” Valenzuela replied, “Oh, no, no, we’ll buy you.” This spring, as he had predicted, Runway’s valuation passed Lionsgate’s market capitalization, hitting $3 billion.
Lionsgate planned to turn over a selection from its archive of films so Runway could train a customized model on the studio’s proprietary catalogue. When I visited Burns in his corner office overlooking a residential neighborhood in Santa Monica, he was still marveling at the pace of transformation. “It’s insanity how fast it’s changing,” he said. “I’ll talk to Cristóbal and say, ‘Here’s what my guys really need next,’ and he goes, ‘Yeah, we’ll have that next week.’” That morning, Burns had asked Valenzuela to generate a trailer for a film they hadn’t even shot. The plan was to take it to a film festival — but first they would presell the film off AI-generated scenes reverse engineered from the script. It might cost only $10 million, but it would look closer to a $100 million movie. “We’re going to blow stuff up so it looks bigger and more cinematic,” he said.
The collaboration with Runway was open-ended and flexible. “We’re banging around the art of the possible,” Burns said. “Let’s try some stuff, see what sticks.” He listed some ideas they were currently considering. With a library as large as Lionsgate’s, they could use Runway to repackage and resell what the studio already owned, adjusting tone, format, and rating to generate a softer cut for a younger audience or convert a live-action film into a cartoon. He offered an example of one of Lionsgate’s signature action franchises: “Now we can say, ‘Do it in anime, make it PG-13.’ Three hours later, I’ll have the movie.” He would have to pay the actors and all the other rights participants, he added — “but I can do that, and now I can resell it.”
He gave another example. “We have this movie we’re trying to decide whether to green-light,” he said. “There’s a ten-second shot — 10,000 soldiers on a hillside with a bunch of horses in a snowstorm.” To shoot it in the Himalayas would take three days and cost millions. Using Runway, the shot could be created for $10,000. He wasn’t sure the film would be made at all, but the math was working.
There are still only a handful of publicly acknowledged uses of Runway — or any other generative-AI model — in mainstream film and television. In Everything Everywhere All at Once, the genre-bending Oscar winner, a VFX artist used Runway’s green-screen tool to efficiently remove background elements from images. The company also contributed to a score of shots in Amazon’s biblical epic House of David, including a 90-second fantasy sequence that opens the sixth episode; the executive producer Jon Erwin told me it otherwise wouldn’t have been possible on their timeline or budget. When I asked Valenzuela which other films in theaters or on streaming had used Runway, he hesitated. “I don’t think I can say,” he said. Privately, he was in substantial conversation — or active collaboration — with every major studio in Hollywood.
One studio executive told me most studios were too afraid of the unions to go public. Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator, said he had been struck by how cautiously studios have moved in the wake of the strikes. One provision the union secured was a standing agreement to meet with each major studio every six months for an update on how it was using generative AI. Crabtree-Ireland told me he had observed most studios taking steps that extended beyond the union’s contractual demands, setting up internal ethics boards and legal reviews. As far as he was aware, they seemed to be using AI not to replace performers but to enhance editing, clean up sound, and smooth over visual inconsistencies — for the moment, anyway. This restraint, he suggested, might well prove temporary — a strategic pause while studios wait for the public furor over AI to cool and the law to catch up.
Conversations with dozens of workers at every level suggested a different story, of off-the-books experimentation and plausible deniability. Roma Murphy, a writer and co-chair of the Animation Guild’s AI committee, had heard of “rogue actors” at studios — lower-level staffers under deadline pressure — asking workers to use AI without formal clearance. One animator who asked to remain anonymous described a costume designer generating concept images with AI, then hiring an illustrator to redraw them — cleaning the fingerprints, so to speak. “They’ll functionally launder the AI-generated content through an artist,” the animator said. Reid Southen, a concept artist and illustrator who has worked on blockbusters like The Hunger Games and The Matrix Resurrections, ran an informal poll asking professional artists whether they had been asked to use AI as a reference or to touch up their finished work. Nearly half of the 800 respondents said they had, including Southen. “Work has dried up,” he told me. Southen, who has worked in film for 17 years, said his own income had been slashed by nearly half over the past two years — more than it had during the early days of the pandemic, when the entire industry shut down. It’s becoming increasingly common for producers to cut out the artist entirely. “I know for a fact,” one producer said, “that some producers are developing shows and they need some art to pitch an idea.” Normally, they would pay an artist to do the art; now they’re just prompting. “If you’re a storyboard artist,” one studio executive said, “you’re out of business. That’s over. Because the director can say to AI, ‘Here’s the script. Storyboard this for me. Now change the angle and give me another storyboard.’ Within an hour, you’ve got 12 different versions of it.” He added, however, if that same artist became proficient at prompting generative-AI tools, “he’s got a big job.”
The Animation Guild’s contract allows studios to use AI only through a defined process that entails both a meeting between the animator and the studio and the opportunity for the animator to propose alternatives. As far as the Guild knows, no such meeting has yet taken place. Most of these off-the-record asks happen during early development: pitches, mood boards, preproduction. The work is invisible, the stakes low, the temptation high. “Any artist or designer worth their salt would push back on the quality, but we’re all busy fighting the clock, fighting the budget,” said Sam Tung, a storyboard artist and member of the Guild’s AI committee. “And if your back’s against the wall, it’s tempting — even if the result is of dubious quality and dubious ethical makeup.”
Another reason studios may be moving slowly to officially integrate certain forms of generative AI into their processes is that the technology simply isn’t good enough yet. In March, I visited Joel Kuwahara, the co-founder of Bento Box — the animation studio behind Bob’s Burgers and other network hits — at his production office in Burbank. Some Hollywood animators believe AI has no place in the creative process, but Kuwahara isn’t one of them. “I’m a tech lover,” he told me. He got his start over 30 years ago on The Simpsons and was part of the team that digitized the show in the mid-’90s, moving it from hand-painted cels to digital ink and paint. “If you look at AI as a productivity tool, animation is an industry that could use it,” he said. But most of the tools he’d tested so far hadn’t done what they claimed. Recently, he told me, he tried an AI storyboard generator. He was working on a board with a standard establishing shot of a house. “It was terrible,” he said. “I was trying to prompt it to move the camera up ten degrees, and it gave me a whole new house.” Since the 1990s, despite decades of automation, the storyboard process for a half-hour animated show on network television has typically taken four to six weeks. “People will say, ‘Well, why can’t we make it three weeks?’” Kuwahara said. “And it’s like, ‘Because it takes time for a board artist to think about the scene. You can’t make your brain go faster.’ I’m not saying they couldn’t do it in three weeks, but you’re going to get a board that looks like it was boarded in three weeks.”
In an era of Netflix glut and endless Marvel spinoffs, there’s the question of how much quality still matters to studio heads or the viewing public. I spoke to one VFX artist and technologist who advises studios on how to incorporate AI into their workflows. He asked to remain anonymous because, as he put it, “most of the stuff I’m doing is locked in NDAs.” Lately, he said, when a director asked for a small element — a swirl of smoke or a spark of flame — he would create it using generative AI. It didn’t look as good as it would have if he’d used traditional VFX software like Houdini or Maya, but he didn’t think most people would notice the difference. “Oh, there’s quality lost,” he said. “But that’s only lost on the people who appreciate it, like fine wine.”
After sitting in meetings between studios and other AI companies, the VFX artist had come to a conclusion: Sometimes they were selling possible futures, not finished tools — promises that, in a few months or years, the technology would be able to generate anything from sophisticated action sequences to entire performances. Some of the meetings he’d attended lately had disturbed him. “I’ve listened to execs say, ‘So we can load a season of scripts into AI and it can churn out a show in a day?’ And I’m just like, Is that what you want? Meanwhile, all the people under them, who are also invited to listen, walk out of those meetings wanting to take a shower.”
In 2023, a group of artists sued Runway and several other generative-AI companies. In response to that suit (which is ongoing) and dozens of similar cases, the companies have largely argued that training their models on copyrighted material qualifies as fair use so long as the models do not reproduce identifiable elements of the original work. In our conversations, Valenzuela likened individual artworks or images to “sand on a beach” — too small and numerous to meaningfully influence the result. When I brought up the example of someone prompting a model to generate a video “in the style of Wes Anderson,” he said the ethical problem rested on the Wes Anderson enthusiast, not on Runway. “You can still do that without AI systems — shoot a film, color-grade it in the style of Wes Anderson, and promote it that way,” he said. “But the person who’ll get in trouble won’t be the camera or the editing software or the computer. It’ll be you.”
For Lyonne, the issue wasn’t just ripping off artists; it was that a company would be profiting off the stolen labor. (A devoted cinephile, she has said her funeral plans include a monthlong screening series at Film Forum.) When she and Mooser were first kicking around the idea of starting an AI movie studio a year and a half ago, it was Lyonne who suggested they try to make it ethical. “Why can’t we just make a clean model?” she recalled asking Mooser. “That would solve 97 problems.”
OpenAI and other tech companies have long maintained that it is impossible to create a generative-AI model without scraping the internet. The data required to teach these systems how to depict the world — visually, linguistically, emotionally — is simply too vast. Mooser thought that, with the right amount of money and talent, it might be possible. By that point, he had launched multiple tech companies and was skilled at getting powerful people on board with his projects. His mentors, current and former, include Bob Iger and Elon Musk. (He says he and Musk remain in touch but it is “complicated.”) AI was his next act. In 2023, Mooser began meeting with studio heads, agents, and executives — including Iger and former Disney chairman Jeffrey Katzenberg — to discuss what he was calling the “Pixar of AI.” He imagined it as a collaboration between technologists and filmmakers, with a focus on animation.
During a meeting with Vinod Khosla — the VC billionaire who has famously spent years battling the State of California for the right to keep the public off his stretch of Pacific coastline — Khosla mentioned he was good friends with Steven Spielberg. “Steven Spielberg keeps asking me how AI is going to change the industry,” he told Mooser. “And I don’t know the answer, but I bet you would. So if I invested in you, I could bring him down and you could tell him.”
Khosla decided to invest. So did Hemant Taneja, the CEO of General Catalyst and one of the most powerful evangelists for AI in the world. He saw Asteria, in part, as a public-relations project for artificial intelligence itself. “If you want to transform an industry,” he told me, “you have to win the hearts and minds in society. Storytelling is a big part of that.” For decades, he said, the popular image of AI was dystopian — killer robots, sci-fi apocalypse. Those narratives need to change, he said: “We’re trying to show the proof points. That AI isn’t just a risk; it’s an opportunity. It makes everybody superhuman.” He liked the idea of a clean model because he felt it would help build trust among those he needed to convince.
Taneja connected Mooser with Moonvalley — a group of former engineers from Google’s DeepMind — to build the model. (They named it Marey, after the French physiologist who invented chronophotography, an early motion-capture technique.) The Moonvalley team, Mooser said, had figured out how to train models by using less data and licensing it directly from filmmakers, libraries, archivists, and a new crop of businesses known as AI data brokers. “Our timing was right,” Mooser told me. “If it was six months earlier that we tried to do it clean, I’m not sure it would have been possible. It was still very expensive.” Its model remains something of a black box. Mooser promised to provide me with details about where and how exactly the company had managed to pay for and acquire a sufficient trove of data, but in the end, a spokesperson for Moonvalley declined to share that information, claiming it was confidential.
Moonvalley, now Asteria’s parent company, has raised over $100 million so far. According to those who work in visual effects, the company has assembled an impressive team. Paul Trillo, a director known for his inventive, technically experimental work, joined as a strategic partner and has already helped shape Asteria’s hybrid-animation pipeline, which draws on traditional filmmaking and AI techniques. Benjamin Lock, the VFX supervisor for Rogue One: A Star Wars Story and Ready Player One, now heads up Asteria’s visual-effects initiatives. One VFX artist who had observed Asteria’s filmmaking process described it as far from “pushing buttons and prompting. They’re actually doing, like, 20 things to get the image out,” he said. “When you see a Paul Trillo piece, it looks effortless, but I’ve seen the steps. It’s not easy.”
A recent music video offers a glimpse into how Asteria is blending hand-drawn artistry with AI: For the project, Trillo collaborated with the L.A. illustrator Paul Flores, who hand-drew 60 original images. The team used those to train a custom AI model in Flores’s style, allowing them to generate hundreds of additional images. From there, they used a 3-D generation tool to create a digital version of a city — nicknamed Cuco Town — that enabled more dynamic camera movements and recurring environments. A team of 20 people ultimately transformed Flores’s sketches into a trippy animated short with a layered, painterly look.
Like Runway, Asteria is meeting with almost every major studio and has partnered with some of them, though it has yet to announce any of these collaborations. It also has a number of projects in development, including an adult animated-short series called The Odd Birds Show. Lyonne is currently in preproduction with Asteria on a sci-fi film she’s creating with the filmmaker Brit Marling and the futurist Jaron Lanier, a pioneer of virtual reality. Lyonne’s interest in futurism began while she was filming Russian Doll, her critically acclaimed Netflix series about a woman who keeps dying and reliving the same night. “Russian Doll III in a way is what I’m doing here,” she said. She didn’t want to say much about the screenplay, beyond that she imagined it as part ’90s indie, part “Chutes and Ladders open-world gaming.” She had not yet used any AI while working on it but planned to use Marey to generate the visuals in the film’s second half. She had attempted to use it for screenwriting, but the results didn’t meet her standards. “I have not figured out at all how to use it in writing that I actually feel good about,” she said. “I want it so badly to work.”
For Lyonne, the draw of AI isn’t speed or scale — it’s independence. “I’m not trying to run a tech company,” she told me. “It’s more that I’m a filmmaker who doesn’t want the tech people deciding the future of the medium.” She imagines a future in which indie filmmakers can use AI tools to reclaim authorship from studios and avoid the compromises that come with chasing funding in a broken system. “We need some sort of Dogme 95 for the AI era,” Lyonne said, referring to the stripped-down 1990s filmmaking movement started by Lars von Trier and Thomas Vinterberg, which sought to liberate cinema from an overreliance on technology. “If we could just wrangle this artist-first idea before it becomes industry standard to not do it that way, that’s something I would be interested in working on. Almost like we are not going to go quietly into the night.” Of course, not everyone is convinced by Lyonne’s vision of democratization. The doomsday scenario is easy to imagine: powerful tools controlled by even more powerful companies, producing infinite, low-cost content.
Not long ago, Lyonne had an opportunity to speak with David Lynch, one of the giants of a previous generation of filmmakers and an early convert to digital cameras. Before he died, Lynch had been her neighbor. One day last year, she asked him for his thoughts on AI. Lynch picked up a pencil. “Natasha,” he said. “This is a pencil.” Everyone, he continued, has access to a pencil, and likewise, everyone with a phone will be using AI, if they aren’t already. “It’s how you use the pencil,” he told her. “You see?”