AI Entertainment Studios: How Gen AI Toolsets Are Transforming Production Workflows


Studios expect generative AI workflows to diminish the traditional previsualization and post-production stages

A new crop of independent entertainment studios has emerged, as VIP+ examined last week, bringing generative AI tools and models into film, TV and short-form video content production and designing professional workflows around them.

SEE ALSO: The New Breed of Entertainment Studios Pioneering Gen AI-Powered Production

AI toolsets used on productions change according to the needs of the project and as new tools or models become available. Several studios described themselves as tool-agnostic, integrating whatever performs best for the task at hand.

For these studios, experimentation to discover production methods with generative AI takes priority over the ongoing legal murkiness and ethical qualms that result from many AI models training on copyrighted data, a reality of how the current generation of AI models has been developed. However, AI film animation studio Asteria intends to transition to using Marey, a forthcoming image and video model developed by its partner AI research firm Moonvalley, trained exclusively on licensed data.

Overall, AI studios were well versed in image and video foundation models, referencing Midjourney, Flux or Leonardo (a tool built on Stable Diffusion) and Runway, Kling, Minimax, Luma AI, Haiper and others. Studios also referenced using AI tools that have been developed for markerless motion capture (e.g., Wonder Dynamics), lip-sync (e.g., Hedra, Flawless), style transfer (e.g., Runway's Act One) and voice (e.g., ElevenLabs).

Several studios have custom-built their own internal workflow solutions, sometimes referred to as proprietary tech stacks. Often, these are software layers that aggregate an array of the studio's preferred models via open-source integrations or APIs, with the main goal of streamlining workflows for internal creative teams.

This allows them to access multiple models within a single tool instead of inefficiently "bouncing around" between multiple web-based AI tools to output and modify content, which sources said was one of their key UX pain points. For example, Promise's workflow solution MUSE intends to offer a "streamlined, collaborative, and secure production environment" for artists.
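In rough terms, these aggregation layers resemble a router that dispatches each creative request to a preferred model backend behind a single interface. The following is a minimal, entirely hypothetical Python sketch of that pattern; none of the class or function names come from the studios' actual stacks, and real implementations would wrap vendor APIs rather than the stand-in functions used here.

```python
# Hypothetical sketch of a model-aggregation layer: one internal interface
# that routes each task to whichever backend the studio prefers for it.
# All names are illustrative, not taken from any studio's software.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class GenerationRequest:
    task: str    # e.g., "image", "video", "lipsync"
    prompt: str  # the artist's creative direction


class ModelRouter:
    """Maps a task type to a preferred backend, so artists call one tool
    instead of bouncing between separate web-based AI apps."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[GenerationRequest], str]] = {}

    def register(self, task: str, backend: Callable[[GenerationRequest], str]) -> None:
        # Swapping the preferred model for a task is a one-line change here.
        self._backends[task] = backend

    def generate(self, request: GenerationRequest) -> str:
        if request.task not in self._backends:
            raise ValueError(f"No backend registered for task: {request.task}")
        return self._backends[request.task](request)


# Stand-in backends; a real stack would call external model APIs instead.
def image_backend(req: GenerationRequest) -> str:
    return f"[image output for: {req.prompt}]"


def video_backend(req: GenerationRequest) -> str:
    return f"[video output for: {req.prompt}]"


router = ModelRouter()
router.register("image", image_backend)
router.register("video", video_backend)
print(router.generate(GenerationRequest("image", "rainy neon street")))
```

The design choice the sources hint at is the registration step: because backends are pluggable, a studio can swap in a new model the week it ships without retraining artists on a new interface.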

For the same reason, multiple AI studio sources referenced using ComfyUI, which offers a single interface for accessing multiple AI tools to sequentially and specifically modify an AI output. "It's quite powerful and adds levels of control that a professional artist would need," said Eric Shamlin, CEO at Secret Level.

In some cases, AI studios developing their own workflow solutions also expect to license them out as enterprise creative and collaboration software positioned for content production. Invisible Universe is in conversation with domestic and international studios and will be opening its cloud-based software, Invisible Studio, to a prosumer base, such as social media content creators. Likewise, Secret Level and Promise eventually intend to offer their respective tools, Liquid Studio and MUSE, under a SaaS model.

In theory, it's possible to create a fully AI production from end to end, where every element is synthetic. But in practice, AI studio sources described "hybrid" production workflows that intentionally combine human artists and their creative work, AI automation where it provides value, and traditional production methods where they are still needed or desired, including camera shoots, motion capture and traditional CGI. Sources noted that the needs of any given project have still often meant hiring human writers, actors, artists, animators and composers.

"We're AI-first or AI-forward, but it's not AI only. AI is a primary tool for us, but we're still doing live-action shoots, traditional CG, traditional animation. But those things are now accelerated or augmented by AI. It's very much a hybrid production model," said Shamlin.

Studio sources repeatedly said there is no such thing as a fully automated AI movie: raw AI outputs still require substantial manual edits to "look right," including fixing hallucinated artifacts and the "uncanny valley" feel still apparent in AI imagery.

In short, studios are searching for ways to make AI outputs look not like AI. Right now, there is more confidence about the ability to do this for animation than live action, although the photorealism of AI imagery is fast improving. Some also expressed early confidence they would find methods to "crack" live action.

Sources contend gen AI tools are restructuring the conventional production pipeline, normally a stepwise progression from previsualization to production (shoots) to post-production (CG rendering).

For example, some described a newfound ability to visualize an entire film upfront, using AI image and video tools to generate visuals for the whole project based on the script. These AI frames could then act as a first cut of the film, to be further edited and refined with generative AI tools or traditional CGI techniques into the final frames.

Alternatively, the initial AI frames would be treated as a high-fidelity storyboard and used to guide a production shoot with actors on set or with greenscreen.

Some interpreted that restructuring as inverting the pipeline to "post-first" or "post-to-pre," where upfront visualization allows them to see and iterate immediately on visuals that would otherwise have had to wait until production shoots or post-production VFX to CG-render. As a result, sources expected gen AI would diminish the duration and demands of both traditional pre- and post-production.

"As a studio, we're approving the movie long before anybody steps on any form of set or traditional tool," said Tom Paton, CEO at Pigeon Shrine.

"Conventional pre-production is kind of dead in our minds," said Shamlin, who heads Secret Level, the studio that produced Coca-Cola's controversial AI-generated commercial in November. "We had a lot of the final frames you see in the final spot in pre-production in lieu of storyboards, because you can just go straight to the AI and start fully rendering, whereas in traditional production you'd have to wait until after you shoot. All that starts in pre-production now."

Forthcoming from VIP+

* Dec. 31: How AI studios are experimenting with image and video model fine-tuning
