The Hollywood film and visual effects (VFX) industry remains in a state of uncertainty regarding the use of generative artificial intelligence (AI) for imagery and video production. This hesitation stems from three primary concerns: copyright ambiguity, data security risks, and insufficient tool performance. While there have been early efforts to explore generative AI’s potential, such as Lionsgate’s collaboration with AI video startup Runway, major Hollywood studios and VFX companies are still navigating what is permissible and what is not.
Copyright Ambiguity
The most significant challenge preventing widespread adoption of generative AI in film and television is the uncertain legal landscape surrounding copyright. AI-generated content falls into a gray area where ownership and licensing rights are not clearly defined. Studios worry that AI models trained on datasets containing copyrighted material without proper licensing could expose them to legal repercussions. Additionally, under current U.S. Copyright Office guidance, AI-generated outputs cannot receive copyright protection unless they are significantly modified by a human creator, and the extent of modification required remains unclear.
A major roadblock is the lack of transparency from AI developers regarding the datasets used to train their models. Many generative AI models rely on vast datasets scraped from the internet, including copyrighted images and videos. This lack of disclosure makes it impossible for studios to determine whether a given AI tool poses a legal risk, leading them to err on the side of caution and avoid such tools altogether.
One VFX professional summed up the situation: “We have clients on the film side who don’t want to go anywhere near generative AI until there’s legal clarity. Various court cases still need to unfold before we can fully assess the risks.”
Data Security Concerns
Beyond copyright issues, data security is another critical concern for Hollywood studios. To protect the confidentiality of proprietary content, studios require complete control over any data used in AI-generated video production. However, most generative AI tools operate on cloud-based platforms, which means any data uploaded could potentially be stored, accessed, or repurposed by the service provider.
Studios also worry about the possibility of AI models being trained on their content without permission. Some major studios categorically refuse to use generative AI tools that lack clear guarantees regarding data isolation and security. For instance, studios have expressed particular caution about AI tools developed in regions such as China, where data-sharing regulations raise concerns over data sovereignty and unauthorized access.
A VFX source explained, “If we’re working on a scene with an A-list actor, we simply can’t risk that footage going anywhere near a cloud-based generative AI tool. That’s a complete no-go.”
Tool Performance and Reliability
Despite recent advancements in AI-generated video technology, the quality and control over outputs still fall short of professional industry standards. While some AI-generated clips may look impressive on platforms like YouTube, they often lack the precision required for high-end film and TV production.
Even if legal and security concerns were resolved, the current generation of AI tools lacks the control and fidelity needed for seamless integration into major productions. VFX professionals note that generative AI can be useful for limited applications, such as background establishers or adding slight movement to matte paintings. However, for complex sequences requiring detailed human interaction and precise cinematography, AI-generated video remains inadequate.
“Even if the legal ambiguity disappeared tomorrow, the technology still wouldn’t meet our quality standards,” said one VFX supervisor. “It’s just not there yet.”
The Industry’s Response
Faced with these challenges, Hollywood studios and VFX companies are taking a cautious but proactive approach. Many studios have established internal AI task forces to evaluate emerging technologies, track developments in copyright law, and determine acceptable use cases for generative AI. Some companies are investing in AI models trained exclusively on licensed or proprietary data to minimize legal risks.
VFX firms have also noted a stark contrast in attitudes toward AI among their clients. While brands and advertising agencies are more willing to experiment with AI-generated content, film and television studios remain highly conservative. Some studios have outright banned generative AI until legal and ethical concerns are addressed, while others are taking a more flexible approach by evaluating tools on a case-by-case basis.
A VFX source summarized the industry’s stance: “Every studio is trying to figure out their position on generative AI. Some don’t want to touch it at all until the law is settled. Others are open to using it as long as it’s disclosed and carefully vetted.”
The Need for Industry Standards
Given the complexity of the issue, many industry professionals are calling for a unified set of guidelines to govern the use of generative AI in Hollywood. There have been early discussions between studios, VFX houses, and industry organizations about creating a standardized set of criteria for AI tools that can be approved for production use.
Organizations like the Academy Software Foundation (ASWF) have been suggested as potential arbiters for developing an industry-wide “code of conduct.” Such a framework could include a list of AI tools vetted for security and copyright compliance, helping studios navigate what is currently a legal and ethical minefield.
However, developing a comprehensive industry standard remains a complex challenge. Different studios have different risk tolerances, and AI developers’ reluctance to disclose training data further complicates the process. Without clear legal precedents or transparent licensing frameworks, it is difficult to determine definitively which AI-generated content is permissible.
Looking Ahead
Despite the current roadblocks, generative AI is unlikely to be sidelined permanently. Some AI models are already being developed using “clean data” that minimizes copyright risks, and certain machine learning applications, such as image denoising and face-swapping, have been widely accepted in the industry for years. The broader controversy surrounding generative AI has led to hesitation even around these established AI-based tools, but experts predict that the industry will eventually adopt AI in a more structured manner.
Daniel Barak, VP and global executive director at R/GA, put it bluntly: “There’s no world where AI just gets ruled out entirely. You can’t leave that much power on the table and decide it’s unusable. The legalities need to be sorted, but this technology isn’t going away.”
As the legal landscape evolves and AI models become more transparent, the entertainment industry will likely find a balanced approach to integrating generative AI into its workflows. For now, though, Hollywood remains in a holding pattern, waiting for the necessary clarity before fully embracing AI-generated video content.