May 1, 2023 - The future is now. Generative artificial intelligence ("AI") is being used to generate new content, including text, images, animation, video, software code and music.  Although AI itself is not new, it has come into sharp focus in recent months, accelerated by the availability and widespread adoption of user-friendly programs such as ChatGPT1, DALL-E, Stable Diffusion/DreamStudio, Midjourney, Jasper and CopyAI. The only thing that seems to be outpacing the technological advances AI is driving is the complex array of legal issues it is creating.

There are numerous content-related applications for AI among media and entertainment companies, depending on the type of company (e.g., publisher, video game company, creative agency, studio/production company) and the specific use cases.  Potential applications include generating ideas and content, fact-checking, editing text, moderating misinformation, and targeting and ranking content.  Among media and entertainment companies, to name a few use cases, we have already seen publishers such as CNET using AI to assist in the writing of certain content2 and BuzzFeed embracing AI to enhance and personalize certain types of content offerings; video game creators such as Naughty Dog using AI to create the environment for the highly popular “The Last of Us” video game; creative agencies using AI for various content-creation tasks; and entertainment companies adopting AI technologies to augment visual effects, preserve and colorize film, localize content, alter actors’ facial expressions3, age and de-age actors’ faces4, and generate synthetic human voices.  This article focuses on content (not code) generation applications and the key U.S. legal considerations bearing upon them.

The legal landscape surrounding the use of AI for content applications is uncertain and rapidly evolving; some have likened the current stage of AI to the early days of Napster.5 The use of AI tools involves legal and reputational risks that clients (and media and entertainment companies in particular) must carefully manage. We address below some of the legal and ethical considerations associated with the creation of so-called “synthetic media.”

  • Intellectual Property and Confidentiality:
    • Copyright Infringement: Can AI-generated content (“outputs”) be considered a derivative work or implicate the right of reproduction?6 Is it infringement or inspiration? Providers of AI tools use data-scraping to train their AI models (“inputs”).7 To the extent that such data-scraping constitutes copyright infringement, or a Digital Millennium Copyright Act violation due to the removal of copyright management information, how strong are the arguments that such uses are permitted under the fair use doctrine or an implied license?8 With respect to the fair use question, are AI outputs sufficiently transformative to be eligible for a fair use defense?9 A few closely watched lawsuits are expected to provide some clarity on these issues,10 and the U.S. Copyright Office’s recently launched “Artificial Intelligence Initiative”11 will examine the copyright law and policy issues raised by AI, including not just the scope of copyright protection in AI-generated works but also the use of copyrighted materials in AI training. Who is liable when an AI's output is deemed to infringe someone else's copyright? If an AI tool provider is found to have infringed third-party copyrights, could a media company using such a tool be liable for infringement as well? It bears mentioning that copyright infringement can be direct, contributory or vicarious. To complicate matters further, some publishers and other content producers could find themselves on both sides of the AI usage aisle: as content owners seeking to be paid for the use of their content to train AI models, and as users of AI tools to generate their own content.12 Finally, to what extent can content owners look to copyright law to protect against deepfakes, particularly where the AI was trained upon that content owner’s copyrighted work?13
    • Copyright Protectability: The U.S. Copyright Office recently denied an attempt to register copyright in individual images created using the Midjourney AI tool.14  At the same time, the Office said that it “will register works that contain otherwise unprotectable material that has been edited, modified, or otherwise revised by a human author, but only if the new work contains ‘a sufficient amount of original authorship’ to itself qualify for copyright protection.”15 While this decision could be appealed, it provides directional guidance and suggests that the greater the human alteration and involvement in the creative process, the more likely the creator will be able to claim copyright in the finished work. On March 10, 2023, the U.S. Copyright Office published guidance on the copyrightability of works created using generative AI.16 That guidance reaffirms that some amount of “creative input or intervention from a human author” is required. But, of course, that raises the question: how much?
    • Trademark Infringement and Unfair Competition: In its lawsuit against Stability AI in the United States District Court for the District of Delaware, Getty Images asserts, inter alia, that the inclusion in Stable Diffusion/DreamStudio’s outputs of Getty Images marks, or visually degraded versions thereof, gives rise to claims of trademark infringement, unfair competition, trademark dilution and deceptive trade practices. Could media companies that publish such outputs face liability for doing so? Relatedly, to what extent may unfair competition (e.g., passing off) causes of action be relied upon to protect against deepfakes?
    • Right of Publicity: AI may be used to alter an individual’s voice and image, including through the creation of deepfakes, thereby raising questions about the extent to which an individual can control the use of their voice or image for new AI-generated content.  The actor James Earl Jones reportedly granted a Ukrainian startup a license to his voice, allowing the company to recreate his iconic Darth Vader voice using AI. The musicians Drake and The Weeknd reportedly granted no such license when a creator named “Ghostwriter” purportedly used AI to generate an entirely new song in the artists’ vocal styles. Depending on the nature of the AI use and whether the individual’s voice or image is recognizable, state right of publicity statutes (which exist in some, but not all, states and vary among those states) may provide protection for the individual (and, in some states and under certain circumstances, the individual’s heirs) against use of the individual’s voice and image, in addition to potential copyright claims where the voice or image was taken from, or resembles elements of, a prior copyright-protected work.
    • Trade Secrets and Confidential Information:  A media company’s inputs into an AI tool may be used to train the tool’s model, creating the risk that those inputs could surface in outputs delivered to a third-party user.17 Because trade secret protection requires that trade secrets be maintained in secrecy, inputting trade secrets into an AI tool creates a risk of loss of trade secret protection.  Inputting third-party confidential information held by a media company could similarly run afoul of the media company’s non-use or non-disclosure obligations.
    • Insurance Considerations: Media companies should ascertain the position of their Errors & Omissions liability insurance carriers on the use of AI and what pre-publication or pre-broadcast review processes their carriers may require.
  • Consumer Protection: With the proliferation of virtual influencers,18  which could potentially be AI-powered, the Federal Trade Commission has proposed revisions to its Guides Concerning the Use of Endorsements and Testimonials in Advertising19 (the "Guides”) that would include virtual influencers.20  Thus, brands that work with virtual influencers would need to disclose their connection and otherwise comply with the Guides.  This raises the question of how, in the context of a virtual influencer, the Federal Trade Commission would enforce the requirement that influencers’ endorsements reflect the honest opinions of the influencer and that the influencer be a genuine user of the product. Further, companies should consider disclosing that the influencer is not human.
  • Content Integrity:  AI outputs may be factually inaccurate or even false, creating a risk that publication of content based on those outputs could lead to defamation claims. AI tools may inadvertently plagiarize a previous work. AI also has the potential to complicate efforts to validate the identity of sources and to make it more challenging, during the research process, to rely on purported media reports or social media posts.  AI data set inputs and algorithms may embed biases that result in biased output content. Further, given the risk that AI tool inputs could be included in outputs to a third-party user, the identity of confidential sources could be exposed if inputted into AI.  Having clear content integrity guidelines relating to the use of AI21 may help media companies mitigate some of these risks, and companies should consider requiring human review of any investigative or other news reportage generated using AI22, or even a wholesale prohibition on the use of AI by their employees and contractors in connection with that content.
  • Regulatory and Compliance:
    • Computer Fraud and Abuse Act of 1986 (“CFAA”):  Does data-scraping by AI tool providers of sites whose terms of service prohibit such activities create the risk of claims under the CFAA, which prohibits accessing a computer without, or in excess of, authorization and carries criminal liability? If so, are there circumstances in which a publisher or other media company could be found guilty of aiding and abetting the commission of such an offense? The Supreme Court narrowed the application of the CFAA in 2021,23 but data-scraping, and in particular the manner in which it is performed, may still subject one to a CFAA charge and remains fraught with legal peril.24
    • Data Privacy and Protection Violations: The use of data sets containing personal data by providers of AI tools to train AI models, or as inputs by media companies to generate content, implicates applicable data protection laws. Users of personal data for these purposes may be subject to substantial penalties25 if they do not obtain that personal data in compliance with such laws.
    • Section 230 of the Communications Decency Act: The Communications Decency Act generally provides immunity for an interactive computer service (a content host) with respect to content posted by an information content provider (a content creator).  For media companies that host user-generated content, could the fact that hosted content is generated by AI affect such companies’ immunity under the Act and, if so, under what circumstances?26
  • Labor: Certain labor unions such as the Writers Guild of America and the Screen Actors Guild - American Federation of Television and Radio Artists have begun to stake out their positions on the use of their members’ creative material and performances as AI inputs and outputs.27 It remains to be seen how this issue will play itself out in new collective bargaining agreements.
  • Contractual Risks: For companies that license, syndicate or are commissioned to produce content, how does the use of AI tools affect their ability to make representations and warranties regarding originality, ownership and non-infringement, and their exposure under indemnification provisions? For example, the Terms of Use for OpenAI (developer of ChatGPT and DALL-E) provide that, as between OpenAI and the user and to the extent permitted by applicable law, the user owns all input. The Terms of Use further provide that OpenAI assigns all rights in the output to the user. Similarly, the API Terms of Service for Stability AI (developer of Stable Diffusion and DreamStudio) provide that, as between Stability AI and the user, the user owns the output to the extent permitted by applicable law, and the user represents that it owns the input. At the same time, OpenAI’s Terms of Use make an exception for output generated from a prompt that another user has also inputted, providing that such output cannot be owned by any of the users making that same prompt. Companies may not be in a position to provide clear chain of title and make representations and warranties with respect to content generated using AI tools. This consideration may also arise in an M&A context. As with open source software, buyers may want to consider diligencing the seller’s use of AI and addressing any associated risks in the relevant purchase or merger documents.
    As media and entertainment companies determine how AI can play a role in optimizing their content-generation processes, they should consider developing robust governance around the use of AI, hand-in-hand with responsible, self-regulatory codes of conduct28 and best practices, to mitigate legal and reputational risk and preserve brand safety.

    Consideration should be given to adopting the following specific practical steps and guardrails:

    • Identification of Use Cases and Risk Assessment:  Companies should identify the specific types of applications for which their employees and contractors could use AI in connection with the generation of content. Once these potential applications have been identified, companies should rank different types of use cases and categories of synthetic media based on level of risk. For example, content that is being syndicated to third parties and investigative and other news content may be deemed high risk, such that a company prohibits the use of AI in connection with that content. As another example, companies should consider prohibiting the use of trade secrets and confidential materials as inputs into AI tools. Regardless, companies should strongly consider requiring employees who use an AI tool to enable any settings or other mechanisms offered by that tool that prevent it from using their inputs to train its model.
    • Tracking:  Companies should develop rights-management mechanisms for tracking the use of AI and any content generated using AI.29 These tracking mechanisms should identify the AI tool used to generate the relevant outputs and include a copy of the AI tool’s terms of use/service as posted on the date of use.  Such tracking mechanisms should also identify the role that humans played in generating the content, including the degree of human alteration. (An illustrative sketch of such a tracking record, together with a simple audit pass, appears after this list.)
    • Auditing:  Companies should perform periodic internal audits of content to determine whether it was generated by AI tools in compliance with company policies.30  
    • Oversight:  Companies should consider designating content integrity personnel to oversee and monitor the use of AI, especially in permitted higher-risk use cases.
    • Training:  Employees and contractors who generate content should periodically undergo training in the appropriate use of AI and related company policies.  Companies may want to further consider requiring their employees and contractors to certify that they have reviewed company policies regarding AI.
    • Transparency and Disclosure:  Companies should consider identifying (e.g., by applying disclosures to) content generated using AI when publishing or otherwise disseminating or sharing that content.31 This identification may specify the particular manner in which AI was used in connection with generating that content.  As an example (and to help support the argument for copyright registrability), for content created using AI tools, companies could include explanations in the end credits detailing exactly how AI impacted the final work and how much the work was altered by humans.
    • Contractual Restrictions and Terms of Use: Just as companies include restrictions in third-party contractor agreements on the use of open source software, companies should also consider restricting the use of AI tools without company approval by third-party contractors who create content.  Companies should develop policies related to the outbound licensing or assignment to third parties of rights in employee-created and contractor-created synthetic media. Companies should also review their website Terms of Use to ensure that they explicitly prohibit data-scraping of their websites, and may wish to pair that contractual prohibition with technical signals such as a robots.txt file (an illustrative sketch follows this list).
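    To make the tracking and auditing recommendations above concrete, the following is a minimal illustrative sketch, in Python, of what a tracking record and a periodic audit check might look like. All names, fields and policy rules here are hypothetical assumptions offered for illustration only; they are not drawn from any standard, statute or vendor API, and any real implementation would need to reflect a company's own policies and the advice of counsel.

      from dataclasses import dataclass
      from datetime import date

      # Hypothetical record for a piece of AI-assisted content; the field
      # names are illustrative assumptions, not an industry standard.
      @dataclass
      class AIContentRecord:
          content_id: str
          ai_tool: str              # which AI tool generated the output (e.g., "ChatGPT")
          terms_snapshot: str       # copy of, or pointer to, the tool's terms as of the date of use
          date_of_use: date
          risk_tier: str            # e.g., "low", "medium", "high" per the company's own ranking
          human_contribution: str   # description of the degree of human alteration/involvement
          ai_use_disclosed: bool    # whether an AI-use disclosure accompanies the content

      # A simple audit pass flagging records that violate hypothetical
      # policy rules of the kind discussed in the list above.
      def audit(records: list[AIContentRecord]) -> list[str]:
          findings = []
          for r in records:
              if r.risk_tier == "high":
                  findings.append(f"{r.content_id}: AI used for a prohibited high-risk use case")
              if not r.human_contribution:
                  findings.append(f"{r.content_id}: no record of human involvement (registrability risk)")
              if not r.ai_use_disclosed:
                  findings.append(f"{r.content_id}: no AI-use disclosure recorded")
          return findings

    Under this approach, a record would be created at the time of generation and retained alongside the content, so that later questions (for example, about chain of title or copyright registrability) can be answered from the record rather than from memory.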
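    Separately, on the Terms of Use point: a robots.txt file is one common technical complement to a contractual prohibition on data-scraping, although it is advisory only and does not itself create or replace legal rights. The snippet below is an illustrative sketch; crawler user-agent strings vary by provider and change over time, so any real file should be confirmed against each crawler's current documentation. CCBot, the Common Crawl crawler whose corpus has been used to train some AI models, is one commonly cited example.

      # robots.txt (illustrative only)
      # Disallow a known dataset crawler; the user-agent string is an
      # example and must be verified against the crawler's documentation.
      User-agent: CCBot
      Disallow: /

      # Default rule for all other crawlers.
      User-agent: *
      Disallow: /private/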

    The introduction of AI for creation of synthetic media is a potentially transformative moment for the media and entertainment industries, but carries with it significant uncertainties. As the landscape rapidly evolves, the implementation of robust governance and frameworks may help to mitigate some of the legal and reputational risks that media and entertainment companies face when deploying AI as part of their content generation processes.

    1.  ChatGPT reached 100 million users within two months of its launch, becoming “the fastest-growing internet service ever.” Will Douglas Heaven, ChatGPT is everywhere. Here’s where it came from, MIT Technology Review (Feb. 8, 2023).
    2.  Connie Guglielmo, CNET Is Testing an AI Engine. Here’s What We’ve Learned, Mistakes and All, CNET (Jan. 25, 2023).
    3.  Brian Contreras, A.I. is here, and it’s making movies. Is Hollywood ready?, Los Angeles Times (Dec. 19, 2022).
    4.  George Winslow, Metaphysic Partners with CAA to Expand Use of Generative AI in Film, TV, TVTech (Jan. 31, 2023).
    5.  James Vincent, The lawsuit that could rewrite the rules of AI copyright, The Verge (Nov. 8, 2022).
    6.  There is disagreement over the likelihood that AI tools will copy existing works in their outputs.  A recent research study found that certain AI models memorize and regenerate individual images used as inputs to train the model.  Extracting Training Data from Diffusion Models, arXiv:2301.13188 (Jan. 30, 2023).
    7.  The USPTO has stated that the training process “will almost by definition involve the reproduction of entire works or substantial portions thereof.” Generative Artificial Intelligence and Copyright Law, Congressional Research Service (Feb. 24, 2023).
    8.  To determine whether the use of a work is fair use, four non-exclusive statutory factors must be considered: (1) the purpose and character of the use, including whether it is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use on the potential market for or value of the copyrighted work. See Generative Artificial Intelligence and Copyright Law, Congressional Research Service (Feb. 24, 2023); 17 U.S.C. § 107.
    9.  The first of the four statutory fair use factors focuses on the purpose and character of the use. For this factor, courts typically inquire into (1) whether the use is of a commercial nature, and (2) whether the use is transformative. Courts are more likely to consider transformative uses as fair. A use is transformative if it “add[s] something new, with a further purpose or different character, and do[es] not substitute for the original use of the work.” U.S. Copyright Office Fair Use Index (Feb. 2023).
    10.  Getty Images (US), Inc. v. Stability AI, Inc., No. 1:23-cv-00135 (D. Del., filed Feb. 3, 2023); Andersen v. Stability AI Ltd., No. 3:23-cv-00201 (N.D. Cal., filed Jan. 13, 2023).
    11.  Copyright Office Launches New Artificial Intelligence Initiative, NewsNet Issue No. 1004, U.S. Copyright Office (Mar. 16, 2023), https://www.copyright.gov/news...
    12.  Keach Hagey, Alexandra Bruell, Tom Dotan & Miles Kruppa, Publishers Prepare for Showdown With Microsoft, Google Over AI Tools, The Wall Street Journal (Mar. 22, 2023).
    13.  Chris Willman, AI-Generated Fake ‘Drake’/‘Weeknd’ Collaboration, ‘Heart on My Sleeve,’ Delights Fans and Sets Off Industry Alarm Bells, Variety (Apr. 17, 2023).
    14.  U.S. Copyright Office, Zarya of the Dawn Letter (Feb. 21, 2023) (reasoning that the images generated by the Midjourney technology were “not the product of human authorship”).
    15.  Id. at 11.
    16.  Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence, 88 Fed. Reg. 16190 (Mar. 16, 2023).
    17.  See Terms of use, OpenAI (Mar. 14, 2023) (stating “Input and Output are collectively ‘Content.’ [. . .] OpenAI may use Content as necessary to provide and maintain the Services, comply with applicable law, and enforce our policies.”); Sharing & publication policy, OpenAI (Nov. 14, 2022); Usage policies, OpenAI (Mar. 23, 2023); see also Stability AI API Terms of Service, Stability AI (Dec. 14, 2022) (stating “Stability and our affiliates may use the Content to develop and improve the Services [. . .]”). Note that OpenAI offers a qualified opt-out mechanism that gives users the option to disable ChatGPT from using chats to train its model. See New ways to manage your data in ChatGPT, OpenAI (Apr. 25, 2023) (noting “When chat history is disabled, [OpenAI] will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.”).
    18.  In 2020, it was proclaimed that virtual influencers commanded “three times higher engagement than [a] human influencer[].” Matt Klein, The Problematic Fakery of Lil Miquela Explained—An Exploration of Virtual Influencers and Realness, Forbes (Nov. 17, 2020). Lil Miquela, a “19-year-old robot,” currently has 2.8 million followers on Instagram.
    19.  Guides Concerning the Use of Endorsements and Testimonials in Advertising, A Proposed Rule by the Federal Trade Commission, Federal Register (Jul. 26, 2022).
    20.  Id. (stating “The Commission proposes a modification indicating that an endorser could instead simply appear to be an individual, group, or institution. Thus, the Guides would clearly apply to endorsements by fabricated endorsers.”).
    21.  For example, WIRED has published an official AI policy that spells out how the publication plans to use AI technology.  See How WIRED Will Use Generative AI Tools, WIRED, https://www.wired.com/about/generative-ai-policy/.
    22.  For example, although BuzzFeed is using AI to assist with content creation, at the moment it will not use artificial intelligence to write news stories. See Oliver Darcy, BuzzFeed Says It Will Use AI to Help Create Content, Stock Jumps 150%, CNN Business.
    23.  See Van Buren v. United States, 141 S. Ct. 1648 (2021).
    24.  Even if the CFAA is deemed not to apply, a data scraper (or someone who enables data-scraping) could still face claims under a host of legal theories, including trespass to chattels, copyright infringement, misappropriation, unjust enrichment, conversion, breach of contract or breach of privacy.
    25.  In addition to steep financial penalties, there is also a risk of disgorgement of the outputs. The FTC has required companies that used deceptive data practices to build AI models to destroy the data used as well as the models developed using such data. See Mary Ashley Salvino, ANALYSIS: FTC Privacy Authority Is Poised for Breakthrough Year, Bloomberg Law (Nov. 13, 2022).
    26.  The Supreme Court’s upcoming ruling in a closely watched case against Google, which addresses whether Section 230 protects technology platforms from liability when they use algorithms to target users with recommendations, could have implications for AI providers and users.  See Transcript of Oral Argument, Gonzalez v. Google, LLC, No. 21-1333 (U.S. Feb. 21, 2023), https://www.supremecourt.gov/oral_arguments/argument_transcripts/2022/21-1333_p8k0.pdf. Justice Gorsuch stated “[. . .] artificial intelligence generates poetry, it generates polemics today. [. . .] that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.” See also Cristiano Lima and David DiMolfetta, AI chatbots won’t enjoy tech’s legal shield, Section 230 authors say, The Washington Post (Mar. 17, 2023).
    27.  See SAG-AFTRA Statement on the Use of Artificial Intelligence and Digital Doubles in Media and Entertainment, SAG-AFTRA (Mar. 17, 2023); see also Charles Pulliam-Moore, The Writers Guild of America likens AI-generated content to plagiarism, The Verge (Mar. 22, 2023).
    28.  Ethical self-regulatory frameworks may help to provide conceptual guidance.  One such example is the Partnership on AI’s Responsible Practices for Synthetic Media, a set of recommendations to support the ethical and responsible development and deployment of synthetic media. See PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action, Partnership on AI (last visited Mar. 15, 2023); see also Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST (Jan. 2023).
    29.  Dana Rao, Responsible innovation in the age of generative AI, Adobe Summit (Mar. 21, 2023), https://blog.adobe.com/en/publish/2023/03/21/responsible-innovation-age-of-generative-ai.
    30.  Commercially available tools exist that enable users to detect AI writing, such as AI Writing Check, GPTZero, and AI Text Classifier.
    31.  In certain instances, disclaimers informing users that AI is being used may be required by the AI provider. See, e.g., Usage policies, OpenAI (Mar. 23, 2023) (“Consumer-facing uses of our models... in news generation or news summarization... must provide a disclaimer to users informing them that AI is being used and of its potential limitations.”).