
Authors collaborating with AI

Published on Nov 01, 2021

This document is the workspace for exploring the working group’s position on the topic of “Authors Collaborating With AI”.


Previous CC publications have not specifically addressed “Authors Collaborating With AI.” However, notes from previous CC publications about “AI Generations / Creations” deal with many of the same issues and can be used as a starting point here. Those notes are reproduced below, in part.

In sum, the act of collaborating with a human complicates the input/output (I/O) aspect of AI’s relationship with copyright: not because the AI itself can claim copyright, but because the human can claim copyright in what would otherwise be public domain.

For instance, a human publishes a work claiming authorship, but secretly generated the work with AI. The work should therefore be in the public domain, but because we have no way of knowing how it was made, the human is granted authorship.

Or the rights holder of a copyrighted, sufficiently delineated fictional character publishes an AI-generated work that tells a story about that character. The rights holder can then enforce rights in the copyrighted character, rights that are inseparably woven into the otherwise public domain AI-generated work.

The current incentive mechanisms for developing AI technologies might not be sufficient for the development of healthy AI/human collaborations, especially in arts & education. Further, NFTs (Non-Fungible Tokens) are already a fast-growing global financial market of valuable, liquidatable sui generis IP assets, and DAOs provide new ways to collaborate. It is therefore possible that “Web3” will provide new incentive mechanisms and support structures for AI technological development, especially for complex human-machine I/O interactions that overlap with copyright law and collaborative fictional character development.


For “AI Generations / Creations” alone, concepts like “a certain level of human input is always required” help clarify CC’s position when authors are not an active part of the AI’s process. In other words, given the relative simplicity of a “human/machine no-collaboration” environment, “human input” concepts help delineate clear lines between human-author and AI contributions.

However, when authors are an active part of the AI’s process, concepts like “a certain level of human input is always required” turn into difficult questions that likely have no easy answer for all situations. In other words, given the relative complexity of a “human/machine yes-collaboration” environment, “human input” concepts highlight situations where we cannot delineate clear lines between human-author and AI contributions.

Where is this “collaboration” complexity, the complexity arising from “a certain level of human input is [always] required,” most likely to appear?

A live performance that is fixed in real time in an audio/video recording is a likely place for this type of complexity to arise, especially when the performance relies on performer improvisation.

Human inputs in this type of situation could include a programmer “hot swapping” various I/O components within the AI during a performance. Or the AI could respond in real-time to the humans it is performing alongside. For instance, a dancing robot that responds to the movements of its human dance partner, while the human dance partner responds to the movements of the dancing robot; throughout the dance, there are many inputs & outputs shared between human and machine.
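The shared input/output loop described above can be sketched in code. This is a toy model only: the “mirror and exaggerate” machine policy and the simulated human response are hypothetical stand-ins for real perception and choreography, but they show how each partner’s output becomes the other’s input.

```python
def machine_step(human_move: float) -> float:
    """The robot mirrors the human with slight exaggeration (a
    stand-in for a learned policy in a real dancing robot)."""
    return 1.1 * human_move

def simulated_human_step(machine_move: float) -> float:
    """The human partner damps toward the robot's last move (a
    stand-in for a real person's response)."""
    return 0.5 * machine_move

def dance(steps: int, opening_move: float = 1.0) -> list[tuple[float, float]]:
    """Run the coupled loop; each log entry is (human_move, machine_move)."""
    log = []
    human = opening_move
    for _ in range(steps):
        machine = machine_step(human)          # machine responds to human
        log.append((human, machine))
        human = simulated_human_step(machine)  # human responds to machine
    return log

for human, machine in dance(3):
    print(f"human: {human:.3f} -> machine: {machine:.3f}")
```

Even in this simplified loop, no single step’s output is attributable to only one party, which is exactly the line-drawing difficulty described above.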

Though these types of collaborations are on their face more complex than “AI Generations / Creations” alone, they do not seem like a reason to grant AI authorship rights. This is especially true given that human authors can contract out percentages of their royalties to contributing engineers, as happens in the music industry for people with technical but not creative input.

People have the potential to use AI as a “ghost author” without divulging that fact to the public. In this scenario, a person would be considered the author without our knowing how much creative input that person actually contributed to the end work. This could get very complex if a work is a joint work, with one author unaware that the other contributor used AI to generate their contribution.

Perhaps artist disclosure about AI collaboration is more an issue of evidence and privacy, such as trade secret, than an issue of copyright. Even so, it could indirectly chill artistic expression on content platforms: for instance, bad actors could publish large catalogues of generative art and then misuse automated takedown systems by falsely claiming infringement against other artists.

There is also the question of whether the AI generates content that substantially samples copyrighted works. It is easy to envision a large media platform using AI to generate exclusive content based on fictional character rights it owns, with the full expectation of enforcing those character rights within the AI-generated work.

Though incentives already exist to encourage the development of AI generation, those incentives might influence AI generation in certain fields but not others, encouraging more AI generation in war than in arts & education, for instance. Or those incentives might encourage false narratives, like the undisclosed “ghost author” idea contemplated above. Further, the complexity of AI incentive mechanisms might impact “Authors Collaborating with AI” more than “AI Generations / Creations” alone.

This is not to say that extending copyright law to cover AI-generated art is the appropriate vehicle to encourage AI development in areas without sufficient existing incentives. For instance, instead of extending copyright law, various industries could establish their own contract conventions on top of copyright that appropriately share royalties with value-add contributors who might be more technical than creative but are nevertheless invaluable to the project.
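Such a contract convention could be as simple as an agreed percentage table. A minimal sketch, assuming a hypothetical `split_royalties` helper and illustrative share percentages (not any real industry’s convention):

```python
def split_royalties(gross: float, shares: dict[str, float]) -> dict[str, float]:
    """Distribute a royalty payment across contributors according to
    agreed fractional shares. Purely illustrative of the contract
    convention discussed above."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("shares must sum to 100%")
    return {name: round(gross * share, 2) for name, share in shares.items()}

# A credited author shares royalties with the engineer who built the AI tooling.
payout = split_royalties(1000.0, {"author": 0.70, "engineer": 0.30})
print(payout)  # {'author': 700.0, 'engineer': 300.0}
```

The point is that the split lives in contract, layered on top of copyright, rather than in an expansion of copyright itself.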

People are already monetizing “hands-off” generative art with NFTs. Some of these generative art NFTs require the purchaser to provide a personalized input “seed” in order to output the unique NFT. Since many of these generative art “minting” vendors use automated algorithms to generate the art, it is easy to imagine AI algorithms also included in the generative process, either disclosed or undisclosed by the programmer.
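The seed mechanism can be sketched as follows. The `mint_generative_art` function and its grid-of-palette-indices output are hypothetical stand-ins for a real minting algorithm, but they show how a purchaser-supplied seed deterministically fixes the generated piece:

```python
import hashlib
import random

def mint_generative_art(purchaser_seed: str, size: int = 4) -> list[list[int]]:
    """Derive a deterministic pseudo-random 'artwork' (here, a small
    grid of palette indices) from a purchaser-supplied seed string."""
    # Hash the seed so arbitrary strings map to a fixed-width integer.
    digest = hashlib.sha256(purchaser_seed.encode("utf-8")).hexdigest()
    rng = random.Random(int(digest, 16))
    # A real minting pipeline would emit an image or token metadata;
    # a grid of palette indices stands in for that output here.
    return [[rng.randrange(8) for _ in range(size)] for _ in range(size)]

art_a = mint_generative_art("alice's seed")
art_b = mint_generative_art("alice's seed")  # same seed, same artwork
art_c = mint_generative_art("bob's seed")    # different seed, (almost certainly) different artwork
```

Whether a process like this involves a plain algorithm or an AI model is invisible to the purchaser unless the programmer discloses it, which is the disclosure problem raised above.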

We are still determining how generative art NFTs interact with copyright law, but they do indicate that these global markets are developing their own solutions for monetizing digital art. Ultimately, these solutions might find expression in contract law or in some kind of sui generis rights/obligations law, perhaps based on physical possession of private keys pleaded as direct tort matters.

If these global NFT markets have enough demand, CC could help clarify that the copyright side of an NFT’s IP should be treated as a public good, regardless of how original and creative the I/O minting process is.

DAOs are another nascent technology that is impacting the development of AI. For instance, members of a DAO can band together and decide which AI-generated art has enough meaning to be worth selling as an NFT, with each member contributing creativity and originality by voting on various criteria. Moreover, all the members of a DAO could be accomplished artists in their own right, who perceive AI as another medium of expression and DAOs as a way to work in it with shared knowledge and skill in a craft.
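The voting step could be as simple as tallying member votes against a quorum. A minimal sketch; `dao_approves` and the majority rule are assumptions for illustration, not any particular DAO’s actual governance:

```python
def dao_approves(votes: dict[str, bool], quorum: float = 0.5) -> bool:
    """Return True if strictly more than `quorum` of members voted to
    mint the piece as an NFT. Illustrative only; real DAOs typically
    encode their governance rules in smart contracts."""
    yes = sum(1 for v in votes.values() if v)
    return yes > len(votes) * quorum

# Three members review a generative piece; two find it worth minting.
print(dao_approves({"ada": True, "bob": True, "cam": False}))  # True
```

Each vote is a small human creative judgment, which is one way collective human input could attach to an otherwise machine-generated work.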

Finally, an issue that could arise is that protected databases do not have to comply with copyright’s originality criteria. Web3 relies on “subgraph” technology to provide decentralized database solutions, which has interesting implications if AI becomes involved in generating and maintaining various subgraphs.

Examples

  • Dancing robots

  • “The Next Rembrandt”

  • GitHub Copilot
