Weekly Business Insights from Top Ten Business Magazines

Extractive summaries and key takeaways from the articles curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 359 | July 26-August 1, 2024 | Archive

“Copyright traps” could tell writers if an AI has scraped their work

By Melissa Heikkilä | MIT Technology Review | July 25, 2024

Extractive Summary of the Article

Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work so they can later detect whether it has been used in AI models. The idea is similar to traps that copyright holders have used throughout history, such as including fake locations on a map or fake words in a dictionary.

These AI copyright traps tap into one of the biggest fights in AI. A number of publishers and writers are in the middle of litigation against tech companies, claiming that their intellectual property has been scraped into AI training data sets without their permission. The New York Times’ ongoing case against OpenAI is probably the most high-profile of these. The code to generate and detect traps is currently available on GitHub, and the team also intends to build a tool that lets people generate and insert copyright traps themselves.
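The article does not walk through the mechanics, but the general idea can be sketched in a few lines of Python. The sketch below is illustrative only and is not the Imperial College team’s actual GitHub tooling: it generates a unique nonsense sequence, hides many copies of it in a page’s markup, and later flags the trap as likely memorized if a suspect model scores it with unusually low loss compared with control sequences it has never seen (a simple membership-inference test). Every function name, the hiding technique, and the threshold are assumptions made for illustration.

import random
import statistics
import string

def generate_trap(num_words: int = 40, seed: int = 1234) -> str:
    """Build a unique nonsense sentence; its randomness makes it
    unlikely to appear anywhere except pages that planted it."""
    rng = random.Random(seed)
    words = ["".join(rng.choices(string.ascii_lowercase, k=rng.randint(4, 9)))
             for _ in range(num_words)]
    return " ".join(words) + "."

def embed_trap_html(page_html: str, trap: str, copies: int = 100) -> str:
    """Append hidden copies of the trap so human readers never see it,
    while a scraper that strips markup still ingests the text.
    (Hypothetical hiding technique; real tooling may differ.)"""
    hidden = f'<span style="display:none">{trap}</span>'
    return page_html + "\n" + "\n".join([hidden] * copies)

def looks_memorized(trap_loss: float, control_losses: list[float]) -> bool:
    """Membership-inference check: if the model assigns the trap a much
    lower loss than comparable unseen sequences, the trap was plausibly
    in its training data. The z-score cutoff is hypothetical."""
    mu = statistics.mean(control_losses)
    sd = statistics.pstdev(control_losses) or 1e-9
    return (trap_loss - mu) / sd < -2.0

if __name__ == "__main__":
    trap = generate_trap()
    page = embed_trap_html("<p>My original article...</p>", trap)
    print(trap[:60], "...")
    # trap_loss and control_losses would come from querying the model
    # under suspicion, e.g. average per-token negative log-likelihood.

In practice, the losses would come from querying the suspect model for per-token log-likelihoods, and duplication matters: a sequence seen only once in a vast training set is much harder to detect than one repeated many times.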

The traps are not foolproof. A motivated attacker who knows about a trap can remove it. Whether an attacker can find and remove every trap is an open question, and that is likely to become a cat-and-mouse game. Even so, the more traps a text contains, the harder it becomes to remove all of them without significant engineering resources.

It’s important to keep in mind that copyright traps may be only a stopgap solution, or merely an inconvenience to model trainers. One cannot release a piece of content containing a trap with any assurance that the trap will remain effective forever.

2 key takeaways from the article 

  1. Since the beginning of the generative AI boom, content creators have argued that their work has been scraped into AI models without their consent. But until now, it has been difficult to know whether specific text has actually been used in a training data set. Now they have a new way to prove it: “copyright traps” developed by a team at Imperial College London, pieces of hidden text that allow writers and publishers to subtly mark their work so they can later detect whether it has been used in AI models. The idea is similar to traps that copyright holders have used throughout history, such as including fake locations on a map or fake words in a dictionary.
  2. It’s important to keep in mind that copyright traps may be only a stopgap solution, or merely an inconvenience to model trainers. One cannot release a piece of content containing a trap with any assurance that the trap will remain effective forever.

Full Article

(Copyright lies with the publisher)

Topics: Technology, Artificial Intelligence, Intellectual Property Rights, Creativity, Publications
