The Biden White House recently issued its latest executive order designed to establish a guiding framework for generative artificial intelligence development — including content authentication and using digital watermarks to indicate when digital assets made by the Federal government are computer generated. Here’s how it and similar copy protection technologies might help content creators more securely authenticate their online works in an age of generative AI misinformation.
A quick history of watermarking
Analog watermarking techniques were first developed in Italy in 1282. Papermakers would implant thin wires into the paper mold, creating almost imperceptibly thinner areas of the sheet that became apparent when the paper was held up to a light. Not only were analog watermarks used to authenticate where and how a company’s products were produced, the marks could also be leveraged to pass concealed, encoded messages. By the 18th century, the technology had spread to government use as a means of preventing currency counterfeiting. Color watermark techniques, which sandwich dyed materials between layers of paper, were developed around the same period.
Though the term “digital watermarking” wasn’t coined until 1992, the technology behind it was first patented by the Muzak Corporation in 1954. The system it built, and used until the company was sold in the 1980s, identified music owned by Muzak using a “notch filter” to block the audio signal at 1 kHz in specific bursts, like Morse code, to store identification information.
Advertisement monitoring and audience measurement firms like the Nielsen Company have long used watermarking techniques to tag the audio tracks of television shows to track and understand what American households are watching. These steganographic methods have even made their way into the modern Blu-ray standard (the Cinavia system), as well as government applications like authenticating driver’s licenses, national currencies and other sensitive documents. The Digimarc Corporation, for example, has developed a watermark for packaging that prints a product’s barcode nearly invisibly all over the box, allowing any digital scanner in line of sight to read it. It has also been used in applications ranging from brand anti-counterfeiting to more efficient material recycling.
The here and now
Modern digital watermarking operates on the same principles, imperceptibly embedding additional information into a piece of content (be it image, video or audio) using special encoding software. These watermarks are easily read by machines but largely invisible to human users. The practice differs from existing cryptographic protections like product keys or software protection dongles in that watermarks don’t actively prevent the unauthorized alteration or duplication of a piece of content; rather, they provide a record of where the content originated or who the copyright holder is.
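To make that concrete, here’s a minimal sketch of the idea, assuming nothing about any vendor’s actual scheme: the simplest possible digital watermark hides a message in the least significant bit of each pixel, a change of at most one brightness level that a viewer won’t notice but software can read back out. The function names below are ours, purely for illustration.

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bit of each pixel value.

    Changing the LSB alters each 0-255 value by at most 1, which is
    imperceptible to the eye but trivially readable by a machine.
    """
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = pixels.flatten()  # flatten() copies, so the original is untouched
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

# Demo on a synthetic 64x64 grayscale "image"
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image, b"(c) 2023 Jane Doe")
assert extract_watermark(marked, 17) == b"(c) 2023 Jane Doe"
print(np.abs(marked.astype(int) - image.astype(int)).max())  # at most 1
```

Note how fragile this particular scheme is: re-encoding to JPEG or resizing scrambles the low bits, which is why commercial watermarks instead spread the signal redundantly across the whole image in ways designed to survive those transformations.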
The system is not perfect, however. “There is nothing, literally nothing, to protect copyrighted works from being trained on [by generative AI models], except the unverifiable, unenforceable word of AI companies,” Dr. Ben Zhao, Neubauer Professor of Computer Science at the University of Chicago, told Engadget via email.
“There are no existing cryptographic or regulatory methods to protect copyrighted works — none,” he said. “Opt-out lists have been made a mockery by stability.ai (they changed the model name to SDXL to ignore everyone who signed up to opt out of SD 3.0), and Facebook/Meta, who responded to users on their recent opt-out list with a message that said ‘you cannot prove you were already trained into our model, therefore you cannot opt out.’”
Zhao says that while the White House’s executive order is “ambitious and covers tremendous ground,” plans laid out to date by the White House have lacked much in the way of “technical details on how it would actually achieve the goals it set.”
He notes that “there are plenty of companies who are under no regulatory or legal pressure to bother watermarking their genAI output. Voluntary measures do not work in an adversarial setting where the stakeholders are incentivized to avoid or bypass regulations and oversight.”
“Like it or not, commercial companies are designed to make money, and it is in their best interests to avoid regulations,” he added.
We could also very easily see the next presidential administration come into office and dismantle Biden’s executive order and all of the federal infrastructure that went into implementing it, since an executive order lacks the constitutional standing of congressional legislation. But don’t count on the House and Senate doing anything about the issue either.
“Congress is deeply polarized and even dysfunctional to the extent that it is very unlikely to produce any meaningful AI legislation in the near future,” Anu Bradford, a law professor at Columbia University, told MIT Tech Review. So far, enforcement mechanisms for these watermarking schemes have been generally limited to pinky swears by the industry’s major players.
How Content Credentials work
With the wheels of government turning so slowly, industry alternatives are proving necessary. Microsoft, the New York Times, CBC/Radio-Canada and the BBC began Project Origin in 2019 to protect the integrity of content, regardless of the platform on which it’s consumed. At the same time, Adobe and its partners launched the Content Authenticity Initiative (CAI), approaching the issue from the creator’s perspective. Eventually CAI and Project Origin combined their efforts to create the Coalition for Content Provenance and Authenticity (C2PA). From this coalition of coalitions came Content Credentials (“CR” for short), which Adobe announced at its Max event in 2021.
CR attaches additional information about an image, in the form of a cryptographically secure manifest, whenever it is exported or downloaded. The manifest pulls data from the image or video header — the creator’s information, where it was taken, when it was taken, what device took it, whether generative AI systems like DALL-E or Stable Diffusion were used and what edits have been made since — allowing websites to check that information against provenance claims made in the manifest. When combined with watermarking technology, the result is a unique authentication method that, thanks to the cryptographic file signing, cannot be easily stripped when uploaded to social media sites the way EXIF data and other metadata (i.e. the technical details automatically added by the software or device that captured the image) can be. Not unlike blockchain technology!
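The real C2PA manifest is a binary structure signed with X.509 certificates, but the core mechanism is simple to sketch: bind a hash of the content to the provenance claims, then sign the bundle so any tampering is detectable. The sketch below is our simplification, substituting a shared-secret HMAC from Python’s standard library for the actual standard’s public-key signatures.

```python
import hashlib, hmac, json

SIGNING_KEY = b"demo-key"  # real C2PA uses X.509 certificate chains, not a shared secret

def make_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bundle provenance claims with a hash of the content and sign the result.

    Any change to the image or the claims invalidates the signature,
    which is what lets a viewer trust (or distrust) the manifest.
    """
    payload = {"content_hash": hashlib.sha256(image_bytes).hexdigest(), "claims": claims}
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    body = json.dumps(
        {"content_hash": manifest["content_hash"], "claims": manifest["claims"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(image_bytes).hexdigest() == manifest["content_hash"]
    )

photo = b"\x89PNG...raw image bytes..."
manifest = make_manifest(photo, {"creator": "Jane Doe", "tool": "Firefly", "ai_generated": True})
print(verify_manifest(photo, manifest))                # True
print(verify_manifest(photo + b"tampered", manifest))  # False
```

The real standard also chains manifests so each edit can reference the version before it; this sketch omits that history-keeping entirely.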
Metadata doesn’t typically survive common workflows as content is shuffled around the internet because, Digimarc Chief Product Officer Ken Sickles explained to Engadget, many online systems weren’t built to support or read it, and so simply ignore the data.
“The analogy that we’ve used in the past is one of an envelope,” Digimarc Chief Technology Officer Tony Rodriguez told Engadget. Like an envelope, the valuable content that you want to send is placed inside “and that’s where the watermark sits. It’s actually part of the pixels, the audio, of whatever that media is. Metadata, all that other information, is being written on the outside of the envelope.”
Should someone manage to strip the credentials (it turns out that’s not difficult: just screenshot the image and crop out the icon), they can be reattached through Adobe’s Verify tool, which runs machine vision algorithms against an uploaded image to find matches in its repository. If the uploaded image can be identified, the credentials get reapplied. A user who encounters the image in the wild can check its credentials by clicking on the CR icon to pull up the full manifest, verify the information for themselves and make a more informed decision about which online content to trust.
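Adobe hasn’t published the matching algorithm behind Verify, but tools in this class typically rely on perceptual hashing: a fingerprint that stays nearly identical when an image is resized or recompressed, unlike a cryptographic hash, which changes completely at the first altered bit. Here is a minimal average-hash sketch, purely illustrative of the category rather than of Verify itself.

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Downsample to size x size tiles and threshold on mean brightness.

    Images that look alike produce nearly identical bit patterns even
    after resizing or recompression, unlike a cryptographic hash.
    """
    h, w = gray.shape
    small = gray[: h - h % size, : w - w % size].reshape(
        size, h // size, size, w // size
    ).mean(axis=(1, 3))  # average each tile into one value
    return (small > small.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(256, 256)).astype(float)
screenshot = original + rng.normal(0, 4, original.shape)  # mild noise, like recompression
unrelated = rng.integers(0, 256, size=(256, 256)).astype(float)

print(hamming(average_hash(original), average_hash(screenshot)))  # small: likely a match
print(hamming(average_hash(original), average_hash(unrelated)))   # large: different image
```

Production systems layer several such fingerprints to better survive crops and rotation; the point is only that similar-looking images land near one another, letting a repository re-associate a stripped image with its manifest.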
Sickles envisions these authentication systems operating in coordinated layers, like a home security system that pairs locks and deadbolts with cameras and motion sensors to increase its coverage. “That’s the beauty of Content Credentials and watermarks together,” Sickles said. Together they become “a much, much stronger system as a basis for authenticity and understanding provenance around an image” than either would be individually. Digimarc freely distributes its watermark detection tool to generative AI developers, and is integrating the Content Credentials standard into its existing Validate online copy protection platform.
In practice, we’re already seeing the standard incorporated into physical commercial products like the Leica M11-P, which automatically affixes a CR credential to images as they’re taken. The New York Times has explored its use in journalistic endeavors, Reuters employed it for its ambitious 76 Days feature and Microsoft has added it to Bing Image Creator and the Bing AI chatbot as well. Sony is reportedly working to incorporate the standard into its Alpha 9 III digital cameras, with enabling firmware updates for the Alpha 1 and Alpha 7S III models arriving in 2024. CR is also available in Adobe’s expansive suite of photo and video editing tools, including Illustrator, Adobe Express, Stock and Behance. The company’s own generative AI, Firefly, will automatically include non-personally identifiable information in a CR for some features like generative fill (essentially noting that the generative feature was used, but not by whom) but will otherwise be opt-in.
That said, the C2PA standard and front-end Content Credentials are barely out of development and currently exceedingly difficult to find on social media. “I think it really comes down to the wide-scale adoption of these technologies and where it’s adopted; both from a perspective of attaching the content credentials and inserting the watermark to link them,” Sickles said.
Nightshade: The CR alternative that’s deadly to databases
Some security researchers have had enough of waiting around for laws to be written or industry standards to take root, and have instead taken copy protection into their own hands. Teams from the University of Chicago’s SAND Lab, for example, have developed a pair of downright nasty copy protection systems for use specifically against generative AIs.
Zhao and his team have developed Glaze, a system for creators that disrupts a generative AI’s style mimicry (by exploiting the concept of adversarial examples). It can change the pixels in a given artwork in a way that is undetectable by the human eye but appears radically different to a machine vision system. When a generative AI system is trained on these “glazed” images, it becomes unable to exactly replicate the intended style of art — cubism becomes cartoony, abstract styles are transformed into anime. This could prove a boon especially to well-known and often-imitated artists, keeping their branded artistic styles commercially safe.
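Glaze’s actual optimization runs against the deep feature extractors inside text-to-image models, but the underlying trick of adversarial examples can be shown with a toy stand-in: nudge the pixels within an invisibility budget so the image’s “style embedding” lands somewhere else entirely. Everything below (the linear extractor, the decoy style) is our simplified stand-in, not Glaze’s method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "feature extractor": a random linear map from pixels to a
# style-embedding space. Real systems use deep networks, but the idea
# -- tiny pixel changes, large feature changes -- is the same.
W = rng.normal(size=(32, 64 * 64))

def features(img: np.ndarray) -> np.ndarray:
    return W @ img.flatten()

artwork = rng.uniform(0, 1, size=(64, 64))  # the artist's image
decoy_style = rng.normal(size=32)           # embedding of an unrelated style

# Gradient descent on the pixels: pull the embedding toward the decoy
# while an epsilon clamp keeps the change invisible to a human.
cloaked, eps, lr = artwork.copy(), 0.03, 1e-5
for _ in range(200):
    grad = W.T @ (features(cloaked) - decoy_style)  # gradient of squared distance
    cloaked -= lr * grad.reshape(64, 64)
    cloaked = np.clip(cloaked, artwork - eps, artwork + eps)  # imperceptibility budget

print(np.abs(cloaked - artwork).max())  # <= 0.03: looks identical to a person
d = lambda a, b: float(np.linalg.norm(features(a) - b))
print(d(artwork, decoy_style), d(cloaked, decoy_style))  # embedding moved toward decoy
```

A model training on the cloaked image sees the decoy style’s features, not the artist’s, which is why mimicry attempts come out warped.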
While Glaze focuses on preventative actions to deflect the efforts of illicit data scrapers, SAND Lab’s newest tool is wholeheartedly punitive. Dubbed Nightshade, the system subtly changes the pixels of a given image, but instead of confusing the models trained on it as Glaze does, the poisoned image corrupts the training database it’s ingested into wholesale, forcing developers to go back through and manually remove each damaging image to resolve the issue — otherwise the system will simply retrain on the bad data and suffer the same problems again.
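The poisoning mechanism is easier to see with the pixel-perturbation step stripped away. In the toy below, a “model” learns a concept by averaging the features of images captioned with it; a small batch of samples whose features secretly belong to a different concept is enough to drag the learned concept off target, which is why developers must hunt down and remove each poisoned image rather than simply retrain. This is our illustrative analogy, not Nightshade’s actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy training set: each concept's images cluster around a true
# feature vector. A real model learns far richer associations, but the
# failure mode -- poisoned pairs dragging a concept off target -- is analogous.
true_dog, true_cat = rng.normal(size=16), rng.normal(size=16)
clean_dogs = true_dog + 0.1 * rng.normal(size=(500, 16))

# Nightshade-style poison: images whose *features* say "cat" but whose
# caption says "dog". (The real attack hides that shift in the pixels.)
poison = true_cat + 0.1 * rng.normal(size=(50, 16))

def learned_concept(samples: np.ndarray) -> np.ndarray:
    """Stand-in for training: the model's notion of 'dog' is the mean."""
    return samples.mean(axis=0)

clean_model = learned_concept(clean_dogs)
poisoned_model = learned_concept(np.vstack([clean_dogs, poison]))

dist = lambda a, b: float(np.linalg.norm(a - b))
print(dist(clean_model, true_dog))     # small: model draws dogs correctly
print(dist(poisoned_model, true_dog))  # larger: 'dog' has drifted toward 'cat'
print(dist(poisoned_model, true_cat) < dist(clean_model, true_cat))  # True
```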
The tool is meant as a “last resort” for content creators but cannot be used as a vector of attack. “This is the equivalent of putting hot sauce in your lunch because someone keeps stealing it out of the fridge,” Zhao argued.
Zhao has little sympathy for the owners of models that Nightshade damages. “The companies who intentionally bypass opt-out lists and do-not-scrape directives know what they are doing,” he said. “There is no ‘accidental’ download and training on data. It takes a lot of work and full intent to take someone’s content, download it and train on it.”