The White House released a statement today outlining commitments that several AI companies are making to curb the creation and distribution of image-based sexual abuse. The participating businesses have laid out the steps they are taking to prevent their platforms from being used to generate non-consensual intimate images (NCII) of adults and child sexual abuse material (CSAM).
Specifically, Adobe, Anthropic, Cohere, Common Crawl, Microsoft and OpenAI are participating. All of the aforementioned except Common Crawl also agreed they'd be:
- "incorporating feedback loops and iterative stress-testing strategies in their development processes, to guard against AI models outputting image-based sexual abuse"
- "removing nude images from AI training datasets" when appropriate
These are voluntary commitments, so today's announcement doesn't create any new actionable steps or consequences for failing to follow through on those promises. But it's still worth applauding a good-faith effort to tackle this serious problem. The notable absences from today's White House release are Apple, Amazon, Google and Meta.
Many big tech and AI companies have been making strides, separately from this federal effort, to make it easier for victims of NCII to stop the spread of deepfake images and videos. StopNCII has partnered with several companies on a comprehensive approach to scrubbing this content, while other businesses are rolling out proprietary tools for reporting AI-generated image-based sexual abuse on their platforms.
If you believe you've been the victim of non-consensual intimate image-sharing, you can open a case with StopNCII; if you're below the age of 18, you can file a report with NCMEC.