How can we address the risks of generative AI being misused to create Child Sexual Abuse Materials (CSAM)? Thorn’s latest case study as part of PAI’s Synthetic Media Framework tackles this critical challenge. They highlight:
🔹 Risks of CSAM in training data or fine-tuning by bad actors.
🔹 Harms beyond the content itself, such as victim revictimization and barriers to harm reduction.
🔹 Steps Builders and hosting sites can take to prevent misuse of generative AI models (see the sketch below).
Together, we can ensure generative AI tools are designed responsibly. Explore this case study and others: https://buff.ly/49Ebi5p #GenerativeAI #AI #SyntheticMedia
Partnership on AI’s Post
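One of the "steps Builders and hosting sites can take" is to refuse obviously abusive generation requests before they ever reach a model. The sketch below is purely illustrative and is not drawn from Thorn's case study: the pattern list, the ScreeningResult shape, and the allow/refuse flow are assumptions for this example, and real deployments combine trained safety classifiers, hash matching, and human review with a gate like this.

```python
# Purely illustrative request-screening gate; not Thorn's or PAI's actual method.
# BLOCKED_PATTERNS and the allow/refuse flow are assumptions made for this example.
import re
from dataclasses import dataclass

# Hypothetical policy: a prompt pairing a child-related term with sexual-content
# terms is refused before any image generation is attempted.
BLOCKED_PATTERNS = [
    re.compile(r"\b(child|minor|underage)\b.*\b(nude|sexual|explicit)\b", re.IGNORECASE),
    re.compile(r"\b(nude|sexual|explicit)\b.*\b(child|minor|underage)\b", re.IGNORECASE),
]

@dataclass
class ScreeningResult:
    allowed: bool
    reason: str

def screen_prompt(prompt: str) -> ScreeningResult:
    """Decide whether a text-to-image request may proceed under the example policy."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return ScreeningResult(allowed=False, reason="matched a blocked pattern")
    return ScreeningResult(allowed=True, reason="no blocked pattern matched")

if __name__ == "__main__":
    for prompt in ("a watercolour of a lighthouse at dusk", "explicit image of a minor"):
        result = screen_prompt(prompt)
        print(f"{prompt!r}: allowed={result.allowed} ({result.reason})")
```

The point is not the crude keyword matching; it is that refusal can happen at the request stage, before any harmful content exists.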
More Relevant Posts
-
Did you miss today's livestream discussion on proactively mitigating the misuse of #AI? Not to worry, you can catch the recording below. Thorn, All Tech Is Human, Google, OpenAI and Stability AI shared the new Safety by Design Generative AI Principles to prevent child sexual abuse. https://lnkd.in/g63qrxTY #safetybydesign #trustandsafety
Generative AI Principles to Prevent Child Sexual Abuse
-
Child predators are using AI to create sexual images of their favorite stars: "My body will never be mine again." Safety groups say they're increasingly finding chats about creating images based on past child sexual abuse material. Predators active on the dark web are increasingly using artificial intelligence to create sexually explicit images of children, fixat... https://lnkd.in/ehcZnfXe #AI #ML #Automation
Child predators are using AI to create sexual images of their favorite stars: "My body will never be mine again"
-
AI-generated images of child sexual abuse are a new threat that is being circulated and sold online. By joining IWF, you can access our Image Hash List to stop your AI technology and tools being trained on CSAM and exploited to produce this material. Read more about our Hash List service: https://lnkd.in/dnDaUeYS. Partner with the IWF and take a zero-tolerance approach to your AI technology and services being used to produce images of child sexual abuse. Visit iwf.org.uk/aiproviders to find out more and get in touch! #aicsam #imagegeneration #childexploitation #ai
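For a rough sense of how a hash list fits into a training pipeline, the sketch below filters a local image folder against a list of known digests before the folder is used as training data. The file name hash_list.txt, the use of plain SHA-256, and the folder layout are assumptions for this example, not the IWF's actual integration format, which is available to members and built around hashes of known CSAM.

```python
# Illustrative only: exclude any image whose digest appears on a hash list
# before the folder is used as training data. "hash_list.txt" and SHA-256 are
# assumptions for this example, not the IWF's actual service format.
import hashlib
from pathlib import Path

def load_hash_list(path: str) -> set[str]:
    """Read one lowercase hex digest per line into a set for fast lookups."""
    return {line.strip().lower() for line in Path(path).read_text().splitlines() if line.strip()}

def file_digest(path: Path) -> str:
    """Hash a file in chunks so large images do not have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def filter_training_images(image_dir: str, blocklist: set[str]) -> list[Path]:
    """Return only the files whose digests are NOT on the blocklist."""
    return [
        p for p in Path(image_dir).rglob("*")
        if p.is_file() and file_digest(p) not in blocklist
    ]

if __name__ == "__main__":
    blocklist = load_hash_list("hash_list.txt")                   # hypothetical local hash list
    kept = filter_training_images("training_images/", blocklist)  # hypothetical dataset folder
    print(f"{len(kept)} images passed the hash check")
```

Exact cryptographic digests only catch byte-for-byte copies, which is one reason production systems rely on perceptual hashing and vetted hash lists rather than ad hoc scripts like this.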
-
In a new case study published with Partnership on AI, HAI scholars Riana Pfefferkorn and Caroline Meinhardt argue that direct disclosure mechanisms (e.g., content labels, watermarking) can’t fix all the harms of synthetic media. They highlight AI-generated Child Sexual Abuse Material (AIG-CSAM) as an example of synthetic content that is inherently harmful and circulated by bad actors who are generally not incentivized to apply direct disclosure techniques. Even when labels or watermarks are built into a model’s output, bad actors may fine-tune models to circumvent disclosures. Nevertheless, direct disclosure can still help mitigate some of AIG-CSAM's harms. For example, knowing that no real children are depicted can help child protection organizations and law enforcement prioritize their resources. Ultimately, though, AI builders must supplement direct disclosure with intervention measures at the content creation stage. Read the case study: https://lnkd.in/g8p477aK
Direct disclosure has limited impact on AI-generated Child Sexual Abuse Material — an analysis by researchers at Stanford HAI - Partnership on AI
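To make the circumvention point concrete, here is a minimal sketch of the weakest form of direct disclosure: a plain-text label stored in PNG metadata with Pillow. The ai_generated and generator keys are assumptions for this example, not part of any provenance standard; the takeaway is that simply re-encoding the file drops the label, which is why disclosure alone cannot restrain a motivated bad actor.

```python
# Minimal sketch of a metadata-based disclosure label using Pillow (PIL).
# The "ai_generated" / "generator" keys are assumptions for this example,
# not part of any provenance standard such as C2PA.
from PIL import Image, PngImagePlugin

def save_with_disclosure(image: Image.Image, path: str) -> None:
    """Write the image with a machine-readable text chunk declaring it AI-generated."""
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_generated", "true")
    info.add_text("generator", "example-model-v1")   # hypothetical model name
    image.save(path, pnginfo=info)

def read_disclosure(path: str) -> dict:
    """Return whatever text metadata survives; an empty dict means no label."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}) or {})

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), "gray")          # stand-in for a model output
    save_with_disclosure(img, "labelled.png")
    print(read_disclosure("labelled.png"))            # {'ai_generated': 'true', 'generator': ...}

    # Re-saving the file without passing the metadata silently drops the label,
    # illustrating how little effort circumvention takes.
    Image.open("labelled.png").save("stripped.png")
    print(read_disclosure("stripped.png"))            # {}
```

Watermarks embedded in the pixels themselves are harder to strip than metadata, but as the case study notes, bad actors can fine-tune open models so that disclosures are never emitted in the first place.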
-
Hot off the presses, my colleague Caroline Meinhardt and I have a new publication exploring the technical and policy challenges of AI-generated CSAM. Please read and share!
-
Such an important issue, and one all of us, whether we work in the sector or not, should be well informed about and clear on. Please do read our blog! Nici
📣 New blog post: what is #AI generated child sexual abuse material? And how to use your existing professional skills to respond to protect children. Artificial intelligence tools are developing at an astonishing speed, and today anyone with access to the internet can easily produce convincing images of almost anything. Sadly, #AI is also already being used for harmful and illegal purposes - including in the creation of sexual imagery of children. To help professionals, our Assistant Director Lisa McCrindle sheds light on this technology in a new post for the CSA Centre blog. Lisa covers many of the questions you may have about artificially generated child sexual abuse material, including what it is, whether it is illegal, and how to respond to concerns involving the children and young people, and adults, you work with. Read the blog post on the CSA Centre website today.
-
AI-generated child sexual abuse material is a rapidly growing area of concern. Last month, the Internet Watch Foundation recorded 3,000 images of child sexual abuse generated by AI technology. This number is only likely to rise, given how easily AI tools can now be accessed and utilised. This blog by the CSA Centre is really useful in helping us understand this issue and how we might respond to this emerging challenge.
-
🤖 How is #AI being abused to create child sexual abuse imagery? Child sexual abuse imagery generated using artificial intelligence is a growing area of concern. A key finding from our research in this field is that most AI CSAM is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM. Read the full report and recommendations at iwf.org.uk/aireport.
-
The impact of AI-generated child sexual abuse images on victims and survivors cannot be overstated. When images or videos of child sexual abuse are created, the permanency and lack of control over who sees them create significant, long-term impacts for those with lived experience. Survivors are revictimised every time these images are viewed, and this is no different with AI-generated images. We urgently need big tech and government to take a joint approach to regulating the use of AI tools. Read the full report from Internet Watch Foundation (IWF) here: https://lnkd.in/d6x8TWA2