Podcast Episode 177

Protecting Your Work From AI Training Without Killing Your Visibility

Host: Meredith's Husband

How I Recommend Photographers Protect Their Work From AI Training

Author: Meredith's Husband
Many photographers and visual artists want to prevent their work from being used to train AI models. At the same time, they still want their websites and businesses to appear in search results, including AI-powered search tools. I recommend focusing on platform-level opt-outs, not technical website blocks. This approach helps reduce future AI training use while preserving discovery and visibility. This guide explains what opting out actually does, what I recommend avoiding, and where creators can submit opt-out requests today.

Checklist: How I Recommend Opting Out of AI Training

You do not need to complete every step at once. Even completing a few items helps establish clear non-consent. I recommend starting with the platforms and tools you personally use.

AI Tools and Chat Platforms

  • OpenAI (ChatGPT, DALL·E)
    Submit a privacy request to opt out of future training:
    https://privacy.openai.com/

  • ChatGPT Account Setting
    Open ChatGPT → Settings → Data Controls → Turn off “Improve the model for everyone”

  • Google Gemini
    Turn off Gemini Apps Activity to limit future training use:
    https://myactivity.google.com/product/gemini

  • Anthropic (Claude)
    Open Claude → Settings → Turn off “Help improve Claude”


Social and Creative Platforms

  • Meta (Instagram, Facebook)
    Submit an AI training objection through Meta’s Privacy Center, where available in your region:
    https://www.facebook.com/privacy/center/

  • DeviantArt
    Review your account settings and artwork-level AI options:
    https://www.deviantart.com/

What I Do Not Recommend

  • Blocking AI bots via robots.txt
  • Site-wide crawl or indexing restrictions
  • Relying on NoAI or NoImageAI meta tags site-wide


Ongoing Awareness


Revisit these settings periodically, as platform policies change frequently.

Opting out does not remove past training data and does not guarantee enforcement. It does create a documented record of non-consent that may matter as regulations evolve.


Disclaimer

This content is informational only and does not constitute legal advice. AI policies, platform tools, and enforcement practices change frequently. Always review official platform documentation and consult a qualified professional if you are making decisions that affect your business or intellectual property.

What Does “Opting Out of AI Training” Mean?

Opting out of AI training means requesting that a company or platform not use your content to train future AI models. Most opt-out systems apply only to future training and do not affect models that have already been trained.

  • Opting out does not remove content from existing AI models.
  • Opting out does not guarantee enforcement.
  • Opting out does create a documented record of non-consent.


What Is the Difference Between AI Training and AI Search?

AI training is the process of teaching models patterns using large datasets. AI search and discovery is the process of surfacing websites, creators, or businesses in response to user questions.

Most photographers want to opt out of training while remaining visible in discovery. This distinction is critical when deciding which opt-out methods to use.


Why I Do Not Recommend Blocking AI Bots Using robots.txt

For photographers and small creative businesses, I do not recommend blocking AI bots with robots.txt. The robots.txt file controls whether bots can crawl a website at all, not how crawled content is used.

Blocking AI crawlers can prevent your site from being indexed, referenced, or surfaced in AI-powered search results, which reduces visibility, discovery, and long-term findability.

Because AI search is increasingly used to find local services, blocking crawlers often causes more harm than protection. My recommendation is to keep websites crawlable and focus on platform-level controls instead.
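For reference, a site-wide block of this kind typically looks like the robots.txt rules below. The user-agent tokens shown (GPTBot for OpenAI, Google-Extended for Google AI training, ClaudeBot for Anthropic) are published by those vendors, but this is exactly the pattern I recommend avoiding site-wide.

```
# Site-wide AI crawler block -- shown for illustration only.
# This guide recommends AGAINST applying rules like these site-wide,
# because they can also remove your site from AI-powered discovery.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

Note that `Disallow: /` applies to the entire site; once in place, the affected systems generally stop crawling everything, not just your images.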


What I Recommend Instead

I recommend using the platform-specific privacy and training opt-out tools provided by AI companies and social platforms. These opt-outs affect future training without blocking discovery or indexing.

  • They apply to future training only.
  • They do not block search visibility.
  • They do not require website changes.


OpenAI (ChatGPT, DALL·E, and Related Tools)

OpenAI does not provide a single, universal guarantee that content already used in model training can be removed. However, OpenAI does provide multiple mechanisms that allow individuals and website owners to request that their content not be used for future AI training.

These controls apply to future training only and do not affect models that have already been trained.


What This Means in Practice

OpenAI offers two primary ways to limit how your content is used: (1) a privacy request portal where you can formally request that OpenAI not use your content for training, and (2) account-level controls within ChatGPT that prevent new conversations from being used to improve models. Neither option is retroactive, but both establish clear, documented non-consent.


OpenAI Privacy Request Portal (Primary Opt-Out)

OpenAI privacy request portal:
https://privacy.openai.com/

This portal allows you to submit a request asking OpenAI not to use your content for model training, and to submit other privacy requests such as data access or deletion. It also creates a documented record of objection.


ChatGPT Account-Level Training Controls

If you use ChatGPT, OpenAI provides an in-product setting to stop your conversations from being used for training. To disable training in ChatGPT: open ChatGPT, go to Settings, select Data Controls, and turn off “Improve the model for everyone.” This setting prevents new conversations from being used for training.

OpenAI Privacy Policy (Background and Scope)

OpenAI privacy policy:
https://openai.com/policies/row-privacy-policy/

Important Limitations to Understand

  • These opt-out mechanisms do not remove data from models already trained.
  • They apply only to future training by OpenAI systems.
  • They do not guarantee enforcement against all downstream uses.


Summary for Photographers and Creators

OpenAI does not offer a retroactive “delete my work from all models” option. What it does offer is a formal privacy request system, account-level controls to prevent future training, and transparency around its policies. I recommend using both the privacy portal and account controls if you use OpenAI tools.



Google Gemini (Google AI Tools and Assistants)

Google does not provide a single, universal opt-out form that guarantees your content will never be used to train its AI models. However, Google does provide controls that allow users to limit how their interactions with Gemini and related AI tools are stored and used.

These controls primarily affect future data use and do not retroactively remove content from models that have already been trained.


What This Means in Practice

Google’s approach to AI training is tied to broader Google Account activity controls. Rather than a single opt-out form, users manage AI data usage by adjusting Gemini-specific activity settings and Google Account activity controls. These settings determine whether Gemini conversations are saved and used to improve Google’s AI systems.


Gemini Privacy and Data Use Overview

Google Gemini privacy documentation:
https://support.google.com/gemini/answer/13594961

Turn Off Gemini Apps Activity (Primary Control)

Gemini Apps Activity controls:
https://myactivity.google.com/product/gemini

Turning off Gemini Apps Activity prevents future Gemini conversations from being saved to your Google account or used to improve models, though it may reduce personalization or history-based features.


Google Privacy Policy (Broader Context)

Google privacy policy:
https://policies.google.com/privacy

Important Limitations to Understand

  • These controls do not remove data from models already trained.
  • They primarily affect how future conversations are stored and reviewed.
  • They do not guarantee exclusion from all internal research or evaluation processes.


Summary for Photographers and Creators

Google does not offer a single “opt out of AI training” button for Gemini. What it does offer is Gemini-specific activity controls and documentation of AI data handling. I recommend reviewing Gemini’s privacy documentation and disabling Gemini Apps Activity if you want to reduce training use of your interactions.



Meta Platforms (Instagram, Facebook, and WhatsApp)

Meta does not currently have a simple, universal “opt-out button” that automatically excludes all your posts and data from being used to train its AI models worldwide. However, Meta does provide a way to submit objections to the use of your personal information for AI purposes through its Privacy Center in certain regions under applicable data protection laws.


What This Means in Practice

In regions with strong privacy protections, such as the European Union and the United Kingdom, Meta offers a formal objection process tied to data protection rights. In other regions, including much of the United States, this option may be limited or unavailable.

The objection process generally involves visiting Meta’s Privacy Center, navigating to sections related to “AI at Meta” or “Generative AI,” and submitting an objection form where available.


Meta Privacy Center (Starting Point)

https://www.facebook.com/privacy/center/

From this page, look for information related to AI usage, generative AI systems, or data use for AI training.


Regional Limitations to Be Aware Of

  • Some users will not see an objection form.
  • Submission does not guarantee enforcement.
  • Outcomes depend on jurisdiction.


Even where enforcement is unclear, submitting an objection creates a documented record of non-consent, which may matter as regulations evolve.


Meta Form for Third-Party Data Used in AI

Meta “Third Party Information Used for AI” contact form:
https://www.facebook.com/help/contact/510058597920541

This form allows you to request a review if Meta’s AI systems reference or surface information originating from external sources.


Summary for Photographers and Creators

Meta does not offer a single global opt-out switch for AI training. What it does offer is a privacy-based objection process in some regions and a review mechanism for third-party data used in AI outputs. I recommend submitting an objection where available and treating this as one component of a broader, platform-level opt-out strategy.



Anthropic (Claude AI)

Anthropic does not provide a universal external opt-out form that retroactively removes content from AI models. However, Anthropic does provide user-level controls that allow individuals to prevent their conversations with Claude from being used for future model training.

These controls apply to future interactions only and do not affect models that have already been trained.


What This Means in Practice

Anthropic’s approach is managed inside the Claude interface. By adjusting a single setting, you can prevent your conversations from being used to improve or train Anthropic’s AI models going forward.


Claude Account Training Controls (Primary Opt-Out)

How to opt out of training in Claude: open Claude, go to Settings, locate the option labeled “Help improve Claude,” and turn it off. When disabled, your future conversations should not be used to train or improve Claude models.


Anthropic Privacy and Data Use Documentation

Anthropic privacy policy:
https://www.anthropic.com/privacy

Additional Context and Reporting

Independent reporting on Anthropic’s training controls:
https://www.theverge.com/2024/4/3/24119655/anthropic-privacy-claude-ai-training-data

Important Limitations to Understand

  • These controls do not remove data from models already trained.
  • They apply only to future conversations.
  • They do not affect content Anthropic may obtain from other licensed or public datasets.


Summary for Photographers and Creators

Anthropic does not offer a retroactive AI training removal process. What it does offer is a clear, in-product opt-out for future training and transparent privacy documentation. I recommend disabling the “Help improve Claude” setting if you use Claude and want to prevent your conversations from being used for training.



DeviantArt (Artist Community Platform)

DeviantArt has historically provided creators with tools to limit or opt out of AI-related uses of their artwork on the platform. However, these controls have changed over time, and availability may vary depending on current platform policies.

DeviantArt’s approach is platform-specific. Any opt-out or tagging options apply only to content hosted on DeviantArt and do not affect AI training that occurs elsewhere.


What This Means in Practice

DeviantArt has been more explicit than many platforms about acknowledging artist concerns related to AI training. At various points, the platform has offered account-level AI data preferences, artwork-level “NoAI” tags, and settings related to AI datasets. These tools signal non-consent within DeviantArt’s ecosystem, but do not guarantee enforcement outside the platform.


DeviantArt Account and Artwork Controls

Start at DeviantArt and navigate to your account settings and individual artwork options to see what AI-related controls are currently available:
https://www.deviantart.com/

Look for AI or data usage settings, options related to AI datasets, and artwork-level tags or permissions that reference AI use. Because these settings have changed over time, treat DeviantArt’s current interface as the authoritative source.


Community Reference and Historical Guide

https://www.deviantart.com/fireytika/journal/How-to-Opt-out-of-AI-data-assets-936608338

DeviantArt Policy and Terms (Background)

https://www.deviantart.com/about/policy/service

Important Limitations to Understand

  • These controls apply only to content hosted on DeviantArt.
  • They do not affect AI models trained elsewhere.
  • They may change as platform policies evolve.
  • They do not retroactively remove artwork from datasets already created.


Summary for Photographers and Visual Artists

DeviantArt does not offer a universal, permanent opt-out that applies across the AI industry. What it can offer is platform-specific AI controls and artwork-level permission signaling. I recommend reviewing your account settings and artwork permissions regularly and using any available AI opt-out or NoAI options.



Are NoAI Meta Tags Recommended?

I do not recommend using NoAI or NoImageAI meta tags site-wide. These tags are not standardized and may be interpreted inconsistently. In some cases, they may reduce indexing or retrieval.

If used at all, they should be limited to image delivery endpoints rather than service pages or core website content. For most photographers, platform-level opt-outs are safer.
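If you do choose to experiment with these tags on image endpoints, the markup looks like the snippet below. Keep in mind that `noai` and `noimageai` are non-standard directives (popularized by DeviantArt) that crawlers honor only voluntarily; this is shown for illustration, not as a recommendation.

```html
<!-- Non-standard directives; AI crawlers honor these voluntarily, if at all. -->
<meta name="robots" content="noai, noimageai">
```

For image files served without an HTML page, the same signal can be sent as an HTTP response header (`X-Robots-Tag: noai, noimageai`), which keeps the directive on image delivery endpoints and off your core service pages.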


Why I Still Recommend Opting Out (Even Though It’s Imperfect)

Opting out does not guarantee compliance today. However, I still recommend doing it because it establishes explicit non-consent, creates a documented record of intent, helps influence future regulation, and contributes to changing platform defaults.

Opting out is not about stopping AI. It is about refusing silent consent.


Organizations I Recommend Following for AI Regulation and Artist Protection


Meredith’s Husband is an SEO consultant with over 20 years of experience helping small businesses grow through clear, practical search strategies. He hosts Meredith’s Husband: SEO for People Who Don’t Like SEO alongside Meredith, a professional photographer, where they break down SEO and AI visibility using real-world examples from working businesses.
- Chris Dawkins, SEO consultant since 2002