For those who use AI/LLMs that retrain on your input, I assume you realize this commoditizes your intellectual work? It effectively makes use of it the same way they already used copyrighted intellectual property. This is effectively the same as the commons appropriations made for railroad development, the reinterpretation of fair use, etc.
I don't understand why you would opt in to share your data. Is it because you believe it would help improve the model and you would benefit from it? Or something altruistic?
I'd assume the layman user, already suffering from cookie pop-up fatigue, won't pay much attention to these privacy toggles.
I always assumed the folks who intentionally do this either work for the company, are associated with the company, or are in some way part of a QA/pilot user group.
I think it's just a general lack of awareness of the effect, or in many instances alternate economic incentives, like academics who want to commoditize their intellectual output across all available distribution channels. Tyler Cowen, for example. The AI companies are in a race to the bottom.
At least in the Settings pane, the slider is kinda ambiguous as to whether you're opted in or not.
https://postimg.cc/2V7mM77C vs https://postimg.cc/1nF1HGzh
How is it that in 2025 the UI is worse than what we had in Windows 98? A checkbox would be unambiguous here.
Thought so too. I assume it was checked by default, so I hit it once.
I don't love that this is opt-in by default, but I'm happy that they're at least offering an opt-out.
I dunno, I feel like we’ve seen this play out often enough - “option to opt out” is absolutely going to be the first feature slated for elimination on the product roadmap - “after all, only 5% of customers are using it.”
I agree with everything you’ve said, but also am happy that they’re forcing users both new and existing to make a choice to continue using Claude under the new terms, rather than silently starting to train for existing users who take no action.
Like you, I would have preferred that the UI for the choice didn’t make opt-in the default. But at least this is one of the rare times when a US company isn’t simply assuming consent from (or circumventing it for) existing users in countries without EU-style privacy laws who ignore the advance notification. So thank you Anthropic for that form of respect.
Were they not using the data from Claude Code for training before this change? After this change, will they not train on my code if I switch this off (Claude Pro sub)?
From their FAQ at the bottom of the linked page:
“Previous chats with no additional activity will not be used for model training.”
So, I guess they weren’t. You can switch off and keep that the case.
Eek. Opted in by default, with 5-year retention. I knew something like this was coming, but it's a hard pill to swallow.
I mean, it's great that there's at least an opt-out, but the whole appeal for me of Anthropic, and of giving them money, was that they explicitly didn't do anything with your data. Or at least that was the impression I had.
When you see this kind of thing it makes you wonder what else they'll try to do to get around your opt-out.