tl;dr: Use the robots meta element or HTTP header to say that the content of this page should not be used for machine learning, in case some actors make their search UA indistinguishable from their machine-learning efforts.
Any credible large-scale AI effort has to explain where it got its information from, typically via a known user agent. e.g. OpenAI was initially trained on Common Crawl.
Lots of other orgs use Common Crawl too, but not necessarily for AI.
The point is that mainstream, for lack of a better word, user agents will identify themselves.
Perhaps it’s important to differentiate here between known user agents and general scrapers.
Googlebot, Bingbot and any honourable UA will have a specific user agent and have a robots page telling you why they’re fetching a page. They pretty much always have a way to reverse DNS verify that their user agent is coming from a genuine IP.
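That verification is the forward-confirmed reverse DNS check Google and Bing document: reverse-resolve the connecting IP, check the hostname is in the crawler's domain, then forward-resolve the hostname and confirm it maps back to the same IP. A minimal sketch in Python (the resolver functions are injectable here purely so the logic can be tested without network access; the domain suffixes are Google's published ones):

```python
import socket

def verify_crawler_ip(ip, allowed_suffixes=(".googlebot.com", ".google.com"),
                      reverse=socket.gethostbyaddr, forward=socket.getaddrinfo):
    """Forward-confirmed reverse DNS: the claimed crawler IP must reverse-resolve
    to a hostname in the crawler's domain, and that hostname must forward-resolve
    back to the same IP. A spoofed User-Agent header fails this check."""
    try:
        host = reverse(ip)[0]          # reverse lookup: IP -> hostname
    except OSError:
        return False
    if not host.endswith(tuple(allowed_suffixes)):
        return False                   # hostname not in the crawler's domain
    try:
        ips = {info[4][0] for info in forward(host, None)}  # forward lookup
    except OSError:
        return False
    return ip in ips                   # forward-confirm closes the loop
```

Anyone can put "Googlebot" in a User-Agent string, which is why the forward-confirm step matters: an attacker can't make Google's DNS point a `googlebot.com` hostname at their own IP.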
Wrt general scrapers, that's just an issue beyond AI. That's just scrapers scraping.
If honourable user agents can honour a site owner's wishes, then a 'noml' directive can instruct them not to use the page for machine learning.
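Since the proposal rides on the existing robots conventions, a sketch of what that could look like, assuming 'noml' is adopted as a directive name alongside the established `noindex`/`nofollow` ones (the exact token is the proposal's, not a shipped standard):

```html
<!-- per-page, in the document head -->
<meta name="robots" content="noml">

<!-- or per-response, as an HTTP header (covers non-HTML resources too) -->
X-Robots-Tag: noml
```

The header form matters because images, PDFs and API responses have no `<head>` to put a meta element in, and existing robots directives already support both delivery mechanisms.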
This is as much about protecting content IP as drawing a line in the sand, IMO. Perhaps it also protects brands from misinformation that would be presented by an AI.
Yes, people will continue to steal content; that has happened since the start of the web. The distinction here is about not using content to train AI models that'll steal clicks from content creators.
At small scales it perhaps won't be honoured, but as said, that has always been the case with scraping.
Unless some law is enacted, honouring it stays voluntary.
The robots.txt protocol has never been law but has been honoured, so it's worth hanging on to. It's still the definition of 'good bots' vs 'bad bots' on one level, and that's about as good as site owners have versus whack-a-mole with UA/IP variations.
Prevent Big Tech Scraping Content for Machine Learning via Robots Spec (noml.info)