Unfortunately, since this goes through pict-rs, there's no such information. Hopefully the tooling will improve in the future.
However, this tool is fuzzy by necessity. Most, if not all, of your hits will be false positives (because CSAM is actually rare), so you will need human review for this sort of approach.
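The base-rate math behind this is worth spelling out: even an accurate classifier produces mostly false positives when the target content is very rare. A quick sketch with entirely hypothetical numbers (the real prevalence and classifier accuracy are unknown):

```python
def precision(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Fraction of flagged images that are true positives (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Hypothetical: 1 in 100,000 uploads is actually CSAM, the classifier
# catches 95% of it, and wrongly flags 2% of benign images.
p = precision(prevalence=1e-5, sensitivity=0.95, specificity=0.98)
print(f"{p:.2%} of flagged images are true positives")
```

With those numbers, well under 1% of flags are real hits, which is why human review is unavoidable.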
It’s the primary anti-CSAM protection in the AI Horde, and it’s been running for months. I’ve done enough tests to be convinced it works, but I’m no scientist, of course.
I’m hoping someone will do rigorous research on my approach at some point.
In the interest of creating as little load as possible for the eventual AI Horde cluster, will there be an option to only check federated images?
That would depend on the Lemmy and pict-rs devs providing such a classification. If it exists, I can support it.
Any plan to integrate with lemmy directly and check those as well, removing the post if triggered?
That might be more load than your worker can serve. But this is theoretically already possible using pythorhead and parsing every incoming comment for image links, like an automoderator. You don’t need pictrs-safety for this.
PhotoDNA requires a lot more bureaucratic work than most instance admins can handle, but if you really want it, you can easily plug it into pictrs-safety instead.
However, PhotoDNA will not catch novel generative-AI CSAM.
Hindsight is 20/20. My point is not to jump to conclusions; nobody knows what is going on in their head. They may really be that fucking stupid, as most executive staff tend to be.