grahamsz,

Obviously that's a tried and tested model with email, but I'm not sure there's a great way to implement it on federated servers without keeping the model fine-tuning pretty secret. Any spam-detection AI model that's public can simply be used to train better spam.

My past experience with this sort of thing suggests it's probably better to focus on identifying some kind of humanness score. Since kbin instances are responsible for moderating their own user population (I believe), they could quite easily keep a good running score of how viable an account is. Some of that could be ML that picks up on both the information content and uniqueness of a post, but you can also infer a good amount from how much interaction an account gets from other users who also have good scores.
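Just to make the idea concrete, here's a rough sketch of what that blending could look like. All of the names, numbers, and the `humanness` helper are hypothetical; the point is only that an account's own content score gets mixed with the scores of the accounts that interact with it, iterated so trust propagates:

```python
from collections import defaultdict

# Hypothetical inputs: per-account content scores (0..1, e.g. from a model judging
# how unique/substantive their posts are) and (replier, original_poster) interactions.
content_score = {"alice": 0.9, "bob": 0.8, "carol": 0.7, "spambot": 0.1}
interactions = [("alice", "bob"), ("bob", "carol"), ("carol", "alice")]

def humanness(content_score, interactions, rounds=5, mix=0.5):
    """Blend an account's own content score with the scores of accounts
    that interact with it, repeated for a few rounds so trust propagates."""
    score = dict(content_score)
    for _ in range(rounds):
        received = defaultdict(list)
        for src, dst in interactions:
            received[dst].append(score[src])
        score = {
            user: mix * content_score[user]
            + (1 - mix) * (sum(received[user]) / len(received[user])
                           if received[user] else content_score[user])
            for user in content_score
        }
    return score

print(humanness(content_score, interactions))
```

An account that posts unique content *and* gets replies from other high-scoring accounts ends up high; a bot nobody genuine interacts with stays low even if it posts a lot.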

There's also some interesting stuff in the upvote structure. If you draw a directed graph of who-upvotes-whom, spammers and trolls tend to form much more distinct islands than regular users do.
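For illustration, a minimal sketch of how an instance might surface those islands using networkx (the upvote data and account names here are made up):

```python
import networkx as nx

# Hypothetical upvote records: (voter, author) pairs gathered from federation activity.
upvotes = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),      # regular users
    ("spam1", "spam2"), ("spam2", "spam3"), ("spam3", "spam1"),  # a vote ring
]

g = nx.DiGraph()
g.add_edges_from(upvotes)

# Spam rings tend to show up as small components cut off from the big cluster
# that genuine users form through organic cross-voting.
components = sorted(nx.weakly_connected_components(g), key=len, reverse=True)
main_cluster = components[0]
suspicious_islands = [c for c in components[1:] if len(c) >= 2]

print("largest (presumably organic) cluster:", main_cluster)
print("isolated vote islands worth a closer look:", suspicious_islands)
```

Real data would be messier than clean disconnected components, so in practice you'd probably look at community-detection or conductance-style measures rather than strict islands, but the signal is the same: spammers mostly upvote each other.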
