gmtom,

I mean, if I don’t know how AI works then I should probably hand back my master’s degree.

It’s not a semantic argument; it’s a fact.

The simple-terms explanation of how Stable Diffusion works is that you have two bots. The first bot, the one that actually “generates” the image, only knows how to do one thing: take a blurry image and make it less blurry.

The second bot is the one that’s trained on the data, and all it can do is tell you how closely a given image matches a keyword.

So you generate a completely random grid of pixels. Bot 1 unblurs it a little, then bot 2 evaluates how much it resembles the prompt, then bot 1 takes that evaluation and uses it to unblur the image a bit better, bot 2 evaluates again, and so on until you have an image that passes evaluation. So bot 1 has never even been trained on other people’s data.
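
To make that concrete, here’s a toy sketch of that loop in Python. `denoiser` and `prompt_scorer` are made-up stand-ins for bot 1 and bot 2, not real library functions, and real diffusion pipelines do this in a latent space with a proper noise schedule, but the shape of the loop is the same.

```python
import numpy as np

def generate(prompt, denoiser, prompt_scorer, steps=50, size=(512, 512, 3)):
    # Start from a completely random grid of pixels (pure noise).
    image = np.random.randn(*size)
    guidance = None  # no evaluation yet on the first pass

    for step in range(steps):
        # Bot 1: unblur the image a little, nudged by the latest evaluation.
        image = denoiser(image, guidance, step)

        # Bot 2: rate how closely the current result matches the prompt.
        guidance = prompt_scorer(image, prompt)

    return image
```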

So with that said, saying something like:

“attempting to reproduce the signatures of artists they trained on”

is just nonsense. It’s not attempting to reproduce anything like that. At best you can say it’s recognising the pattern that lots of art has signatures in the bottom right corner, so generations that put a little squiggle in the bottom right corner match the evaluator’s patterns better.

And unless you specifically set up a system to produce slightly modified versions of famous art pieces, and then actively try to get slightly modified versions of a famous art piece out of it, you aren’t going to get any. It’s like saying all digital art is derivative because you literally have a copy-paste function you can use, while ignoring the far more common use of it to create new art pieces.
