Comments

Even_Adder, to technology in AI scam calls imitating familiar voices are a growing problem – here's how they work

TorToiSe can work off of just three ten-second clips when you're using a pre-trained model. No telling if that'll sound any good.
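
For anyone curious, here's a rough sketch of what that looks like with the tortoise-tts Python package. The clip filenames are placeholders, and I haven't verified how it actually sounds with only three samples:

```python
# Sketch: clone a voice from a few short reference clips with TorToiSe
# (neonbjb/tortoise-tts), using its pre-trained models.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()  # downloads the pre-trained models on first use

# Three ~10 second clips of the target speaker, loaded at 22.05 kHz.
voice_samples = [load_audio(path, 22050)
                 for path in ("clip1.wav", "clip2.wav", "clip3.wav")]

speech = tts.tts_with_preset(
    "This is a quick test of the cloned voice.",
    voice_samples=voice_samples,
    preset="fast",  # quicker, lower-quality preset
)

# TorToiSe outputs 24 kHz audio.
torchaudio.save("cloned.wav", speech.squeeze(0).cpu(), 24000)
```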

Even_Adder, to technology in Reddit Sees Copyright Takedowns Peak While Subreddit Bans Drop.

Thanks.

Even_Adder, to technology in Reddit Sees Copyright Takedowns Peak While Subreddit Bans Drop.

How do I request my data?

Even_Adder, to solarpunk in How Talking With Animals Would Change Our World

Imagine their reaction when they find out who’s responsible for heating up the oceans.

Even_Adder, to lemmyshitpost in kamikazed herself

In a fit of rage, she threw herself back in preparation for a rolling tantrum without anything to catch her.

Even_Adder, to memes in switch.....

I wish there was a pure black option on Alexandrite.

Even_Adder, (edited) to technology in [Survey] Can you tell which images are AI generated?

Base SDXL and SD 1.5 can both do text too with the help of ControlNet. I forgot DeepFloyd IF can as well.
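
If anyone wants to try it, here's a minimal sketch with the Hugging Face diffusers ControlNet pipeline. The white-on-black text image is a stand-in for a proper canny edge map, and the prompt and output names are made up:

```python
# Sketch: steering SD 1.5 toward legible text with ControlNet via the
# Hugging Face diffusers pipeline. White-on-black text stands in for a
# proper edge map; a real run would render it with a large TrueType font.
import torch
from PIL import Image, ImageDraw, ImageFont
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

cond = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(cond)
draw.text((200, 240), "OPEN", fill="white", font=ImageFont.load_default())

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a neon sign on a brick wall at night",
    image=cond,
    num_inference_steps=30,
).images[0]
image.save("neon_open.png")
```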

Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

Non-overfitted images would still have this effect (to a lesser extent),

This is a bold claim to make with no evidence, especially when every trained image accounts for less than one byte of data in the model. Even the tiniest image files contain many thousands of bytes, and a single byte isn't even enough to store every character of text; accented letters in Latin-based alphabets and many symbols take two bytes in UTF-8.
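
Back-of-envelope, using rough public figures (a ~2 GB fp16 SD 1.5 checkpoint and a LAION-2B-en training set of about 2.3 billion images), not numbers from the paper:

```python
# Rough capacity-per-image estimate; both figures are approximate.
checkpoint_bytes = 2_000_000_000   # ~2 GB of fp16 weights (SD 1.5)
training_images = 2_300_000_000    # ~2.3B images in LAION-2B-en

bytes_per_image = checkpoint_bytes / training_images
print(f"{bytes_per_image:.2f} bytes of model per training image")  # ~0.87

tiny_jpeg = 10_000                 # even a small thumbnail JPEG
print(f"{tiny_jpeg / bytes_per_image:,.0f}x bigger than that")     # ~11,500x
```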

and this would never happen to a human.

There are plenty of artists who get stuck with same-face, Sam Yang for instance. Then there are the others who can't draw disabled people or people of color; if it isn't a beautiful white female character, they can't do it. It can take a lot of additional training for people to break out of their rut, and some never do.

I'm not going to tell you that latent diffusion models learn like humans, but they are still learning. Have a source: arxiv.org/pdf/2306.05720.pdf

I recommend reading this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards in the US now need a warrant to search your phone.

This guy also does a pretty good job of explaining how latent diffusion models work. You should give it a watch too.

Even_Adder, to memes in How do y'all say GIF?
Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

This paper is just about stock photos or video game art with enough dupes or variations that they didn't get cut from the training set. The repeated images showed up frequently enough to overfit, which is something we already knew. That doesn't really prove whether diffusion models learn like humans or not. Not that I think they do.
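
To make "dupes that didn't get cut" concrete, here's a toy example of the kind of near-duplicate check that would have filtered them out before training. The filenames and threshold are made up, and it leans on the imagehash library:

```python
# Toy near-duplicate filter: images whose perceptual hashes are within a
# small Hamming distance are treated as the same picture.
from PIL import Image
import imagehash

paths = ["stock_photo_001.jpg", "stock_photo_001_watermarked.jpg",
         "game_keyart_v1.png", "game_keyart_v2.png"]

hashes = {p: imagehash.phash(Image.open(p)) for p in paths}

threshold = 8  # bits of difference; small values mean "basically the same image"
for i, a in enumerate(paths):
    for b in paths[i + 1:]:
        if hashes[a] - hashes[b] <= threshold:
            print(f"near-duplicate: {a} ~ {b}")
```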

Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

:(

Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

The butterfly was sus, but I’ve seen my fair share of horrendous horses in broadcast anime. I was tipped off, but I didn’t judge off of just that.

Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

There are things you can look for. When it isn't generated, you can spot parts where the artist got lazy. Sometimes, if the art style allows for it, you can spot leftover simple shapes, and the lighting gives it away.

Even_Adder, to technology in [Survey] Can you tell which images are AI generated?

14/20 isn’t bad I guess.

Even_Adder, to technology in Suing Writers Seethe at OpenAI's Excuses in Court

In the US, fair use lets you use copyrighted material without permission for criticism, research, and artistic expression like literature, art, music, satire, and parody. It balances the interests of copyright holders with the public's right to access and use information. There are rights people can maintain over their work, and there are rights they do not maintain. We are allowed to analyze people's publicly published works, and that's always been to the benefit of artistic expression. It would be awful for everyone if IP holders could take down any criticism, reverse engineering, or indexes they don't like. That would be the dream of every corporation, bully, troll, or wannabe autocrat.

The consultation angle is interesting, but I'm not sure it applies here. Consultation usually involves a direct and intentional exchange of information and expertise, whereas this is an original analysis of data that doesn't emulate any specific intellectual property.

I also don’t think this is a new way to pirate, as long as you don’t reproduce the source material. If you wanted to do that, you could just right-click and “save as”. What this does is lower the bar for entry to let people more easily exercise their rights. Like print media vs. internet publication and TV/Radio vs. online content, there will be winners and losers, but if done right, I think this will all be in service of a more decentralized and open media landscape.
