Lemmyshitpost community closed until further notice

Hello everyone,

We unfortunately have to close the !lemmyshitpost community for the time being. We have been fighting the CSAM (Child Sexual Abuse Material) posts all day, but there is nothing we can do: since we changed our registration policy, they just post from other instances.

We keep working on a solution; we have a few things in the works, but they won't help us right now.

Thank you for your understanding and apologies to our users, moderators and admins of other instances who had to deal with this.

Edit: @Striker, the moderator of the affected community, made a post apologizing for what happened. But this could not have been stopped even with 10 moderators. And if it wasn't his community, it would have been another one. And it is clear this could happen on any instance.

But we will not give up. We are lucky to have a very dedicated team and we can hopefully make an announcement about what’s next very soon.

Edit 2: removed the bit about the moderator tools. That came out a bit harsher than we meant it. It's been a long day, and having to deal with this kind of stuff got some of us a bit salty, to say the least. Remember, we also had to deal with people posting scat not too long ago, so this isn't the first time we've felt helpless. Anyway, I hope we can announce something more positive soon.

falsem,

Looks like Google has some tooling available that might help: https://protectingchildren.google/tools-for-partners

Probably other options too.

Ab_intra,
@Ab_intra@lemmy.world avatar

Yeah. The thing is that this should be implemented in Lemmy's core.

snowe,
@snowe@programming.dev avatar

It really shouldn’t. There are plenty of existing tools already created for this, and giving anyone who runs a Lemmy server access to the CSAM hashes is a terrible, terrible, terrible idea. Use Cloudflare or another existing solution to stop CSAM. Do not put that into the Lemmy code.

Ab_intra,
@Ab_intra@lemmy.world avatar

Can you elaborate on why this is a terrible idea?

snowe,
@snowe@programming.dev avatar

I have detailed why in plenty of my other comments. Here is one. programming.dev/comment/2426720

dragontamer,

Not that I’m familiar with Rust at all, but… perhaps we need to talk about this.

The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn’t seem to be on the developers’ roadmap for the time being. There are just two full-time developers on this project and they seem to have other priorities. No offense to them, but it doesn’t inspire much faith for the future of Lemmy.

Let’s be productive. What exactly are the moderation features needed, and what would be easiest to implement in the Lemmy source code? Are you talking about a mass ban of users from specific instances? A ban of new accounts from instances? Like, what moderation tool exactly is needed here?

TsarVul,
@TsarVul@lemmy.world avatar

I guess it’d be a matter of incorporating something that hashes whatever it is that’s being uploaded. One takes that hash and checks it against a database of known CSAM. If match, stop upload, ban user and complain to closest officer of the law. Reddit uses PhotoDNA and CSAI-Match. This is not a simple task.
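
For illustration, the exact-match version of that flow is only a few lines. A minimal sketch in Python, with an invented placeholder database (the real hash lists behind PhotoDNA and CSAI-Match are deliberately not public):

```python
import hashlib

# Invented stand-in for a real hash database; actual CSAM hash lists
# are only shared with vetted partners, never published.
KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def check_upload(image_bytes: bytes) -> str:
    """Hash the upload and refuse it if it matches the database."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    if digest in KNOWN_BAD_SHA256:
        # A real system would also quarantine the file, ban the uploader,
        # and report to the relevant authority.
        return "blocked"
    return "accepted"
```

The catch, as the next comments point out, is that an exact cryptographic hash breaks on any single-pixel change; real systems use perceptual hashes instead.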

Touching_Grass,

Couldn’t one small change in the picture change the whole hash?

TsarVul,
@TsarVul@lemmy.world avatar

Good question. Yes. Also artefacts from compression can fuck it up. However hash comparison returns percentage of match. If match is good enough, it is CSAM. Davai ban. There is bigger issue however for developers of Lemmy, I assume. It is a philosophical pizdec. It is that if we elect to use PhotoDNA and CSAI Match, Lemmy is now at the whims of Microsoft and Google respectively.
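
Neither PhotoDNA nor CSAI-Match is publicly available, but the open-source `imagehash` library illustrates the percentage-of-match idea; a minimal sketch, with an invented 90% threshold:

```python
# pip install Pillow ImageHash
from PIL import Image
import imagehash

def similarity(path_a: str, path_b: str) -> float:
    """Return similarity in [0, 1] between two images via perceptual hashing."""
    h_a = imagehash.phash(Image.open(path_a))  # 64-bit perceptual hash
    h_b = imagehash.phash(Image.open(path_b))
    # Subtracting two ImageHash objects gives the Hamming distance (0..64);
    # small edits and compression artifacts flip only a few bits.
    return 1.0 - (h_a - h_b) / 64.0

# Invented policy; file paths are placeholders:
if similarity("upload.jpg", "known_match.jpg") >= 0.90:
    print("stop upload, ban user, report to authorities")
```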

what_is_a_name,

Mod tools are not Lemmy. Give admins and mods an option. Even a paid one. Hell. Admins of Lemmy.world could have us donate extra to cover costs of api services.

TsarVul,
@TsarVul@lemmy.world avatar

I agree. Perhaps what Lemmy developers can do is they can put slot for generic middleware before whatever the POST request is in Lemmy API for uploading content? This way, owner of instance can choose to put whatever middleware for CSAM they want. This way, we are not dependent on developers of Lemmy for solution to pedo problem.
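
A rough sketch of what that hook could look like, in Python for brevity (Lemmy itself is Rust, and none of these names are real Lemmy APIs):

```python
from typing import Callable, List

# Operator-configured chain of checks that runs before an upload is stored.
UploadCheck = Callable[[bytes], bool]  # True means the upload may proceed

upload_checks: List[UploadCheck] = []

def register_upload_check(check: UploadCheck) -> None:
    upload_checks.append(check)

def handle_upload(image_bytes: bytes) -> str:
    # Every registered middleware gets a chance to veto before storage.
    for check in upload_checks:
        if not check(image_bytes):
            return "rejected"
    return "stored"  # continue with the normal upload path

# An instance owner plugs in whatever scanner they want:
register_upload_check(lambda data: len(data) <= 10_000_000)  # trivial size cap
```

Core Lemmy would only define where the hook runs; each instance decides what runs there.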

Lmaydev,

Honestly I’d rather that than see shit like this any day.

Serinus,

The bigger thing is that hash detection tools don’t want to give access to just anyone, and just anyone can run a Lemmy instance. The concern is that you’re effectively giving the CSAM people a way to know if they’ll be detected.

Perhaps they can allow some of the biggest Lemmy instances to use the tech, but I wouldn’t expect it to be available to everyone.

shagie,

Facebook and Reddit don’t have local CSAM detection but rather use Google’s APIs.

This isn’t something that any average user can get access to. Even the largest Lemmy instances are small compared to Reddit and Facebook… and those sites don’t do local scanning either.

Part of this is also that this isn’t just detection and blocking, but also automated reporting.

Furthermore, Lemmy is AGPL, so shipping a proprietary scanning implementation inside a Lemmy instance would run the risk that it couldn’t remain closed source (an AGPL license violation).

Nollij,

If they hash the file binary data, like CRC32 or SHA, yes. But there are other hash types out there, which are more like “fingerprints” of an image. Think of how Shazam or Sound Hound can recognize a song playing, despite the extra wind, static, etc that’s present. There are similar algorithms for images/videos.

No idea how difficult those are to implement, though.

Railcar8095,

There are FOSS applications that can do that (czkawka, for example). What I’m not sure about is whether the specific algorithm used is available and, more importantly, whether the CSAM hashes are available to general audiences. I would assume that if they are, any attacker could check first and make just the right amount of changes.

reverendsteveii,

One bit, in fact. Luckily there are other ways of comparing images without actually showing them to human eyes that allow you to calculate a percentage of similarity.

diffuselight,

None of that really works anymore in the age of AI inpainting. Hashes / perceptual hashing worked well before, but the people doing this are specifically interested in causing destruction and chaos with this content. They don’t need it to be authentic to do that.

It’s a problem that requires AI on the defensive side, but even that is just going to be an eternal arms race. This problem cannot be solved with technology, only mitigated.

The ability to exchange hashes on moderation actions against content may offer a way out, but it will change the decentralized nature of everything, basically bringing us back to the early days of Usenet, the Usenet Death Penalty, etc.

dragontamer,

Not true.

A simple CAPTCHA got rid of a huge set of idiotic script-kiddies. CSAM being what it is, could (and should) result in an immediate IP ban. So if you’re “dumb” enough to try to upload a well-known CSAM hash, then you absolutely deserve the harshest immediate ban automatically.


You’re pretty much like the story of the economist who refuses to believe that $20 exists on a sidewalk. “Oh, but if that $20 really existed on the sidewalk there, then it would have been arbitraged away already”. Well guess what? Human nature ain’t economic theory. Human nature ain’t cybersecurity.

Idiots will do dumb, easy attacks because they’re dumb and easy. We need to defend against the dumb-and-easy attacks, before spending more time working on the harder, rarer attacks.

AustralianSimon,
@AustralianSimon@lemmy.world avatar

You don’t get their IP when they post from other instances. I’m surprised this hasn’t resulted in defed.

anlumo,

Well, my home instance has defederated from lemmy.world due to this, that’s why I had to create a local account here.

AustralianSimon,
@AustralianSimon@lemmy.world avatar

I mean defedding the instances the CSAM is coming from but also yes.

rolaulten,

I’m sorry, but you don’t want to use permanent IP bans. Most residential connections get addresses via DHCP, meaning banning by IP only has a short-term positive effect.

That said automatic scanning of known hashes, and automatically reporting to relevant authorities with relevant details should be doable (provided there is a database somewhere - I honestly have never looked).

MrPoopyButthole,
@MrPoopyButthole@lemmy.world avatar

I think it would be an AI autoscan that flags some posts for mod approval before they show up to the public, and perhaps more fine-grained controls for how media is posted, like only allowing certain image hosting sites and no directly uploaded images.

Agamemnon,
@Agamemnon@lemmy.world avatar

Speculating:

Restricting posting from accounts that don’t meet some adjustable criteria. Like account age, comment count, prior moderation action, average comment length (upvote quota maybe not, because not all instances use it)

Automatic hash comparison of uploaded images with database of registered illegal content.
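
A sketch of what the first idea could look like as adjustable criteria, with made-up thresholds:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    comment_count: int
    prior_mod_actions: int

# Instance-adjustable thresholds; these numbers are made up for illustration.
MIN_AGE_DAYS = 7
MIN_COMMENTS = 20
MAX_MOD_ACTIONS = 0

def may_post_media(account: Account) -> bool:
    """Gate media uploads behind basic account-reputation criteria."""
    return (
        account.age_days >= MIN_AGE_DAYS
        and account.comment_count >= MIN_COMMENTS
        and account.prior_mod_actions <= MAX_MOD_ACTIONS
    )

# A brand-new account cannot post media yet:
print(may_post_media(Account(age_days=0, comment_count=0, prior_mod_actions=0)))
```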

dragontamer,

On various old-school forums, there’s a simple (and automated) system of trust that progresses from new users (who might be spam)… where every new user might need a manual “approve post” before it shows up. (And this existed in Reddit in some communities too).

And then full powers are granted to the user eventually (or, in the case of StackOverflow, automated access to the moderator queue).

MossyFeathers,

What are the chances of a hash collision in this instance? I know accidental hash collisions are usually super rare, but with enough people it’d probably still happen every now and then, especially if the system is designed to detect images similar to the original illegal image (to catch any minor edits).

Is there a way to use multiple hashes from different sources to help reduce collisions? For example, checking both the MD5 and SHA256 hashes instead of just one or the other, and then it only gets flagged if both match within a certain degree.

TsarVul,
@TsarVul@lemmy.world avatar

Traditional hash like MD5 and SHA256 are not locality-sensitive. Can’t be used to detect match with certain degree. Otherwise, yes you are correct. Perceptual hashes can create false positive. Very unlikely, but yes it is possible. This is not a problem with perfect solution. Extraordinary edge cases must be resolved on a case by case basis.

And yes, simplest solution must be implemented first always. Tracking post reputation, captcha before post, wait for account to mature before can post, etc. The problem is that right now the only defense we have access to are mods. Mods are people, usually with eyeballs. Eyeballs which will be poisoned by CSAM so we can post memes and funnies without issues. This is not fair to them. We must do all we can, and if all we can includes perceptual hashing, we have moral obligation to do so.
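
On the earlier multiple-hashes question: it does work once the cryptographic hashes are swapped for perceptual ones, i.e. flag only when two independent algorithms both agree. A sketch with the open-source `imagehash` library (the distance threshold is invented):

```python
# pip install Pillow ImageHash
from PIL import Image
import imagehash

MAX_DISTANCE = 6  # bits out of 64; invented threshold for illustration

def is_match(path_a: str, path_b: str) -> bool:
    """Flag only if two different perceptual hashes BOTH call it a match."""
    img_a, img_b = Image.open(path_a), Image.open(path_b)
    phash_close = (imagehash.phash(img_a) - imagehash.phash(img_b)) <= MAX_DISTANCE
    dhash_close = (imagehash.dhash(img_a) - imagehash.dhash(img_b)) <= MAX_DISTANCE
    # Requiring agreement cuts false positives at the cost of some recall.
    return phash_close and dhash_close
```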

MossyFeathers, (edited )

Something I thought about that might be helpful is if mods had the ability to add a post delay on a community basis. Basically, the delay would be moderator adjustable, but only moderators and admins would be able to see the post for X number of minutes after being posted. It’d help for situations like ongoing attacks where you don’t necessarily want to have to manually approve posts, but you want a chance to catch any garbage before the post goes public.

Edit: and yeah, one of the reasons I’m aware that perceptual hashes can have collisions is because a number of image viewers/cataloging tools like xnview mp or hydrus network use hash collisions to help identify duplicate images. However, I’ve seen collisions between unrelated images when lowering the sensitivity which is why I was wondering if there was a way to use multiple hashing algorithms to help reduce false positives without sacrificing the usefulness of it.

Natanael,

Or just make posts approval-only with a mod queue.

fkn,

I’m surprised this isn’t linked; there are services that do this for you.

And they are free.

blog.cloudflare.com/the-csam-scanning-tool/

MsPenguinette,

I believe there are several readily available databases of hashes of CSAM material for exactly this kind of scanning. Looks like there are some open source ones.

Some top results: github.com/topics/csam

This looks to be the top project: prostasia.org/project/csam-scanning-plugins/

BuddyTheBeefalo,

Could they not just change one pixel to get another hash?

gabe,

I think having a means of viewing uploaded images as an admin would be helpful, as well as disabling external image caching. Like an “uploaded” gallery for admins to view that can potentially hook into PhotoDNA/CSAI-Match or whatever.

BURN,

Probably hashing and scanning any uploaded media against some of the known DBs of CSAM hashes.

Iirc that’s how Reddit/FB/Insta/Etc. handle it

shagie,

They’re sent to a 3rd party that does the checks. For example developers.cloudflare.com/cache/…/csam-scanning/

The actual DB of hashes isn’t released to the public as it would enable those who traffic in such content to use it to find the material that doesn’t match much more easily.

protectingchildren.google/#tools-to-fight-csam

That appears to be the one that Facebook and Reddit use.

CoderKat,

The sad thing is that all we can usually do is make it harder for attackers. Which is absolutely still worth doing, to be clear. But if an attacker wants to cause trouble badly enough, there are always ways around everything. E.g., image detection can be foiled with enough transformation, and account age limits can be gotten past by a patient attacker. Minimum karma can be botted (even easier than ever with AI), and karma is especially easy to bot on Lemmy because you can just spin up an instance with all the bots your heart desires. If posts have to be approved, attackers can even just hotlink to innocent images and then change the image after it’s approved.

Law enforcement can do a lot more than we can, by subpoenaing ISPs or VPNs. But law enforcement is slow and unreliable, so that’s also imperfect.

Serinus,

The best feature the current Lemmy devs could work on is making the process to onboard new devs smoother. We shouldn’t expect anything more than that for the near future.

I haven’t actually tried cloning and compiling, so if anyone has comments here they’re more than welcome.

AustralianSimon,
@AustralianSimon@lemmy.world avatar

Reddit had automod which was highly configurable.

over_clox,

Reddit automod is also a source for all the porn communities. Have you ever checked automod comment history?

Yeah, I have. Like 2/3 of automod comments are in porn communities.

reddit.com/…/a_dump_of_random_subreddits_from_aut…

AustralianSimon,
@AustralianSimon@lemmy.world avatar

What? Reddit automod is not a source for porn. What you’re seeing is the large quantity of content it reacts to there.

It literally reads your config in your wiki and performs actions based on that. The porn communities using it are using it to moderate their subs. You can look at the post history: www.reddit.com/user/AutoModerator It is commenting on posts IN those communities as a reaction to triggers, but it isn’t posting porn (unless they put that in their config).

Not worth it if you don’t moderate on Reddit, but read the how-to docs for Reddit automod; it is an excellent tool for spam management, and the source from before Reddit acquired it (and made it shit) is open. www.reddit.com/wiki/…/full-documentation

over_clox,

No shit, ya don’t say?

Where the hell you think I got that list from? I literally filtered every single subreddit that AutoModerator replied in for like three months.

Bruh you’re preaching to the person that accumulated the data. That’s the data it puked up. I can’t help it that most of them happen to be filth communities.

AustralianSimon, (edited )
@AustralianSimon@lemmy.world avatar

So you should understand that what you said is invalid. Automod doesn’t post porn without a subreddit owner configuring it to, and just because 2/3 of its comments are in NSFW subs doesn’t mean it is posting that content; it’s just working more there.

We could 100% take advantage of a similar tool, maybe with some better controls on what mods can make it do. I’m working to bring BotDefence to Lemmy because it is needed.

over_clox,

You completely missed the point.

By the statistics of the data I found, most of the subreddits using AutoModerator are filth communities.

So you can reverse that, check AutoModerator comment history, and find a treasure trove of filth.

I can’t help that these are the facts I dug up, but yeah AutoModerator is most active in porn communities.

AustralianSimon,
@AustralianSimon@lemmy.world avatar

Too stupid to argue with. You don’t even understand your own “data”.

over_clox,

No no, I am well aware it’s a bot account which is programmed by moderators to filter out certain things and perform other automated tasks.

It just so happens that many of the communities that AutoModerator has to take action in are filth communities.

snowe,
@snowe@programming.dev avatar

That statement is just outright wrong, though. They could easily use Cloudflare’s CSAM scanning and it never would have been a problem. A lot of people in these threads, including admins, have absolutely no idea what they’re talking about.

sunaurus,
@sunaurus@lemm.ee avatar

Cloudflare CSAM protection is not available outside of the US, unfortunately.

snowe,
@snowe@programming.dev avatar

There are several other solutions including ones from Microsoft and Facebook.

SubArcticTundra,

I was just discussing this under another post, and it turns out that the Germans have already developed a rule-based auto-moderator that they use on their instance:

github.com/Dakkaron/SquareModBot

This could be adopted by lemmy.world by simply modifying the config file

CreeperODeath,

Yeah I don’t blame you for being frustrated

Iceblade02,

Wow, this is awful. Huge kudos to y’all for holding on through this. It’s obviously a deliberate attack on the fediverse by malicious actors.

HexesofVexes,

Sounds like the 4chan raids of old.

Batten down, report the offenders to the authorities, and then clean up the mess!

Good job so far ^_^

johnthebeboptist,

Yeah, in good ways and in bad, this place is a lot like “the old days of the internet”, but thankfully more in the good way than the bad. People keep complaining about shit I haven’t seen for a second; constantly there are actions from the mod/admin side about shit I’ve never even seen, etc. Even without proper mod tools and shit, it seems like everything is working out quite well. For which I, and all of us, owe a huge thank you to the people working on this stuff.

Thank you people! Thank you for making this place feel like home and something greater than what we’ve had for a long ass time.

HelloHotel,
@HelloHotel@lemmy.world avatar

Mods, keep yourselves safe, be open to your support groups (don’t isolate), and clear/shred your cache often.

We will get thru this together

BoldRaven,

They can out post us. They have nothing better to do… 😬

ZestycloseReception8,

Isn’t this what 8chan is for? Seriously, what the fuck is wrong with people who think that CSAM is appropriate shitpost material? The Internet really is a fucked up place. Is there a way to set up something to automatically remove inappropriate posts, similar to YouTube’s system of automatically removing inappropriate content?

HelloHotel,
@HelloHotel@lemmy.world avatar

It’s an attack on the fediverse.

BoldRaven,

thanks, soldier 😆

ArtisinalBS,

Legally, in the US, you would have to remove it from public view, keep it stored on the server, collect identifying information about the uploader, and send all that data to the authorities, all while maintaining a low false-positive rate.
Kind of a tall order.

middlemanSI,

I know no details but smell Elon stink…

BoldRaven,

it tis a somewhat identifiable musk indeed… i like to say it’s that Pepé Le Pew

scarabic,

Thank you.

PugJesus,

Genuine question: won’t they just move to spamming CSAM in other communities?

givesomefucks,

With how slow Lemmy moves anyways, it wouldn’t be hard to make everything “mod approved” if it’s a picture/video.

Blaze,
@Blaze@discuss.tchncs.de avatar

This, or blocking self-hosted pictures.

ButtholeSpiders,
@ButtholeSpiders@startrek.website avatar

Honestly, this sounds like the best start until they develop better moderation tools.

Anonymousllama,

This seems like the better approach. Let other sites who theoretically have image detection in place sort this out. We can just link to images hosted elsewhere

user224,
@user224@lemmy.sdf.org avatar

I generally use imgur anyway because I don’t like loading my home instance with storage + bandwidth. Imgur is simply made for it.

devious,

Yes, and only whitelist trusted image hosting services (that is, ones that have the resources to deal with any illegal material).
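
A minimal sketch of such an allowlist check; the hosts listed are invented examples, and a real instance would configure its own:

```python
from urllib.parse import urlparse

# Invented example allowlist; a real instance would configure its own.
TRUSTED_IMAGE_HOSTS = {"imgur.com", "catbox.moe"}

def is_allowed_image_url(url: str) -> bool:
    """Accept only links whose host is (a subdomain of) an allowlisted host."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in TRUSTED_IMAGE_HOSTS)

print(is_allowed_image_url("https://i.imgur.com/abc123.png"))  # True
print(is_allowed_image_url("https://evil.example/a.png"))      # False
```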

Darth_vader__,
@Darth_vader__@discuss.online avatar

The problem is those sites can also misuse the same tools in ways that harm the privacy of their users. We shouldn’t resort to “hacks” to fix real problems, like using client-side scanning to break E2EE. One solution might be an open-source, community-maintained automod bot…

assassin_aragorn,

This seems like a really good solution for the time being.

skullgiver,
@skullgiver@popplesburger.hilciferous.nl avatar

How many mods do you think will want to delete pictures of child abuse all day long? Normal users won’t be affected, but the mods will just leave.

Besides that, images and videos can be embedded into comments and DMs as well. Lemmy needs a better moderation system to fight these trolls.

droans,

Not-so-fun fact: the FBI has a hard limit on how long an individual agent can spend on CSAM-related work. Any agent that does so is mandated to go to therapy afterwards.

It’s not an easy task at all and does emotionally destroy you. There’s a reason why you can find dozens of different tools to automate the detection and reporting.

skullgiver,
@skullgiver@popplesburger.hilciferous.nl avatar

Exactly. You can only see so many pictures of babies getting raped before you start going crazy.

Unfortunately, there is no free and open source CSAM hash database. Companies like Microsoft provide free APIs, but the reliable services are either centralised by big tech companies or used exclusively by law enforcement. There is scanning technology that is open source, but without a database of verified hashes that technology is rather useless.

CharlesDarwin,
@CharlesDarwin@lemmy.world avatar

Yep. I know someone that does related work for a living, and there are definite time limits and so on for exactly the reasons you say. This kind of stuff leaves a mark on normal people.

SubArcticTundra,

Or it could even just ask 50 random instance users to approve it. To escape this, >50% of accounts would have to be bots, which is unlikely.

XylightNotAdmin,

But then people would have to see the horrible content first

SubArcticTundra,

That definitely is a downside

Blaze,
@Blaze@discuss.tchncs.de avatar

Thank you for your work.

That also means that they could post to other communities, so I guess moderators need to be vigilant

aibler,

Is there an app or anything that can send a notification if anyone posts anything to a community I mod? Am I able to turn off image posts?

Blaze,
@Blaze@discuss.tchncs.de avatar

For image posts, ask your instance admins (as you are on LW, they are probably working on something).

For notifications, not that I know of, but I guess it might be possible using RSS.

aibler,

Thanks so much! I appreciate the help!

godless,
@godless@lemmy.world avatar

Fucking bastards. I don’t even know what beef they have with the community and why, but using THAT method to get them to shut down is nothing short of despicable. What absolute scum.

aceshigh,
@aceshigh@lemmy.world avatar

jealousy is a powerful emotion.

MajorHavoc,

I would not be shocked to learn this was being organized or funded by one or several of the major stockholders of current popular social media sites. (Speaking of fucking bastards.)

krayj,

How does closing lemmyshitpost do anything to solve the issue? Isn’t it a foregone conclusion that the offenders would just start targeting other communities or was there something unique about lemmyshitpost that made it more susceptible?

Whitehat93875,
@Whitehat93875@lemmy.world avatar

They also changed the account sign ups to be application only so people can’t create accounts without being approved.

Cabrio,

It stops their instance from hosting CSAM and removes the legal liability of dealing with something they don’t have the capacity to handle at this point in time.

How would you respond to having someone else forcibly load up your pc with child porn over the Internet? Would you take it offline?

krayj, (edited )

How would you respond to having someone else forcibly load up your pc with child porn over the Internet? Would you take it offline?

But that’s not what happened. They didn’t take the server offline. They banned a community. If some remote person had access to my pc and they were loading it up with child porn, I would not expect that deleting the folder would fix the problem. So I don’t understand what your analogy is trying to accomplish because it’s faulty.

Also, I think you are confusing my question as some kind of disapproval. It isn’t. If closing a community solves the problem then I fully support the admin team actions.

I’m just questioning whether that really solves the problem or not. It was a community created on Lemmy.world, not some other instance. So if the perpetrators were capable of posting to it, they are capable of posting to any community on lemmy.world. You get that, yeah?

My question is just a request for clarification. How does shutting down 1 community stop the perpetrators from posting the same stuff to other communities?

Cabrio,

It’s not meant to solve the problem, it’s meant to limit liability.

krayj,

How does it limit liability when they could continue posting that content to any/every other community on lemmy.world?

Cabrio, (edited )

But it does remove the immediate issue of CSAM coming from Shitpost, so World isn’t hosting that content.

Double_A,
@Double_A@discuss.tchncs.de avatar

Shitpost is not the only community on World, FFS!

stealthnerd,

They’re taking a whack-a-mole approach for sure but it’s either that or shut the whole instance down. I imagine their hope is that either the bad guys give up/lose interest or that it buys them some time.

Either way, it shows they are taking action which ultimately should help limit their liability.

Ghostalmedia,
@Ghostalmedia@lemmy.world avatar

Fact of the matter is that these mods are not lawyers, and even if they were not liable, they would not have the means to fight this in court if someone falsely, or legitimately, claimed they were liable. They’re hobbyists with day jobs.

I also mod a few large communities here, and if I’m ever in that boat, I would also jump. I have other shit to do, and I don’t have the time or energy to fight trolls like that.

If this was Reddit, I’d let all the paid admins, legal, PR, SysOps, engineers and UX folks figure it out. But this isn’t Reddit. It’s all on the hobbyist mods to figure it out. Many are not going to have the energy to put up with it.

Ghostalmedia,
@Ghostalmedia@lemmy.world avatar

It doesn’t solve the bigger moderation problem, but it solves the immediate issue for the mods who don’t want to go to jail for modding a community hosting CSAM.

krayj,

Doesn’t that send a clear message to the perpetrators that they can cause any community to be shut down just by posting CSAM to it? What makes you or anyone else think that, upon seeing that lemmyshitpost is gone, the perpetrators will all just quit? Was lemmyshitpost the only community they were able to post in?

Ghostalmedia,
@Ghostalmedia@lemmy.world avatar

Yup. The perpetrators win.

If you were in their shoes, would you want to risk going to jail for kiddy porn, risk having your name associated with CSAM online, or drain your personal savings account to fight these folks?

These mods are not protected by a well funded private legal team. This isn’t Reddit.

krayj,

You don’t have to explain how liability works. I get it. What I don’t get is how removing that specific community is going to limit their liability when the perpetrators will just target a different community.

Whitehat93875,
@Whitehat93875@lemmy.world avatar

Sign-ups are now manual-approval applications, so no more automated sign-ups from them. If they have existing accounts and target another community, it’ll be closed as well and those accounts banned. There isn’t a stream of new accounts, though, because all accounts going forward need to be manually approved.

ttmrichter,
@ttmrichter@lemmy.world avatar

One of the ways you avoid liability is you show that you’re actively taking measures to prevent illegal content.

MsPenguinette,

The perps are taking a big risk as well. Finding and uploading CSAM means being in possession of it. So we can at least take solace in knowing it’s not a tool that just anyone will use to take down a community.

Uploading to websites counts as distribution. The authorities will actually care about this. It’s not just some small thing that is technically a crime; it’s big-time crime being used for something petty.

So while the perp might win in the short term, they are risking their lives using this tactic. I’m not terribly worried about it becoming a common tactic.

If anything, if I were the one doing this, I’d be worried that I might be pissing off the wrong group of people. If they keep at it and become a bigger problem, everyone is going to be looking for them. And then that person is going to big boy prison.

krayj,

That is a great point. I don’t know if the admin team are proactively reporting that activity to law enforcement, but I hope they are.

bemenaker,

I assume you’ve contacted the FBI, but if not PLEASE DO.

emergencyfood,

lemmy.world is based in Finland.

bemenaker,

Didn’t realize that. Thanks.

raptir,

Yes, the Finnish Bureau of Investigation

recapitated,

They’re the ones that can finish the job.

Dogs_cant_look_up,

The Finnish Bureau of Icecoldmotherfuckers

Bluetreefrog,

Gold!

SubArcticTundra,

In that case probably Europol or the Finnish police/cybersecurity agency or something?

El_illuminacho,

It’s always fun to see Americans assume the Internet is all American as well.

Moghul,

Threads on this issue have been posted both on Reddit and on Lemmy before, and despite Americans protesting that they don’t mean it or they don’t think that way, some people do actually say that in the comments. They assume everyone is American because “the internet is American” and “the website is American”.

What the fuck is “the bay area”?

ObviouslyNotBanana,
@ObviouslyNotBanana@lemmy.world avatar

What the fuck is “the bay area”?

I assume that’s where the pirates are.

despotic_machine,
@despotic_machine@lemmy.world avatar

And the real estate sharks!

bemenaker,

What the fuck is “the bay area”?

The San Francisco Bay Area, aka Silicon Valley, aka the BIRTHPLACE of computers. DARPA (US military) created the internet, but when it was opened to the public, again it was “the Bay Area” that made it usable to the world.

It has a little bit of importance to computing.

Moghul,

The point wasn’t that it’s not important; it’s that the name is so generic as to mean nothing to many outside the US. The reason I mentioned it is that I have had to Google which bay area Americans refer to.

Consider me an alien confused about which moon you’re talking about when you say ‘the moon’

bemenaker,

Fair point. Yes, Americans are egocentric in most cases. My original comment, which got everyone’s panties in a bunch, was because most of the tech companies and web hosting sites are in the US, or based here, giving US law enforcement jurisdiction. I didn’t know where lemmy.world was based until someone in this thread told me.

obinice,
@obinice@lemmy.world avatar

Most people in the world don’t live in that one very specific country, it’s somewhat presumptuous to assume they do, and I’m a bit tired of those assumptions only ever coming from people from a very specific place.

I put up with it for a long while without saying anything, but it’s been 15 years and they’re doing it more now than ever. I’m kinda… worn down now.

Can’t we all just accept that the world is vast and amazing, and exists beyond our own borders? :-(

qjkxbmwvz,

It’s not a crazy leap of faith to assume Lemmy demographics are similar to reddit demographics, where the USA makes up the plurality by a huge margin statista.com/…/reddit-global-active-user-distribu…

Fedizen,

a lot of the fediverse is rooted in the EU so I’m not entirely sure the demographics are 1:1 with reddit.

Dark_Arc,
@Dark_Arc@social.packetloss.gg avatar

Can’t we all just accept that the world is vast and amazing, and exists beyond our own borders? :-(

So can I count on you to make sure every comment you make on social media doesn’t incorrectly reference the wrong country, culture, or generally get something wrong?

Y’all act like Americans just get up in the morning and say “to hell with it, everywhere is 'Merica.” The person just tried to help; hell they might not even be an American and just assumed (at least) one of the site admins is.

it’s been 15 years and they’re doing it more now than ever

The pure unnecessary drama over what’s easily a simple mistake is ridiculous. I can’t even comprehend why you even care? And beyond that, why not just tell this person “well actually, it’s X they should call because Y”?

Like this good egg did … sh.itjust.works/comment/2716913

Redredme,

Yes. Everywhere I go Americans (and us Dutch) act like the entire world is theirs. (but I’m American! Or in case of us Dutch: ik zit hier en ik moet bier. En die bbq gaat niet uit. En techno moet hard.)

I more or less concur with the other guy. He said it in a shitty way though.

Problem is, there is no “world police”. ('obligatory: ’ MURICA FUCK YEAH!)

Calling the FBI, the Essex police department, Clouseau’s Sûreté, or even Interpol will do jack shit in 99% of these cases because of the simple fact that they don’t have any jurisdiction. Add to that the fact that they couldn’t care less because narcotics and gang violence are bigger problems, and you end up with reality: it will only cost time and solve nothing. Why? Because IT crimes are hard and often pricey to investigate, with very little returns. (Yeeeh, we arrested a 14-year-old pimply faced youth, a minor, so he will be out in 2 days with a slap on the wrist.)

And finally: The FBI does not have global reach. This is a global platform.

tsz,

Wat betekent moet bier of moet hard?

Aceticon, (edited )

In this context:

“ik moet bier” - I need beer/I must have beer (literally “I must beer”)

“techno moet hard” - Techno (music) must (go) hard.

I vaguely recognize this whole thing as a Dutch saying. (Not Dutch myself.)

The pleasant wordplay in that saying is because “moet” there is used with two slightly different meanings, as “having need” and as “must”.

tsz,

Yeah I’ve just not seen moet alone like that. I like it lol.

Aceticon,

Well, the first form is a pretty common variant for basic human needs if you want to make it sound desperate in a funny way (for example, “ik moet voed” rather than “ik moet eten”), but the second, whilst making sense, is not common and was probably chosen for literary reasons (whilst not 100% sure, I think these might be lyrics from a Dutch song).

Dark_Arc,
@Dark_Arc@social.packetloss.gg avatar

And finally: The FBI does not have global reach. This is a global platform.

www.fbi.gov/contact-us/international-offices

wanderingmagus,

The FBI work hand in hand with the CIA, who can and do set up black sites all over the world for enhanced interrogation and extrajudicial detention, and work extremely closely with the Five Eyes and NATO. I’d call that global reach.

bemenaker,

The FBI does work with Interpol, and every national police agency the US is friendly with.

redexplosives,

It’s not that deep

bemenaker,

And most of the big tech companies are based where? And most of the big hosting services are based where? So, assuming contacting the US authorities makes sense: I don’t know where lemmy.world is doing their hosting, but since most of the big providers are based in the US, which would put it under US jurisdiction, it isn’t an unreasonable assumption.

ObviouslyNotBanana, (edited )
@ObviouslyNotBanana@lemmy.world avatar

They* should contact Europol.

cactusupyourbutt,

They should contact the local authorities; that’s the easiest, probably.

ObviouslyNotBanana,
@ObviouslyNotBanana@lemmy.world avatar

Might be.

newIdentity,

They probably haven’t. Also, they won’t do anything as long as you aren’t a large corporation. Even if you are, they might not do anything. See the Epic Games hack in the early 2000s.

Severed_Fate,

What’s CSAM?

REdOG,
@REdOG@lemmy.world avatar

Child sexual assault/abuse material

OhStopYellingAtMe,
@OhStopYellingAtMe@lemmy.world avatar

Child sex abuse materials

FontMasterFlex,

Honestly why can’t we just say what it is instead of giving every-fucking-thing an anagram that people have to ask what it is in the first place.

jtablerd,

*acronym

PickTheStick,

*Initialism. Acronyms have to be words, like ACORN or RICE.

jtablerd,

Thank you, sir or madam, I stand corrected.

(excellently done, sincere thanks)

SgtAStrawberry,

I think it falls into the category of giving bad things new, lesser-bad words so they don’t sound so bad anymore, because the old word made people feel bad. I have noticed it more and more online, especially with stuff that is related to sex; it is either new words or replacing the middle with *.

wildbus8979,

You’ve got it backwards, it’s completely the opposite. The old terminology associated it with something that is perhaps perverted but totally legal and done by consenting adults. CSAM makes it 100% clear that this is illegal behavior/activities that creates child victims.

FontMasterFlex,

idk why you’re being downvoted. no one else seemed to want to reply. just downvote and move on. seems like reddit lives on in lemmy afterall.

That being said, this generation of people is soft. they can’t even say the words.

SgtAStrawberry,

Could not have said it better myself.

LinkOpensChest_wav,
@LinkOpensChest_wav@midwest.social avatar

This is what it is, though. The entire point of switching to the term “CSAM” is because it more accurately describes the content as abusive.

candybrie,

I think they’re asking people to type out “child sex abuse materials” instead of “CSAM”. It’s not a common enough acronym that people know what it means without explanation, as evidenced by every post using CSAM having someone ask what it means.

LinkOpensChest_wav,
@LinkOpensChest_wav@midwest.social avatar

There are people in every thread who ask, correct, but it’s pretty widely known by now. It’s no different than people previously having to ask what “CP” meant. Obviously, it’s not a problem and is generally considered to be worth the imperceptible amount of educating people to transition to the new term.

Personally, I don’t find it annoying to have to explain things, so feel free to ignore the questions and leave it to people like me who don’t struggle with the keyboard. What does it take? Seconds to type out the definition.

MonkderZweite,

I know the cp command but CP? Copy Protection? That’s DRM.

candybrie,

I think when transitioning, it’s a good idea to spell the whole phrase out for your first use and then put the acronym in parentheses. And I’m pretty sure the complaint wasn’t needing to explain stuff but needing to ask and then have someone explain. It’s better if communication is clear from the beginning.

Nindelofocho,

I think most platforms will block “CP” too and also that kinda fucks up talking about Cyberpunk

Nollij,

It’s also weird to see so many Mario levels refer to CP (Check Point)

atempuser23,

Bless you for not knowing. It’s the more accurate description of what was called child porn. It stands for child sexual abuse material.

STRIKINGdebate2,
@STRIKINGdebate2@lemmy.world avatar

I would like to extend my sincerest apologies to all of the users here who liked lemmy shitposting. I feel like I let the situation grow too far out of control before getting help. Don’t worry, I am not quitting. I fully intend on staying around. The other two deserted the community, but I won’t. DM me if you wish to apply for mod.

Sincerest thanks to the admin team for dealing with this situation. I wish I had linked in with you all earlier.

lwadmin,
@lwadmin@lemmy.world avatar

@Striker this is not your fault. You stepped up when we asked you to and actively reached out for help getting the community moderated. But even with extra moderators, this cannot be stopped. Lemmy needs better moderation tools.

rob_t_firefly,
@rob_t_firefly@lemmy.world avatar

Hopefully the devs will take the lesson from this incident and put some better tools together.

Whitehat93875,
@Whitehat93875@lemmy.world avatar

There’s a Matrix room for building mod tools here; maybe we should bring up this issue there, just in case they aren’t already aware.

WhiskyTangoFoxtrot,

Or we’ll finally accept that the core Lemmy devs aren’t capable of producing a functioning piece of software and fork it.

Bread,

It’s not easy to build a social media app, and forking it won’t make it any easier to solve this particular problem. Joining forces to tackle an inevitable problem is the only solution. The Lemmy devs are more than willing to accept pull requests for software improvements.

x1gma,

And who’s gonna maintain the fork? Even less developers from a split community? You have absolutely no idea what you’re talking about.

dandroid,

This isn’t your fault. Thank you for all you have done in regards to this situation thus far.

gabe,

Please, please, please do not blame yourself for this. This is not your fault. You did what you were supposed to do as a mod and stepped up and asked for help when you needed to, lemmy just needs better tools. Please take care of yourself.

Blaze,
@Blaze@discuss.tchncs.de avatar

It’s not your fault, thank you for your job!

Mr_Blott,

Definitely not your fault mate, you did what anyone would do, it’s a new community and shit happens

rob_t_firefly,
@rob_t_firefly@lemmy.world avatar

Thanks for your work. The community was appreciated.

Thief,

Thank you for your help. It is appreciated.

Touching_Grass,

Really feel for you having to deal with this.

Nerd02,
@Nerd02@lemmy.basedcount.com avatar

You don’t have to apologize for having done your job. You did everything right and we appreciate it a lot. I’ve spent the whole day trying to remove this shit from my own instance and understanding how purges, removals and pictrs work. I feel you, my man. The only ones at fault here are the sickos who shared that stuff, you keep holding on.

reverendsteveii,

You didn’t do anything wrong, this isn’t your fault and we’re grateful for the effort. These monsters will be slain, and we will get our community back.

Draconic_NEO,
@Draconic_NEO@lemmy.world avatar

It’s not your fault, these people attacked and we don’t have the proper moderation tools to defend ourselves yet. Hopefully in the future this will change though. As it stands you did the best that you could.

FlyingSquid,
@FlyingSquid@lemmy.world avatar

I love your community and I know it is hard for you to handle this but it isn’t your fault! I hope no one here blames you because it’s 100% the fault of these sick freaks posting CSAM.

GONADS125,

You’ve already had to take all that on; don’t add self-blame on top of it. This wasn’t your fault, and no reasonable person would blame you. I really feel for what you and the admins have had to endure.

Don’t hesitate to reach out to support services or speak to a mental health professional if you’ve picked up trauma from the shit you’ve had to see. There’s no shame in getting help.

Becoming,
@Becoming@lemmy.world avatar

As so many others have said, there’s no need for an apology. Thank you for all of the work that you have been doing!

The fact that you are staying on as mod speaks to your character and commitment to the community.

MsPenguinette,

You do a great job. I’ve reported quite a few shitheads there and it gets handled well and quickly. You have no way of knowing if some roach is gonna die after getting squashed or if they are going to keep coming back.

Poppa_Mo,

This is flat-out disgusting. It’s extremely questionable that someone has an arsenal of this crap to spread to begin with. I hope they catch charges.

Whitehat93875,
@Whitehat93875@lemmy.world avatar

deleted_by_author

Piecemakers3Dprints,
@Piecemakers3Dprints@lemmy.world avatar

You must realize the logical fallacy in that statement, right?

HawlSera,

See that’s the part of this that bothers me most… Why do they have so much of it? Why do they feel comfortable letting others know they have so much of it? Why are they posting it on an open forum?

The worst part is, there is not a single god damn answer to ANY of those that wouldn’t keep a sane person up at night… shudder

mayo,

I’m sure it’s not hard to find on the dark web. Child porn is one of those horrible things that is probably a lot more widespread than anyone wants to know.

I don’t really get why they are doing this though.

HawlSera,

I hate to get conspiratorial, but it’s possible Reddit paid some people to do this to snuff out the competition.

Isn’t Spez a pedophile?
