
TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

Best not to for exactly that reason but I know I wasn’t the only one who experienced it by any means!

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

😅 honestly I don’t know what else to say, the memory haunts me to this day. I think it was the point when I went from “huh, the rats make weirdly dumb mistakes considering they’ve made posts exactly about these kinds of errors” to “wait, there’s something really sinister going on here”

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

Good point with the line! Some of the best liars are good at pretending to themselves they believe something.

I don’t think it’s widely known, but it is known (there are old sneerclub posts about it somewhere), that he used to feed the people he was dating LSD and try to convince them they “depended” on him.

First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

Do I have hard proof he’s a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.

TerribleMachines, to sneerclub in I am extremely curious what the general take around here is on the Singulairty

Yeah, this post (edit: “comment”, the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do)

One of my takes is that the “AI alignment” garbage is way less of a problem than “human alignment,” i.e. how to get humans to work together and stop being jerks all the time. Absolutely wild that they can’t see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.

TerribleMachines, to sneerclub in I am extremely curious what the general take around here is on the Singulairty

Preach. As someone inside academia, the bullcrap is real. I very rarely read a paper that hasn’t got a major stats issue—an academic paper is only worth something if you understand it enough to know how wrong it is, or there’s plenty of replication/related work building on it, ideally both. (And that’s in a technical field with an objective measure of truth, but don’t let my colleagues in the humanities hear me say that—it’s not that their work is worthless, it’s just that it’s not reliable.)

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

It’s true, I’m terrible for it myself 😅

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

My perspective is a little different (from having met him): I think he genuinely believed a lot of what he said, at one point at least … but you’re pretty much spot on in all the ways that matter. He’s a really bad person, of the should-probably-be-in-jail-for-crimes kind.

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

As you were being pedantic, allow me to be pedantic in return.

Admittedly, you might know something I don’t, but I would describe Andrew Ng as an academic. These kinds of industry partnerships, like the one in that article you referred to, are really, really common in academia. In fact, it’s how a lot of our research gets done. We can’t do research if we don’t have funding, and so a big part of being an academic is persuading companies to work with you.

Sometimes companies really, really want to work with you, and sometimes you’ve got to provide them with a decent value proposition. This isn’t just AI research either, but very common in statistics, as well as biological sciences, physics, chemistry, well, you get the idea. Not quite the same situation in humanities, but eh, I’m in STEM.

Now, in terms of universities having the hardware: certainly these days there is no way a university will have even close to the same compute power that a large company like Google has access to. Though “even back in” 2012 (and well before), universities had supercomputers. It was pretty common to have a resident supercomputer that you’d use. For me—and my background’s originally in physics—back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.

It’s less that universities don’t have access to that compute power. It’s more that they just don’t run server farms. So we pay for it from Google or Amazon and so on, like everyone in the corporate world—except of course the companies that run those servers themselves (though they still bear the costs in lost revenue). Sometimes that’s subsidized by working with a big tech company, but it isn’t always.

I’m not even going to get into the history of AI/ML algorithms and the role of academic contributions there, and I don’t claim that industry played no role; but the narrative that all these advancements are corporate just ain’t true, compute power or no. We just don’t shout as loud or build as many “products.”

Yeah, you’re absolutely right that MIRI didn’t try any meaningful computation experiments that I’ve seen. As far as I can tell, their research record is… well, staring at ceilings and thinking up vacuous problems. I actually once (when I flirted with the cult) went to a seminar that the big Yud himself delivered, and he spent the whole time talking about qualia, and then when someone asked him if he could describe a research project he was actively working on, he refused to, on the basis that it was “too important to share.”

“Too important to share”! I’ve honestly never met an academic who doesn’t want to talk about their work. Big Yud is a big let down.

TerribleMachines, to sneerclub in LessWrong classics: “A Bayesian superintelligence, hooked up to a webcam of a falling apple, would invent general relativity by the third frame”

Love this!

Alas, if Yud took an actual physics class, he wouldn’t be able to use it as the poorly defined magic system for his OC doughnut-steal IRL Bayesian superintelligence fanfic.

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to “justify,” in the eyes of true believers, extreme measures to “protect” and “preserve” what Bostrom’s colleague, Toby Ord, describes as our “vast and glorious” future among the heavens.

Golly gee, those sure are all the ingredients for white supremacy these folks are playing around with. Good job there are no signs of racism… right, right!!!

In other news, I find it wild that big Yud has gone on an arc from “I will build an AI to save everyone” to “let’s do a domestic terrorism against AI researchers.” He should be careful—someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

(Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his “symbolic AI is the only way” mantra, but it’s pretty funny to me that the very group of people he used to say were incompetent are a “threat” to him now that they’re successful. Schoolyard bully stuff and whatnot.)

TerribleMachines, to sneerclub in ‘Before its too late buddy’: A Code Red Warning about TESCREALism

Having lurked for a long time, sneerclub is aimed at people who already have a good idea of the horror of TESCREAL groups—the point isn’t to attract new members, but catharsis for those of us who have had to deal with the TechBros/Fascists etc.

And for sneering. The sneering is important.

Getting real for a moment: for me, I used to be in deep with these people, and then my friends in the community committed suicide due to the rampant sexual abuse, and I got the hell out. Sneerclub was the only place the reports of assault were taken seriously, while the TESCREALs all closed ranks.

It’s all a ways back for me now, but I love this place. That there is a tiny part of the Internet out there that calls these people on their shit and sneers gives me so much peace.

(For sneerclubbers reading this; thanks folks, you’re the best! ✨️)

TerribleMachines, to sneerclub in this year's "hmm, actually this is bad" post: Eliezer Yudkowsky Is Frequently, Confidently, Egregiously Wrong

At the risk of being NSFW.

When I met Yud some years ago, I asked him how he goes about learning new things. His answer was roughly: “Scroll on Facebook until I find someone who has written about it.” Maybe he actually read some of the sources he references a long time ago, but I think he gave up on learning new things and has sat comfortably abusing his power over the community.

Egads these people are gross.
