GorillasAreForEating

@[email protected]


GorillasAreForEating, (edited )

This round of Brent Dill drama is coming at an inopportune time; I’m probably going to be unavailable for the next two weeks.

Anyways, I’m not surprised Brent had a prior conviction from when he was 20, given that he posted about seeing an underage girl around that time. I haven’t found the background check post that they’re referring to yet.

I wonder what else we’d be able to find about these people if we looked into court records and the like.

GorillasAreForEating,

No, they’re able to grasp the near term risks, they just don’t want that to get in the way of making money because they know they’re unlikely to be affected.

GorillasAreForEating,

It’s worth noting that miricult.com went live about a year after Yudkowsky posted that.

GorillasAreForEating,

Yudkowsky is pretty open about being a sexual sadist

GorillasAreForEating,

I did not. Got any details?

Also FWIW I discovered this yesterday: archive.ph/SFCwS

No idea if it’s true, but even if so I don’t think it would exonerate him (though it would put Aella in a worse light)

GorillasAreForEating,

What tipped you off? The phrase “unholy union”?

Anyways the occultism stuff is pretty common among “post-rats”.

GorillasAreForEating,

Since they brought up Kathy Forth, I’d just like to remind everyone that within a few weeks of Kathy’s death it was revealed that they had in fact known about the accusations against Brent Dill.

GorillasAreForEating,

Naming Jude Law’s character in Gattaca “Eugene” was not very subtle.

GorillasAreForEating,

What’s “that whole eudaimonia thing from a while back”? (I’m familiar with the concept of eudaimonia in general, but I’m not sure what you’re referring to)

GorillasAreForEating, (edited )

Weirdly, rationalists also sometimes read this book and take all the wrong lessons from it.

Scott Alexander is a crypto-reactionary, and I think he reviewed it as a way to expose his readers to neoreactionary ideas under the guise of superficial skepticism, in the same manner as the anti-reactionary FAQ. The book’s author might be an anarchist, but a lot of the arguments could easily work in a libertarian context.

GorillasAreForEating,

Here’s the old sneerclub thread about the leaked emails linking Scott Alexander to the far right

Scott Alexander’s review of Seeing Like A State is here: slatestarcodex.com/…/book-review-seeing-like-a-st…

The review is mostly positive, but then it also has passages like this:

Well, for one thing, [James C.] Scott basically admits to stacking the dice against High Modernism and legibility. He admits that the organic livable cities of old had life expectancies in the forties because nobody got any light or fresh air and they were all packed together with no sewers and so everyone just died of cholera. He admits that at some point agricultural productivity multiplied by like a thousand times and the Green Revolution saved millions of lives and all that, and probably that has something to do with scientific farming methods and rectangular grids. He admits that it’s pretty convenient having a unit of measurement that local lords can’t change whenever they feel like it. Even modern timber farms seem pretty successful. After all those admissions, it’s kind of hard to see what’s left of his case.

and

Professors of social science think [check cashing] shops are evil because they charge the poor higher rates, so they should be regulated away so that poor people don’t foolishly shoot themselves in the foot by going to them. But on closer inspection, they offer a better deal for the poor than banks do, for complicated reasons that aren’t visible just by comparing the raw numbers. Poor people’s understanding of this seems a lot like the metis that helps them understand local agriculture. And progressives’ desire to shift control to the big banks seems a lot like the High Modernists’ desire to shift everything to a few big farms. Maybe this is a point in favor of something like libertarianism?

GorillasAreForEating, (edited )

the cell’s ribosomes will transcribe mRNA into a protein. It’s a little bit like an executable file for biology.

Also, because mRNA basically has root level access to your cells, your body doesn’t just shuttle it around and deliver it like the postal service. That would be a major security hazard.

I am not saying plieotropy doesn’t exist. I’m saying it’s not as big of a deal as most people in the field assume it is.

Genes determine a brain’s architectural prior just as a small amount of python code determines an ANN’s architectural prior, but the capabilities come only from scaling with compute and data (quantity and quality).

When you’re entirely shameless about your Engineer’s Disease

GorillasAreForEating,

Looking forward to LW articles with titles like “Ashkenazification via engineered viruses as a solution for African poverty: here’s why it might work”

GorillasAreForEating,

And yet the market is said to be “erring” and to have “irrationality” when it disagrees with rationalist ideas. Funny how that works.

GorillasAreForEating,

Old news obviously, but I think it’s worth documenting the organizations and dollar amounts

Even Steven Pinker is coming out against EA now: twitter.com/sapinker/status/1732114240666743102

GorillasAreForEating,

I intensely dislike him. I’d say his views rhyme with those of the rationalists, so to speak, which is why I think it’s noteworthy that even a guy like him is criticizing EA now.

GorillasAreForEating,

Yeah, that thought had occurred to me, which is why I want to expose as much of it as I can find.

GorillasAreForEating,

Evidently I need to pay more attention to the non-sneerclub sections of this site.

GorillasAreForEating, (edited )

I suppose when talking about science to a popular audience it can be hard not to make generalizations and oversimplifications, and if it’s done poorly that oversimplification can cross over into plain old inaccuracy (if I were being charitable to Yud, I’d say that’s what happened here).

To wit: even the “K’nex connector with 4 ports” model of carbon doesn’t really explain the bonding of aromatic molecules like benzene or carbon nanotubes; I’ve likewise seen people confidently make the generalization “noble gases don’t react”, apparently unaware of the existence of noble gas compounds.

GorillasAreForEating,

His argument, as I understand it, is that he knew about the covalent bonds between proteins but didn’t mention them because he was simplifying things for a lay audience, and that those covalent bonds don’t matter because they aren’t the “load bearing” elements in flesh.

There are two problems I see:

  1. His earlier statements suggest he actually had no knowledge of that whatsoever.
  2. I think his revised explanation is still wrong, because the extracellular matrix that holds cells together and connective tissue are composed largely of proteins that have these covalent crosslinks and rely on them for strength. When you tear a ligament it’s not just van der Waals and hydrogen bonds being broken; those alone would be far too weak.

GorillasAreForEating,

The so called “experts” say that spider silk is stronger than steel, but steel beams can hold up bridges while I can break a spider web with my little finger. Looks like the “experts” are wrong and spider silk isn’t very strong after all - probably because it’s made of proteins held together by weak van der Waals forces instead of covalent bonds.

GorillasAreForEating,

Thank you, that link is exactly what I was looking for (and also sated my curiosity about how Yudkowsky got involved with Bostrom and Hanson, I had heard they met on the extropian listserv but I had never seen any proof).

GorillasAreForEating,

He’s been doing interviews on podcasts. The NYT also recently listed “internet philosopher” Eliezer Yudkowsky as one of the key figures of the modern artificial intelligence movement.

Sadly he did not wear a fedora in his official NYT picture

GorillasAreForEating,

“Our goal is really to increase the scope and scale of civilization as measured in terms of its energy production and consumption.”

old and busted: paperclip maximizer

new hotness: entropy maximizer

GorillasAreForEating, (edited )

I highly suspect the voice analysis thing was just to confirm what they already knew, otherwise it would have been like looking for a needle in a haystack.

People on twitter have been speculating that someone who knew him simply ratted him out.

GorillasAreForEating,

lol, I just got lucky and happened to find out about the article about 20 minutes after it was published.

GorillasAreForEating,

I still find it amusing that Siskind complained about being “doxxed” when he used his real first and middle name.

GorillasAreForEating,

update: Verdon is now accusing another AI researcher of exposing him: twitter.com/GillVerd/status/1730796306535514472

GorillasAreForEating,

It’s like when he wore a fedora and started talking about 4chan greentexts in his first major interview. He just cannot help himself.

P.S. The New York Times recently listed “internet philosopher” Eliezer Yudkowsky as one of the major figures in the modern AI movement; this is the picture they chose to use.

GorillasAreForEating,

I probably should have just told people to skip the article prior to the part that I quoted, I agree most of it was very boring.

GorillasAreForEating,

Yeah, the fact that they said she was committed to “AI safety” (instead of “AI ethics”) was a misstep; Wired has published her take on “AI safety” before, but maybe they haven’t realized how contentious those two terms have become (or just forgot).

GorillasAreForEating,

He’s the one cultist who actually accomplished something

GorillasAreForEating,

I hate to say it, but even sneerclub can get a bit biased and tribal sometimes. He who fights with monsters and so on

I suspect watching the rationalists as they bloviate and hype themselves up and repeatedly fail for years on end has lulled people into thinking that they can’t do anything right, but I think that’s clearly not the case anymore. Despite all the cringe and questionable ethics, OpenAI has made a real and important accomplishment.

They’re in the big leagues now. We should not underestimate the enemy.

GorillasAreForEating, (edited )

The accomplishment I’m referring to is creating GPT/DALL-E. Yes, it’s overhyped, unreliable, arguably unethical and probably financially unsustainable, but when I do my best to ignore the narratives and drama surrounding it and just try out the damn thing for myself I find that I’m still impressed with it as a technical feat. At the very, very least I think it’s a plausible competitor to Google Translate for the languages I’ve tried, and I have to admit I’ve found it to be actually useful when writing regular expressions and a few other minor programming tasks.

In all my years of sneering at Yud and his minions I didn’t think their fascination with AI would amount to anything more than verbose blogposts and self-published research papers. I simply did not expect that the rationalists would build an actual, usable AI instead of merely talking about hypothetical AIs and pocketing the donor money, and it is in this context that I say I underestimated the enemy.

With regards to “mocking the promptfans and calling them names”: I do think that ridicule can be a powerful weapon, but I don’t think it will work well if we overestimate the actual shortcomings of the technology. And frankly sneerclub as it exists today is more about entertainment than actually serving as a counter to the rationalist movement.

GorillasAreForEating,

Very well said, thank you.

GorillasAreForEating,

I suppose the goalpost shifting is my fault: the original comment was about Sutskever, but I shifted to talking about OpenAI in general, in part because I don’t really know to what extent Sutskever is individually responsible for OpenAI’s tech.

also mock anyone who says it’s OK for them to act this way because they have a gigantic IQ.

I think people are missing the irony in that comment.

GorillasAreForEating,

The problem here is that “AI” is a moving target, and what “building an actual, usable AI” looks like is too. Back when OpenAI was demoing DOTA-playing bots, they were also building actual, usable AIs.

For some context: prior to the release of ChatGPT I didn’t realize that OpenAI had personnel affiliated with the rationalist movement (Altman, Sutskever, maybe others?), so I didn’t make the association, and I didn’t really know about anything OpenAI did prior to GPT-2 or so.

So, prior to ChatGPT, the only “rationalist” AI research I was aware of was the non-peer-reviewed (and often self-published) theoretical papers that Yud and MIRI put out, plus the work of a few ancillary startups that seemed to go nowhere.

The rationalists seemed to be all talk and no action, so really I was surprised that a rationalist-affiliated organization had any marketable software product at all, “AI” or not.

And FWIW, I was taught a different definition of AI when I was in college, but it seems like it’s one of those terms that gets defined in different ways by different people.

GorillasAreForEating,

I think there’s a non-ironic element too. Sutskever can be both genuinely smart and a weird cultist; just because someone is smart in one domain doesn’t mean they aren’t immensely foolish in others.
