What a bunch of monochromatic, hyper-privileged, rich-kid grifters. It’s like a nonstop frat party for rich nerds. The photographs and captions make it obvious:
The gang going for a hiking adventure with AI safety leaders. Alice/Chloe were surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space.
The gang doing pool yoga. Later, we did pool karaoke. Iguanas everywhere.
Alice and Kat meeting in “The Nest” in our jungle Airbnb.
Alice using her surfboard as a desk, co-working with Chloe’s boyfriend.
The gang celebrating… something. I don’t know what. We celebrated everything.
Alice and Chloe working in a hot tub. Hot tub meetings are a thing at Nonlinear. We try to have meetings in the most exciting places. Kat’s favorite: a cave waterfall.
Alice’s “desk” even comes with a beach doggo friend!
Working by the villa pool. Watch for monkeys!
Sunset dinner with friends… every day!
These are not serious people. Effective altruism in a nutshell.
According to a comment, she apparently claimed on Facebook that, due to her post, “around 75% of people changed their minds based on the evidence!”
After someone questioned how she knew it was 75%:
Update: I changed the wording of the post to now state: Around 75% of people upvoted the post, which is a really good sign*
And the * at the bottom says: Did some napkin math guesstimates based on the vote count and karma. Wide error bars on the actual ratio. And of course this is not proof that everybody changed their mind. There’s a lot of reasons to upvote the post or down vote it. However, I do think it’s a good indicator.
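For what it's worth, the "napkin math" is recoverable: on forums that show karma (upvotes minus downvotes) and a total vote count, the upvote share follows from simple arithmetic. A sketch, assuming that's what she did (the function name and numbers are mine, not hers):

```python
def upvote_fraction(karma: int, total_votes: int) -> float:
    """Estimate the share of upvotes from displayed karma and vote count.

    Assumes karma = upvotes - downvotes and total_votes = upvotes + downvotes,
    so upvotes = (total_votes + karma) / 2.
    """
    upvotes = (total_votes + karma) / 2
    return upvotes / total_votes

# e.g. karma of 50 across 100 total votes -> 75% upvoted
print(upvote_fraction(50, 100))  # 0.75
```

Which, of course, still only tells you who clicked an arrow, not who "changed their minds based on the evidence."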
She then goes on to talk about how she made the Facebook post private because she didn’t think it should be reposted in places where it’s not appropriate to lie and make things up.
At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” …...
Reading his timeline since the revelation is weird and creepy. It’s full of SV investors robotically pledging their money (and fealty) to his future efforts. If anyone still needs evidence that SV is a hive mind of distorted and dangerous group-think, this is it.
Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she’s shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that’s an excellent development. Molly’s great.
One of the easiest ways to get downvoted on the orange site is to say anything even mildly critical of Scott Alexander Siskind. It’s really amusing how much respect there is for him there.
Lots of fascinating links in this article. This link in particular was fascinating:
If you’re searching for Scott Siskind… I am Scott Siskind from Ann Arbor, Michigan. There used to be more things on this webpage. Right now I’m using it to spread the message that there are multiple statements being falsely attributed to me on the Internet. Somebody who doesn’t like me - I am not sure who, but I work in mental health and guess this is sort of a professional hazard - has been trying to systematically discredit me by posting racist and profanity-laden things under my name. Some of the comments make some effort to convince, like linking back to my website. The end result is that if you Google me to try to find out what I am like, you will probably end up seeing angry racist profanity-laden comments made under my name. These are not mine.
Does anyone know the backstory here? This reads to me like a “hackers ate my password” story – the kind of ass-covering someone might concoct after their racist writings accidentally leaked onto the internet.
EDIT: This seems to be related to the stuff Topher Brennan revealed? Except it was written many years before Topher’s revelations. It’s confusing…
“We have unusually strong marketing connections; Vitalik approves of us; Aella is a marketing advisor on this project; SlateStarCodex is well aware of us. We are quite networked in the Effective Altruism space. We could plausibly get an Elon tweet.”...
I wonder if he’s ever applied this advice to himself. Because one could argue that trauma was a significant factor in his obsession with transhumanism and the singularity.
When Yud’s younger brother died tragically at age 19, it clearly traumatized him. In this case, X was “the death of my little brother”. From this he learned Y: to be angry and fearful of death (“You do not make peace with Death!”). His fascination with the singularity can be seen in this light as a wish to cheat death, while his more recent AI doomerism is the singularity’s fatalistic counterpart: an eschatological distortion and acceleration of the reality that death comes for us all.
My attention span is not what it used to be, and I couldn’t force myself to get to the end of this. A summary or TLDR (on the part of the original author) would have been helpful.
What is it with rationalists and their inability to write with concision? Is there a gene for bloviation that also predisposes them to the cult? Or are they all just mimicking Yud’s irritating style?
This part of the first comment got an audible guffaw out of me:
I think that there’s been a failure to inhabit the least convenient possible world, and the general distribution over possible outcomes, and correspondingly attempt to move to the pareto-frontier of outcomes assuming that distribution.
Is it wrong to hope they manage to realize one of these libertarian paradise fantasies? I’d really love to see how quickly it devolves into a Mad Max Thunderdome situation.
Random blue check spouts disinformation about “seed oils” on the internet. Same random blue check runs a company selling “safe” alternatives to seed oils. Yud spreads this huckster’s disinformation further. In the process he reveals his autodidactically-obtained expertise in biology:
Are you eating animals, especially non-cows? Pigs and chickens inherit linoleic acid from their feed. (Cows reprocess it more.)
Yes, Yud, because that’s how it works. People directly “inherit” organic molecules totally unmetabolized from the animals they eat.
I don’t know why Yud is fat, but armchair sciencing probably isn’t going to fix it.
Take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: We can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Bogossian]. Not only clueless, but obedient enough to want to think in a certain way.
Also this:
If, as psychologists show, MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lower the visible variance. Probability and statistics confuse fools.
And:
If someone came up w/a numerical “Well Being Quotient” WBQ or “Sleep Quotient”, SQ, trying to mimic temperature or a physical quantity, you’d find it absurd. But put enough academics w/physics envy and race hatred on it and it will become an official measure.
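Taleb's point about filters raising the visible mean and shrinking the visible variance is just truncation selection, and it's easy to sanity-check with a quick simulation (illustrative numbers of my own, not anything from his post):

```python
import random
import statistics

random.seed(0)

# Population scores, normally distributed (mean 100, sd 15).
population = [random.gauss(100, 15) for _ in range(100_000)]

# Admissions filter: only people above a cutoff get in.
admitted = [s for s in population if s > 115]

print(statistics.mean(population), statistics.stdev(population))  # ~100, ~15
print(statistics.mean(admitted), statistics.stdev(admitted))      # higher mean, smaller spread
```

The filtered group looks "smarter" and more uniform purely because of the cutoff; no causal claim about what the test measures is needed.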
Been waiting to come back to the steeple of the sneer for a while. It’s good to be back. I just really need to sneer; this one’s been building for a long time....
In theory, a prediction market can work: even though many uninformed people are placing bets, their bad predictions tend to cancel each other out, while the subgroup of experts within the crowd converges on a good prediction. The problem is that this only holds under ideal conditions. As soon as the bettor pool is skewed by a biased subpopulation, the market stops working, and that’s exactly what happens with the rationalist crowd. The main thing rationalists get from prediction markets and wagers is unfounded confidence that their ideas have merit. Prediction markets also have a long history in libertarian circles, which probably helps explain why rationalists are so keen on them.
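That failure mode is easy to demonstrate: averaging many noisy but unbiased guesses converges on the truth, while a shared bias in part of the pool shifts the whole estimate, and adding more bettors doesn't fix it. A toy simulation (made-up numbers, not a model of any real market):

```python
import random
import statistics

random.seed(42)
TRUE_PROB = 0.30  # the "real" probability of the event

def clamp(p: float) -> float:
    """Keep a probability guess inside [0, 1]."""
    return min(max(p, 0.0), 1.0)

# Unbiased crowd: noisy individual guesses whose errors cancel out.
unbiased = [clamp(random.gauss(TRUE_PROB, 0.15)) for _ in range(10_000)]

# Skewed pool: half the bettors share the same systematic optimism (+0.25).
optimists = [clamp(random.gauss(TRUE_PROB + 0.25, 0.15)) for _ in range(5_000)]
skewed_pool = unbiased[:5_000] + optimists

print(round(statistics.mean(unbiased), 3))     # close to 0.30
print(round(statistics.mean(skewed_pool), 3))  # pulled well above 0.30
```

The noise averages out; the correlated bias doesn't. A market dominated by one ideological subculture is the second case, not the first.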
I used to enjoy Ariely’s books, and others like his, before I started reading better stuff. The whole behavioural economics genre seems to be a good example of content that holds up only as long as you don’t read anything more on the subject.
If Books Could Kill is great. I believe the first episode was about Freakonomics, another of those incredibly popular books built on behavioral economics. They took it apart.
Nonlinear seem to think this post replying to the accusations about them will make them look like the heroes (forum.effectivealtruism.org)
warning: seriously nasty narcissism at length...
Let rationalists put GMO bacteria in your mouth (www.astralcodexten.com)
They’ve been pumping this bio-hacking startup on the Orange Site ™ for the past few months. Now they’ve got Siskind shilling for them.
"it’s like the stages of a rocket ship and racism was the first stage" (awful.systems)
Image taken from this tweet: twitter.com/softminus/status/1732597516594462840...
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? (www.forbes.com)
Ex-OpenAI board member Tasha McCauley is deep state because she married Joseph Gordon-Levitt (www.lesswrong.com)
Found this because an article on Helen Toner popped up in my feed and I wanted to find out more, and boy did I find out more.
Effective Obfuscation (newsletter.mollywhite.net)
Eliezer "8.5% more cheerful about OpenAI going forward" with Mira Murati at the helm (nitter.net)
Not 7.5% or 8%. 8.5%. Numbers are important.
Cold viruses and bitcoin mining oh noes (nitter.net)
Blood Music was way cooler than this, just saying.
Good Guy Orange Site refuses to believe rationalists/EAs can be as bad as we're describing and is sure we're just exaggerating (news.ycombinator.com)
Let's walk through the uncanny valley with SBF so we can collapse some wave functions together (hachyderm.io)
Rationalist check-list:...
18+ Box Of Rocks on present day scientific racists, especially Scott Alexander Siskind (medium.com)
archive: archive.ph/rfsUY
Aella and company want to put GM bacteria in your mouth (www.lanternbioworks.com)
The rise of the new tech right (archive.ph)
Caught the bit on lesswrong and figured you guys might like.
If learning incorrect things is EY's only definition of trauma, his existence must be eternal torment. (nitter.net)
source nitter link...
Rationalist posts detailed catalogue of confirmed bad behaviour by EA/rationalist org Nonlinear. Second rationalist goes meta on first post: how can we even know anything, it's so unfair to Nonlinear
original post detailing mistreatment of employees...
Praxis, the Iron Pill advocates who want to start a neoreactionary charter city. Funded by friend of mankind Peter Thiel, of course (www.motherjones.com)
that time the rationalists decided that I, personally, was why anyone called them a "cult", and not, say, anything they said or did themselves (reddragdiva.tumblr.com)
you have to read down a bit, but really, I’m apparently still the Satan figure. awesome.
Yudkowsky advises his fellow Effective Altruists to take the FTX money and run. For the sake of charity, you understand. (forum.effectivealtruism.org)
Yud goes full seed oil-ist (nitter.net)
IQ is largely a pseudoscientific swindle (medium.com)
Taleb dunking on IQ “research” at length. Technically a seriouspost I guess.
"Yudkowsky is a genius and one of the best people in history." (archive.ph)
[All non-sneerclub links below are archive.today links]...
Anybody else genuinely hate the 'wagers' these guys gush about ALL THE TIME
I will never get over how the pretty girl in the photo attached is LITERALLY Roko of the Basilisk's example of the bad ending for humanity (awful.systems)
really: archive.ph/p0jPI...
Behavioural economics never smelled right to me (www.science.org)