I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub but I don’t think it got a proper post, and I think it deserves one.
As opposed to normalizing it by talking about it all the god damned time, for example by using it in your thought experiments instead of literally any other criminal compulsion.
I cannot wade through this, so I’m just scrolling around aimlessly.
About 10% of the time was doing laundry, groceries, packing, and cooking - and she has to do many of those things for herself anyways! At least this is on paid time, feels high impact, and means she’s not sitting in front of the computer all day.
Feels high impact wtf.
“First they came for one EA leader, and I did not speak out – because I just wanted to focus on making AI go well.
Then they came for another, and I did not speak out – because surely these are just the aftershocks of FTX, it will blow over.
Then they came for another, and I still did not speak out – because I was afraid for my reputation if they came after me.
Then they came for me - and I have no reputation to protect anymore.”
How very tasteful, a Niemöller snowclone Godwin. Truly people who party on the beach for charity and have hot tub meetings are the most oppressed.
Maybe it was because Alice was microdosing LSD nearly every day, sleeping just a few hours a night, and has a lifelong pattern of seeing persecution everywhere.
What an insane way to talk about a former employee, much less one living with you. Pro tip for real businesses: never do this. If you’re going to disparage someone like this, it’s a job for your lawyer and he’d better have receipts. Also don’t live with your employees and let them take acid on the job.
Have you ever made a social gaffe? Does the idea of somebody exclusively looking for and publishing negative things about you make you feel uneasy? Terrified?
(spooky hands)
I actually played this game with some of my friends to see how easy it was. I tried to say only true things but in a way that made them look like villains. It was terrifyingly easy. Even for one of my oldest friends, who is one of the more universally-liked EAs, I could make him sound like a terrifying creep.
📸 🤨
I could do this for any EA org. I know of so many conflicts in EA that if somebody pulled a Ben Pace on, it would explode in a similar fashion.
I wish you were joking but they literally say this in the post:
At that hourly rate, he spent perhaps ~$130,000 of Lightcone donors’ money on [investigating us]. But it’s more than that. When you factor in our time, plus hundreds/thousands of comments across all the posts, it’s plausible Ben’s negligence cost EA millions of dollars of lost productivity. If his accusations were true, that could have potentially been a worthwhile use of time - it’s just that they aren’t, and so that productivity is actually destroyed. […]
Even if it was just $1 million, that wipes out the yearly contribution of 200 hardworking earn-to-givers who sacrificed, scrimped and saved to donate $5,000 this year.
Think of how many souls could have been saved if you didn’t waste the church’s time investigating all those abusive priests! Truly the investigators are sinners on a staggering scale!
Do they address the allegation that Drew was sleeping with Alice and that Kat was harassing her about it? That was the most damning allegation in the original, wasn't it?
Ok, I’ve spent way too much time on this, but read the “Sharing Information on Ben Pace” section. As some of the comments point out, it could be written about Kat’s own reaction to the earlier article.
This is gonna be really helpful next time someone tells me straight up that EA and Rationalism are totally different things and just overlap by coincidence.
some of them are so afraid of paperclip generators while they don’t realize that they themselves can easily be made to believe horrible things if you just back it up with enough stats, papers and contrarian blog posts.
I think that’s why they’re so afraid of them. They think that a paperclip optimizer is the default outcome for an intelligence because it’s the outcome they’re pushing themselves towards.
I just can’t imagine the board saying he lied to them unless they have some way to back it up. If they made that up couldn’t they be on the wrong end of a wrongful termination lawsuit?
Please forgive me if I fail to address it in a sufficiently sensitive way, and know that this was not my intention. There is, of course, so much more to say about this, but I wanted to try and keep the post relatively short.
(Proceeds to write a 5000-word insensitive essay anyway)
Counting calories and abiding by your commitment to eat fewer of them is hard, hard enough that, while it worked for me, I can’t just recommend it to everyone. Like yeah no shit, eat less food. It’s really hard to admit that there are limits to what people can do with willpower alone, especially if you live in a subculture where you think that being galaxy-brained should allow you to do literally anything. There must be some reason, any reason, besides people not being fully 1000% in control of their actions.
What’s his proposed mechanism for it to suddenly start deriving all of biology from first principles?
He considers deriving stuff from first principles much more versatile than it actually is. That and he really believes in the possibility of using simulations for anything.
While I love blaming slatescott for things, I do think there’s maybe a deeper story to the fascination with addies than slatescott blogging about it once.
A lot of millennials were prescribed stimulants as kids, enough that we have some level of folk knowledge about them. In Adderall Risks he more or less admits to handing them out like candy and he is far from the only (lol ex) psychiatrist to do so.
The article, while clearly endorsing stimulants as a safe nootropic that everyone should take (and is good for the world now let me munch a few more pills 💊), is actually more of an apologia to convince people who are already using stimulants that no harm will come to them. Sure there’s the usual amount of discovering-an-apple-pie-from-scratch new atheist libertarian bloviating that obscures it, but he does that about everything.*
One funny aspect of his ‘stimulants are required for modern work’ argument is that he’s basically endorsing the social model of disability, though more recently he has decided that expressing ableism to own the libs is more important than being correct.
*Except if he wants to sneak in an idea without you thinking about it. Those will usually be the hardcore nrx ones.
I wasn’t expecting a serious treatment of this very silly idea, my mistake. I submit that it would cause enough difficult-to-diagnose bugs while just messing with it that you would never get into ‘but are the builds reproducible’ territory.
"AI alignment" really is just about working out their shared world sci fi collaboration (www.lesswrong.com)
an entirely vibes-based literary treatment of an amateur philosophy scary campfire story, continuing in the comments
"As always, pedophilia is not the same as ephebophilia." - Eliezer Yudkowsky, actual quote (www.lesswrong.com)
The SSC subreddit ponders the difference between Bayesianism and plain old bias (old.reddit.com)
"Successful people create companies. More successful people create countries. The most successful people create religions." (blog.samaltman.com)
From Sam Altman’s blog, pre-OpenAI
Holy shit, it could not be getting any dumber: Effective altruist philosopher defends longtermism from critics who deny utilitarian principles... with utilitarian principles. (globalprioritiesinstitute.org)
WOOOOOOO MORE AXE GRINDING LETS GO!...
Nonlinear seem to think this post replying to the accusations about them will make them look like the heroes (forum.effectivealtruism.org)
warning: seriously nasty narcissism at length...
LW: [Request]: Use "Epilogenics" instead of "Eugenics" in most circumstances - people just don't like the word itself yes that must be it. Coined by Aella. (www.lesswrong.com)
archive: archive.is/8NW7e
SSC Reddit having a normal one, OP fantasizing about murdering their child, I guess? (www.reddit.com)
Utilitarian brainworms or one of the many very real instances of a homicidal parent going after their disabled child? I can’t decide, but it’s a depressing read....
Let rationalists put GMO bacteria in your mouth (www.astralcodexten.com)
They’ve been pumping this bio-hacking startup on the Orange Site ™ for the past few months. Now they’ve got Siskind shilling for them.
Effective Altruism Funded the “AI Existential Risk” Ecosystem with Half a Billion Dollars (www.aipanic.news)
To what extent did Eliezer Yudkowsky invent the Effective Altruist movement? (forum.effectivealtruism.org)
I was wondering if someone here has a better idea of how EA developed in its early days than I do....
Who Is @BasedBeffJezos, The Leader Of The Tech Elite’s ‘E/Acc’ Movement? (www.forbes.com)
At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” …...
Prominent Women in Tech Say They Don't Want to Join OpenAI's All-Male Board (www.wired.com)
non-paywall archived version here: archive.is/ztech
Effective Obfuscation (newsletter.mollywhite.net)
Molly White is best known for shining a light on the silliness and fraud that are cryptocurrency, blockchain and Web3. This essay may be a sign that she’s shifting her focus to our sneerworthy friends in the extended rationalism universe. If so, that’s an excellent development. Molly’s great.
rationalist reaches the libertarian conclusion about child marriage (forum.effectivealtruism.org)
saw this pointed out here and felt it deserved its own post...
Pivot to AI: Replacing Sam Altman with a very small shell script - our writeup (davidgerard.co.uk)
we’re pretty sure he really did just get kicked for not being enough of an AI doomsday cultist
Eliezer "8.5% more cheerful about OpenAI going forward" with Mira Murati at the helm (nitter.net)
Not 7.5% or 8%. 8.5%. Numbers are important.
the effectively altruistic AI's fans are gonna pipe bomb a Planned Parenthood (forum.effectivealtruism.org)
Yud offers more weight loss discourse (nitter.net)
The Big Lie (nitter.net)
Cold viruses and bitcoin mining oh noes (nitter.net)
Blood Music was way cooler than this, just saying.
Good Guy Orange Site refuses to believe rationalists/EAs can be as bad as we're describing and is sure we're just exaggerating (news.ycombinator.com)
Big Yud and the Methods of Compilation (nitter.net)
In today’s episode, Yud tries to predict the future of computer science.
Rationalist saturated prediction market site Manifold has launched a dating app. "Even @aella_girl has signed up 😎💖" (nitter.net)